Q: File path gives NullPointerException
On my JSP page I need to show a list of all files currently present in the /public folder.
It's a J2EE project with Spring & Hibernate. In my service class for getting the list of all files I have the following:
public class UploadService {
    private static List<File> list = new ArrayList<File>();

    public static List<File> getFileList() {
        File folder = new File("C:\\Users\\Admin\\Documents\\GitHub\\MyApp\\src\\main\\webapp\\resources\\public");
        File[] listOfFiles = folder.listFiles();
        list.addAll(Arrays.asList(listOfFiles));
        return list;
    }
}
This works. However, when I write down:
File folder = new File(Structure.defaultUrl+"/public/");
I receive a NullPointerException.
The defaultUrl is as following:
public class Structure {
/**
*
*/
public static String defaultUrl = "/MyApp";
In the ResourceHandler:
public void addResourceHandlers(ResourceHandlerRegistry registry) {
registry.addResourceHandler("/public/**").addResourceLocations("/resources/public/");
}
The weird part is, on the homepage where I have:
<a href ="<%=Structure.defaultUrl%>/public/AlgVW2011.pdf">
It does work. I'm really stuck now, does anyone know what I'm doing wrong?
Edit:
I searched for the problem; it's probably something in MvcConfiguration.java.
public class Initializer implements WebApplicationInitializer {
/**
*
* @param servletContext
* @throws ServletException
*/
@Override
public void onStartup(ServletContext servletContext) throws ServletException {
AnnotationConfigWebApplicationContext ctx = new AnnotationConfigWebApplicationContext();
ctx.register(MvcConfiguration.class);
servletContext.addListener(new ContextLoaderListener(ctx));
ctx.setServletContext(servletContext);
Dynamic servlet = servletContext.addServlet("dispatcher", new DispatcherServlet(ctx));
servlet.addMapping("/");
servlet.setLoadOnStartup(1);
}
}
A: You can't reference /public/ from your service code. The mapping is only accessible from the Servlet context after Spring's handlers resolve it to /resources/public/.
Use javax.servlet.ServletContext and Spring's ServletContextResource to get the resource in your webapp folder.
@Service
public class UploadService {

    // Non-static so the autowired ServletContext is available.
    @Autowired
    private ServletContext servletContext;

    private final List<File> list = new ArrayList<File>();

    public List<File> getFileList() throws IOException {
        Resource resource = new ServletContextResource(servletContext, "/resources/public");
        File folder = resource.getFile();
        File[] listOfFiles = folder.listFiles();
        if (listOfFiles != null) {
            list.addAll(Arrays.asList(listOfFiles));
        }
        return list;
    }
}
Q: My application run from Task Scheduler can't create a directory in my C# code
My application has a code section like this:
if (!Directory.Exists(datePath))
{
Directory.CreateDirectory(datePath);
}
If I double click and run exe, it works and creates folders.
But when I set my application up in Task Scheduler to run every day, the code doesn't work. No error, nothing.
Even if I right-click the task and run it manually, it doesn't work.
I use my other applications with Task Scheduler and they work fine. I don't understand why this one doesn't create directories...
A: Tying the comment on your question and your comment on my now-removed answer together:
You say the path is E:\Engineering\2014\December, and E: is a mapped drive to \\server\share.
I used server admin user and password for the Scheduled Task
That user has no mapped drives on your machine. Use the full UNC path:
string datePath = @"\\server\share\Engineering\...";
Where (and why) shoppers should look for 'Best' used vehicles before buying
By Mark Macesich on January 27, 2015
Photo: appletonusedvehicles.com
New cars, trucks and SUVs get a lot of attention.
It's no wonder: Automakers are expected to sell about 17 million of them this year. And, let's face it, who among us can't imagine cruising down the road in a shiny new vehicle.
For millions of Americans, though, a good used, or pre-owned, vehicle is a more likely purchase. In fact, used vehicles are expected to outsell new vehicles by more than 2 to 1 this year, at 38.5 million. Purchasing a used vehicle can make financial sense, given that used vehicles cost, on average, about half as much as new ones.
But that still leaves the question of which pre-owned vehicles to consider when you go shopping.
One of the better online sources for researching used cars is U.S. News & World Report's Best Cars, which presents highest-rated vehicles at several price points in 23 categories.
Looking for a small used car under $10,000 in your geographic area? Not a problem.
Best Cars ranks 39 vehicles in a sample ZIP code, ranging from the 2009 Scion tC at No. 1 to a 2010 Chevrolet Cobalt at No. 19 and a 2011 Chevrolet Aveo at No. 39. In between are brands including Toyota, Nissan, Hyundai, Volkswagen, Suzuki, Smart, Kia, Ford, Dodge, Mitsubishi and Pontiac.
How about a luxury hybrid car under $30,000? Best Cars offers 10 possibilities in our sample ZIP code.
And there are nine other car categories from mainstream to luxury and sports cars to wagons.
Best Cars also offers recommendations in truck (2 categories), SUV (7), van (1) and hybrid (3) groupings.
"Two- and three-year-old used vehicles are often the best values," according to ConsumerReports.org, in an article Why buy a used car? Finding the right balance between value and risk.
"Not only is the price lower than a comparable new car's, but continuing ownership expenses such as collision insurance and taxes are lower, and a two- or three-year-old used vehicle has already taken its biggest depreciation hit. … Buying used is a way to get a nicer car than you'd be able to afford new."
But that also means doing your homework at sites such as Best Car before you visit a dealership.
Then, when you're ready to purchase that new or used car, truck or SUV, make sure you check out Santander Consumer USA, which provides indirect financing through 14,000 dealerships nationwide – ask for us among your financing options – through Chrysler Capital purchase or lease programs, or through SCUSA's direct-to-consumer RoadLoans program.
Mark Macesich
Mark Macesich is an experienced writer and editor whose background includes six years in marketing communications with national auto lender Santander Consumer USA, where he works on several consumer/customer and business-to-business blogs and other customer- and dealer-facing content.
© 2020 Santander Consumer USA Inc. All Rights Reserved. NMLS Consumer Access ID 4239.
Chrysler Capital is a registered trademark of FCA US LLC and licensed to Santander Consumer USA Inc.
Chrysler, Dodge, Jeep, Ram, Mopar and SRT are registered trademarks of FCA US LLC.
ALFA ROMEO and FIAT are registered trademarks of FCA Group Marketing S.p.A., used with permission.
\section{Introduction}
For a machine learning system to gain user trust, either its reasoning should be transparent~\citep{rudin2018please,freitas2014comprehensible,lakkaraju2016interpretable,letham2015interpretable}, or it should be capable of justifying its decisions in human-interpretable ways~\citep{gilpin2018explaining,hendricks2016generating,huk2018multimodal,wu2018faithful}.
If a system is to interact with and justify its decisions to a large population of users, it needs to be cognizant of the variance users may have in their conceptual understanding over task-related concepts, i.e., an explanation could make sense to some users and not to others.
Although there has been work studying what affects users' ability to understand the decisions of machine learning models~\citep{DBLP:journals/corr/ChandrasekaranY17}, to the best of our knowledge existing work in explainable AI (XAI) does not explicitly reason about user understanding when generating explanations for model decisions.
As an additional complication, variations in understanding can only be inferred from observed behavior, as the system typically has no access to the internal state of its users.
Further, not only does understanding usually vary among the population of users, but how the system and its users perceive information about the world also differs significantly, as is the case between human eyes and the digital cameras artificial agents use for perception.
In this work, we focus on the ability of a machine learning system, i.e. an agent, to form a mental model of the task-related, conceptual understanding other communication partners have over their environment.
Particularly, we are interested in an agent that can form an internal, human-interpretable representation of other agents that encodes information about how well they would understand different descriptions presented to them.
Further, we would like our agent to be capable of forming this representation quickly for novel agents that it encounters.
Similar to~\citet{rabinowitz2018machine}, we wish to generate a representation of other agents solely from observed behavior.
Rather than implicitly encoding information about the agents' policies, we explicitly encourage our learned representation to encode information about their understanding of task-related concepts.
We accomplish this through a value function over concepts conditioned on observed agent behavior, yielding a human-interpretable representation of other agents' understanding.
As a testbed, we formulate an image reference game played in sequences between pairs of agents.
Here, agents are sampled from a population which has variations in how well they understand different visual attributes, necessitating a mental model over other agents' understanding of those visual attributes in order to improve the overall game performance.
For example, an agent might understand color attributes poorly, leading it to have trouble differentiating between images when they are described in terms of color.
We present ablation experiments evaluating the effectiveness of learned representations, and build simple models for the task showing that actively probing agents' understanding leads to faster adaptation to novel agents.
Further, we find that such a model can form clusters of agents that have similar conceptual understanding.
With this work, we hope to motivate further inquiry into models of conceptual understanding. Our exemplar task, an image reference game based on real-world image data, allows us to explore and observe the utility of agents that can adapt to others' understanding of the world.
\section{Related Work}
\textbf{Modeling Other Agents.} Inspired by~\citet{rabinowitz2018machine}, we would like to model another agent solely from observed behavior, focusing on forming representations which encode information about their understanding of task-related concepts.
Recent works have also employed a similar idea to other multi-agent settings.
In \citep{shu2018m}, an agent learns the abilities and preferences of other agents for completing a set of tasks; however, they assume that the identities of the agents the learner interacts with are given and that their representation is learned over a large number of interactions.
In contrast, we are interested in a learner that can quickly adapt to agents without having prior knowledge of who they are.
The model presented by \citet{shu2018interactive} learns how to query the behavior of another agent in order to understand its policy.
However, in their work only the environmental conditions vary, with the agent being modeled remaining the same. Here, we vary both agent and environment.
There also exists a body of work on computational models of theory of mind~\citep{butterfield2009modeling,warnier2012robot}, particularly employing Bayesian methods~\citep{baker2011bayesian,nakahashi2016modeling,baker2017rational}, although they use discrete state spaces rather than continuous ones.
\textbf{Meta Learning.} In meta-learning~\citep{schmidhuber1987,bengio1992optimization,finn2017model}, an agent is tasked with learning how to solve a family of tasks such that it can quickly adapt to new ones.
In our work, we are interested in an agent that can learn to quickly adapt to the conceptual understanding of novel gameplay partners whose understanding is correlated to other agents from the population (e.g. such as learning to identify when someone is color-blind).
\textbf{Emergent Language.} There have been a number of works presenting multi-agent systems where agents must collaboratively converge on a communication protocol for specifying goals to each other~\citep{choi2018compositional,evtimova2017emergent,foerster2016learning,havrylov2017emergence,jorge2016learning,lazaridou2016multi,lazaridou2018emergence,DasKMLB17,kottur2017natural}.
Whereas in these works the main focus is to learn an effective communication protocol and to analyze its properties, here we are interested in modeling other agents' understanding of the environment.
We therefore assume a communication protocol is given so that we can test agent modeling in isolation.
Further, many of these works assume that gradients are passed between agents. Here, we assume a discrete bottleneck in that agents only have access to observations of each other's behavior.
Although some domains have a population of agents \citep{mordatch2017emergence,cogswell2019compositional}, the tasks do not use real images and all agents either share a single policy or have equal capacity to understand task-related concepts.
We believe that incorporating an emergent communication component to our domain would be an exciting avenue for future work.
\section{Image Reference Game with Varied Agent Population}\label{section:3}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Imgs/Method/Pipeline.pdf}
\vspace{-5mm}
\caption{Our image reference game with varied agent population. In a given episode $k$, the speaker and listener encode the image pair $(x_t^k, x_c^k)$ using their perceptual modules $\phi _S, \phi _L$. The speaker selects a target image $x_t^k$ and an attribute $a_k$ to describe it using parameterized functions $\pi _S$ and $V$ conditioned on the image representations and agent embedding $h_{k-1}$. Given $a_k$, the listener guesses the target image. Finally, the speaker incorporates information about the listener into embedding $h_k$ given the reward $r_k$ received for using $a_k$ in that game.}
\label{fig:pipeline}
\end{figure}
In a multi-agent communication setting, it is generally best to send a message which maximizes the amount of task-related information, such as describing an image by appealing to its most discriminative attributes.
However, the recipient of the message may not be familiar enough with certain attributes, meaning that some messages are not useful to them despite being maximally informative.
In line with this motivation, we formulate an image reference game where agents must describe images to each other using visual attributes.
\myparagraph{Task definition.} In our visual reference game (see Figure \ref{fig:pipeline}) we have a single learner, referred to as the \textit{speaker}, who must learn to play sequences of episodes in an image reference game with gameplay partners, referred to as \textit{listeners}, that are randomly sampled from a population of agents.
Both the speaker and the listeners are given a pair of images in each episode $k$, and
the speaker selects an image $x_t^k$ to serve as the target, with the second image $x_c^k$ serving as confounder.
The speaker must then generate a description, in the form of an image attribute $a_k \in A$, which the listeners use to compare the two images before guessing the target's identity.
As listeners are effectively black-boxes to the speaker, it can be difficult to disentangle potential sources of error when they behave unexpectedly.
Namely, when a listener guesses incorrectly, it is difficult to tell whether the mistake was due to a lack in its understanding of 1) the game, 2) the language used to communicate, or 3) the attribute (i.e. concept) used to describe the image.
In this work, we focus on the third option (conceptual understanding), and isolate this problem from the other two by assuming that the speaker can communicate attribute identities noiselessly,
and that listeners are all rational game players sharing a static gameplay policy.
\myparagraph{Perceptual Module.} An agent's perceptual module $\phi$ encodes images into a list of shared concepts weighted by their relevance to the image.
Specifically, the perceptual module first extracts image features using a CNN.
The image features are further processed with a function $f$ that predicts attribute-level features $\phi (x) = f(\text{CNN}(x))$,
where $\phi (x)\in [0,1]^{|A|}$, and $|A|$ is the number of visual attribute labels in an attribute-based image classification dataset.
Every element in $\phi (x)$ represents a separate attribute, such as ``black wing'', giving us a disentangled representation.
The speaker and listener policies reason about images in the attribute space $A$; we are interested in disentangled representations because they will allow for the speaker's mental model of listeners' understanding to be human interpretable.
In our setting, the speaker is given a separate module $\phi _S$, while all listeners share a single module $\phi _L$.
\subsection{Modeling Listener Populations}\label{sec:listeners}
If a listener has a good understanding of an attribute, we would expect that it would be able to accurately identify fine-grained differences in that attribute between a pair of images.
For example, someone with a poor understanding of the attribute ``red'' may not be able to distinguish between the red in a tomato and the red in a cherry, although they might be capable of distinguishing between the redness of a fire truck and that of water.
Following this intuition, we generate a population of listeners $L=\{(\delta _l, p_l)\}$, where each listener $l\in L$ is defined by a vector of thresholds $\delta _l \in [0,1]^{|A|}$ and a vector of probabilities $p_l \in [0,1]^{|A|}$.
Given an image and attribute feature pair $\left(\phi _L (x_t^k), \phi _L (x_c^k)\right)$, the listener $l$ first computes the difference between the attribute features $\phi _L$ of image $x_t$ and $x_c$ for attribute $a$:
\begin{equation}
z_l ^ a = \phi _L^a (x_t^k) - \phi _L^a (x_c^k) .
\end{equation}
Using its attribute-specific threshold $\delta _l ^a$, if $|z_l ^a| < \delta _l ^ a$, then the listener does not understand the concept well enough and will choose the identity of the target image uniformly at random.
Conversely, if $|z_l ^a| \geq \delta _l ^a$, then the listener will guess rationally with probability $p_l^a$ and randomly with probability $(1 - p_l^a)$.
Here, a rational guess $g = \argmax _{x\in \{x_t^k, x_c^k\}} \phi _L^a (x)$ means choosing the image which maximizes the value of the attribute $a$.
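The listener's decision rule can be sketched as follows. This is a minimal illustration assuming attribute features are plain lists of floats in $[0,1]$; the function and argument names are ours, not from any released code:

```python
import random

def listener_guess(phi_t, phi_c, a, delta, p, rng=random):
    """Guess the target given attribute features of the target/confounder pair.

    phi_t, phi_c: attribute feature vectors for target and confounder images
    a: index of the attribute named by the speaker
    delta, p: the listener's per-attribute thresholds and rationality probabilities
    Returns 't' (target) or 'c' (confounder).
    """
    z = phi_t[a] - phi_c[a]
    # Difference below threshold: attribute not understood -> uniform random guess.
    if abs(z) < delta[a]:
        return rng.choice(['t', 'c'])
    # Otherwise guess rationally with probability p[a] ...
    if rng.random() < p[a]:
        return 't' if phi_t[a] >= phi_c[a] else 'c'
    # ... and randomly with probability 1 - p[a].
    return rng.choice(['t', 'c'])
```

With a zero threshold and rationality probability 1, the rule reduces to picking whichever image maximizes the named attribute, matching the $\argmax$ above.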
To simplify the setup, we specify a total of two different levels of understanding. An agent can either understand an attribute, i.e. $u = (\delta, p)$, or not understand an attribute, i.e. $\bar{u} = (\bar{\delta}, \bar{p})$. For $u$, $\delta$ is small and $p$ is set to $1$, respectively meaning that attributes are easily understood and the agent always plays rationally on understood attributes.
Conversely, $\bar{u}$ specifies a high value for $\bar{\delta}$ and $\bar{p}$ is lower than $1$, such that an attribute that is not understood rarely leads to rational gameplay.
To form a diverse population of listeners, we create a set of clusters $C$ where each cluster is defined by the likelihood of assigning either $u$ or $\bar{u}$ to each individual attribute. Thus, listeners sampled from the same cluster will have correlated sets of understood and misunderstood attributes, while remaining diverse.
\subsection{Modeling the Speaker}\label{sec:speaker}
In a given sequence, the speaker plays $N$ practice episodes, each consisting of a single time-step, where the purpose is to explore and learn as much about the understanding of the listener as possible, purely from observed behavior.
During the $k$'th game in a sequence with a given listener, the speaker first encodes the image pair with $\phi _S$.
From the previous $k-1$ games, the speaker also has access to an agent embedding $h_{k-1}$, which encodes information about the listener.
The speaker uses an attribute selection policy to select an attribute $a_k$ for describing the target image.
After the listener guesses, the reward $r_k$ from the game is used to update the agent embedding into $h_k$.
After the practice episodes, $M$ evaluation episodes are used to evaluate what the speaker has learned.
\myparagraph{Agent Embedding Module.} To form a mental model of the listener in a given sequence of episodes, the speaker makes use of an agent embedding module.
This module takes the form of an LSTM~\citep{hochreiter1997long} which incorporates information about the listener after every episode, with the LSTM's hidden state serving as the agent embedding.
Specifically, after selecting an attribute $a_k$ and receiving a reward $r_{k}\in \{-1, 1\}$, a one-hot vector $o_k$ is generated, where the index of the non-zero entry is $a_k$ and its value is $r_k$.
The agent embedding $ h_k = \text{LSTM}(h_{k-1}, o_k)$ is then updated by providing $o_k$ to the LSTM.
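The construction of $o_k$ can be sketched as below; this is a hypothetical helper (the embedding update itself is a learned LSTM, not shown):

```python
def make_observation(a_k, r_k, num_attributes):
    """Build the signed one-hot vector o_k summarizing episode k.

    The entry at the chosen attribute's index a_k holds the reward
    r_k in {-1, 1}; all other entries are zero.
    """
    assert r_k in (-1, 1)
    o_k = [0.0] * num_attributes
    o_k[a_k] = float(r_k)
    return o_k
```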
\myparagraph{Attribute Selection Policies.} The speaker has access to two parameterized functions, $V(s_k,a_k)$ and $\pi _S(s_k,a_k)$, represented by multi-layer perceptrons.
The speaker uses these functions to select attributes during the $N$ practice and $M$ evaluation episodes, where $s_k = \left[\phi (x_t^k) - \phi (x_c^k); h_k\right]$ is a feature generated by concatenating the image-pair difference and agent embedding.
We estimate the value of using each attribute to describe the target image, i.e. $V(s_k,a_k): \mathcal{R}^{d}\times \mathcal{A}\to \mathcal{R}$ using episodes from both the practice and evaluation phases optimizing the following loss:
\begin{equation}
\mathcal{L}_V = \frac{1}{N+M} \sum _{N+M} \text{MSE}(V(s_k,a_k), r_k)
\end{equation}
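A minimal sketch of this loss, assuming the value predictions $V(s_k, a_k)$ and rewards $r_k$ are collected as plain lists over the $N+M$ episodes:

```python
def value_loss(predictions, rewards):
    """Mean squared error between predicted values V(s_k, a_k) and observed
    rewards r_k, averaged over the N + M practice and evaluation episodes."""
    assert len(predictions) == len(rewards)
    return sum((v - r) ** 2 for v, r in zip(predictions, rewards)) / len(predictions)
```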
As $V$ approximates the value of each attribute within the context of a listener's embedding and an image pair, it directly provides a human-interpretable representation of listeners' understanding.
Therefore, every model presented uses it greedily to select attributes during evaluation games.
The purpose of practice episodes is to generate as informative an agent embedding as possible for $V$ to use during evaluation episodes.
Therefore, speakers differ in how they select attributes during practice episodes, probing listeners' understanding with different strategies.
One strategy is to use an attribute selection policy $\pi _S$, trained with policy gradient~\citep{sutton2000policy}, which directly maps to probabilities over attributes. In the following, we describe different attribute selection strategies used during practice episodes.
\textbf{a. Epsilon Greedy Policy.} For this selection policy, we simply either randomly sample an attribute with probability $\epsilon$ or greedily choose the attribute $a_k = \argmax _{a\in A} V(s_k, a)$ using $V$.
\textbf{b. Active Policy.} The active policy is trained using policy gradient:
\begin{equation}
\mathcal{L}_a = \frac{1}{N} \sum _N -R\log \pi _S (s_t, a_t) \text{ with } R = -\frac{1}{M}\sum _M \text{MSE}(V(s_k, a_k), r_k)
\end{equation}
where the reward ($R$) for the policy is a single scalar computed from the evaluation episode performance.
This encourages the policy to maximize the correctness of the reward estimate function $V$ during evaluation episodes, requiring the formation of an informative agent embedding during the practice episodes.
Note that when optimizing the active policy $\pi _S$, gradients are not allowed to flow through $V$.
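The active-policy objective above can be sketched as follows, treating the practice-episode log-probabilities and the evaluation-episode value predictions as precomputed constants (so no gradient flows through $V$, per the note above); names are illustrative:

```python
def active_policy_loss(log_probs, eval_predictions, eval_rewards):
    """REINFORCE-style loss for the active probing policy.

    The scalar reward R is the negative mean squared error of the value
    function V on the M evaluation episodes; the loss averages
    -R * log pi_S(s_k, a_k) over the N practice episodes.
    """
    M = len(eval_rewards)
    R = -sum((v - r) ** 2 for v, r in zip(eval_predictions, eval_rewards)) / M
    N = len(log_probs)
    return sum(-R * lp for lp in log_probs) / N
```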
\section{Experiments}
In the following, we first evaluate the effects of using different attribute selection strategies during practice episodes and then the quality of agent embeddings generated by each model. We use the AwA2~\citep{xian2018zero}, SUN Attribute~\citep{patterson2014sun}, and CUB~\citep{WahCUB_200_2011} datasets.
Unless stated otherwise, the listener population consists of 25 clusters, each with 100 listeners.
We use two variants of the perceptual module, ResNet-152~\citep{he2016deep}, fine-tuned for attribute-based classification with an ALE~\citep{akata2013label} head, and PNASNet-5~\citep{liu2018progressive} with an attribute classifier head.
Both ResNet and PNASNet-5 are pre-trained on ImageNet~\citep{deng2009imagenet} and fine-tuned for the attribute-based image classification task.
Note that unless stated otherwise, in each experiment both the speaker and listeners use the same perceptual module, i.e. $\phi _S =\phi _L$.
For all curves we plot the average over 3 random seeds, with error curves representing one standard deviation.
We use the standard splits for CUB and SUN, but make our own split for AwA2 in order to have all classes represented in both train and test.
The training splits are used for learning speaker parameters; we present performance on the test splits, using the same splits for each seed.
We sample target and confounder images from the same dataset split.
Listener clusters $C$ are shared across train and test but a novel population of listeners is sampled at test time\footnote{Code with full specifications for experiments may be found at: \url{https://github.com/rcorona/conceptual\_img\_ref}}.
\subsection{Policy Comparison}\label{sec:policy_comparison}
We first compare the performance of the Epsilon Greedy and Active policies described in Section \ref{sec:speaker} against three baselines, the Random Agent, Reactive, and Random Sampling policies.
Among the baselines, the Random Agent policy simply always selects an attribute at random to describe images.
The Reactive policy, at the beginning of each set of $N+M$ episodes, randomly selects an attribute.
It continues using this attribute for each episode, only sampling a different attribute whenever it encounters a negative reward, keeping track of which attributes it has used.
This policy is meant as a sanity check against a degenerate strategy of only using the LSTM to remember which attributes have worked and which have not, without incorporating useful information about the listener's conceptual understanding. Finally, the Random Sampling baseline selects random attributes during practice episodes, and then follows a greedy strategy over $V$ during evaluation episodes.
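As a sketch of how the Reactive baseline might be implemented (the class and method names are ours, not from the released code):

```python
import random

class ReactivePolicy:
    """Baseline: pick one attribute and stick with it, sampling a fresh,
    previously unused attribute only after a negative reward."""

    def __init__(self, num_attributes, rng=random):
        self.unused = list(range(num_attributes))
        self.rng = rng
        self.current = self.rng.choice(self.unused)
        self.unused.remove(self.current)

    def select(self):
        return self.current

    def observe(self, reward):
        # Only a negative reward triggers a switch to an unused attribute.
        if reward < 0 and self.unused:
            self.current = self.rng.choice(self.unused)
            self.unused.remove(self.current)
```

Because the policy conditions only on its own reward history, it cannot encode anything about which concepts the listener actually understands, which is exactly what the sanity check is meant to isolate.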
\begin{figure}[t]
\centering
\includegraphics[width=0.295\textwidth]{Imgs/Exp2/ALE_CUB.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp2/ALE_AWA.pdf}
\includegraphics[width=0.26\textwidth,trim={2cm 0 0 0},clip]{Imgs/Exp2/ALE_SUN.pdf}
\includegraphics[width=0.15\textwidth]{Imgs/Exp2/legend.pdf} \vspace{-2mm}
\caption{A comparison of average test-set performance (Avg. Reward) for different attribute selection policies vs. the number of practice games. All agents learn from the listener's responses, i.e. using an embedding module, except for the random agent, which always acts randomly. With an increasing number of games, the agent observes more responses, providing more information about the listener's conceptual understanding.}
\label{fig:exp2_test}
\end{figure}
The performance of these policies on the test set is presented in Figure \ref{fig:exp2_test} which shows that the Epsilon Greedy and Active selection policies both outperform the Reactive baseline, suggesting that the agent embedding is encoding information about the conceptual understanding of listeners.
After a large number of games, we would expect the performance of the Epsilon Greedy, Active and Random Sampling policy to be the same because at some point the speaker agent has learned about all the listener's understood and misunderstood attributes. By comparing against the Random Sampling policy, we can conclude that both the Epsilon Greedy and Active policies can learn more efficient strategies that identify the misunderstood attributes within the first 20 games, at least five times faster than the Random Sampling policy. This corroborates the positive effect of encouraging policies to query information that helps the speaker form a mental model of the listener.
\begin{figure}[t]
\centering
\includegraphics[width=0.295\textwidth]{Imgs/Exp1/CUB.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp1/AWA.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp1/SUN.pdf}
\includegraphics[width=0.15\textwidth]{Imgs/Exp1/legend.pdf} \vspace{-2mm}
\caption{Ablation study on the importance of the agent embedding module. Average reward of the Epsilon Greedy policy on the test set as the number of practice episodes played increases. We evaluate performance with two different perception modules (ALE, PNAS), with the embedding module and without it (baseline).}
\label{fig:exp1_ablation}
\end{figure}
\subsection{Evaluating Agent Embedding}
Here we present an ablation study to investigate the benefit of using agent embeddings when playing the game, training an epsilon-greedy policy for each dataset until convergence with (Embeddings) and without (Baseline) agent embeddings.
Models without agent embeddings are given zero vectors, $h_k = 0$, instead of agent embeddings as input for the attribute selection policies.
In these experiments, the speaker and listeners share the same perception module; we test performance for both the ALE and PNAS perceptual modules.
Intuitively, a speaker will improve its performance over the game sequence if it encodes useful information about the listener, since it will help it avoid using attributes which the listener does not understand well.
In Figure \ref{fig:exp1_ablation}, we show the average reward at different intervals of the game sequence.
Using an agent embedding module significantly improves the performance of the speaker over time in all cases.
Most importantly, performance improves as the number of games increases, showing that a speaker using an agent embedding module can quickly adapt to individual listeners from experience to avoid using misunderstood attributes and, thus, achieve a higher average reward.
\begin{figure}[t]
\centering
\includegraphics[width=0.295\textwidth]{Imgs/Exp5/CUB.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp5/AWA.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp5/SUN.pdf}
\includegraphics[width=0.15\textwidth]{Imgs/Exp5/legend.pdf} \vspace{-2mm}
\caption{Variation of information ($VI$) of agent clusters $C'$ compared to ground-truth cluster assignments $C$. We present $VI$ for different policies as the number of practice games increases, lower is better. Cluster assignments $C'$ are obtained via K-Means ($k=|C|$) on agent embeddings from 50K test set sequences for each policy. Random Clusters (baseline) assigns each embedding to a random cluster. Each policy is evaluated using two different perception modules (ALE, PNAS).}
\label{fig:exp5_cluster}
\end{figure}
\subsection{Evaluating Cluster Quality}
Although we have shown that agents with an agent embedding module achieve better performance, these results do not necessarily imply that speakers with memory develop an informative mental model over the conceptual understanding of the listeners.
In order to test this, we perform an additional experiment on the trained speaker models.
Specifically, we play roughly 50K sequences on the test set in order to generate a dataset of agent embeddings.
We then perform K-Means clustering on these embeddings with $k=|C|$ (i.e., the number of listener clusters in the population) to obtain cluster assignments $C'$ and compare them to the ground-truth listener cluster assignments $C$.
To evaluate the cluster quality, we use the variation of information (VI) metric~\citep{meilua2003comparing}:
\begin{equation}
VI(C,C') = H(C) + H(C') -2I(C,C')
\end{equation}
Here, $C$ and $C'$ are two different cluster assignments, $H$ is the entropy, and $I$ is the mutual information.
Intuitively, the $VI$ measures how much information is lost or gained by switching from clustering $C$ to $C'$.
The more informative agent embeddings are about listeners' understanding, the greater the correlation will be between the inferred cluster and the ground-truth cluster.
Figure \ref{fig:exp5_cluster} shows clustering performance for all parameterized policies as the number of practice games increases per sequence.
We additionally compare this performance to a random cluster assignment baseline.
Firstly, we note that every policy outperforms the random assignment baseline.
The Epsilon Greedy and Active policies achieve nearly identical clustering performance, suggesting that simply optimizing for reward yields embeddings as informative as those obtained by explicitly encouraging the policy to maximize the value function's accuracy.
Finally, the Random Sampling baseline converges much more slowly, corroborating the idea that a more directed exploration of listeners' capabilities proves useful. Given the significant improvement over random cluster assignments, we conclude that the speaker agent learns an embedding that clusters the listeners similarly to the ground truth. This suggests that the agent not only learns from previous games, but also forms a more general representation of listener groups with similar conceptual understandings.
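The variation of information from the equation above is straightforward to compute directly from two label vectors. The sketch below uses toy cluster assignments rather than the paper's embeddings:

```python
from collections import Counter
from math import log

def entropy(labels):
    """Shannon entropy H of a cluster assignment."""
    n = len(labels)
    return -sum(c / n * log(c / n) for c in Counter(labels).values())

def mutual_information(a, b):
    """Mutual information I between two cluster assignments."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    return sum(c / n * log(c * n / (pa[x] * pb[y]))
               for (x, y), c in Counter(zip(a, b)).items())

def variation_of_information(a, b):
    """VI(C, C') = H(C) + H(C') - 2 I(C, C'); lower means more similar."""
    return entropy(a) + entropy(b) - 2 * mutual_information(a, b)

# A relabeled copy of the same partition has VI close to 0, while a
# partition independent of the first pays the full entropy cost.
c_true = [0, 0, 1, 1, 2, 2]
c_same = [1, 1, 2, 2, 0, 0]   # identical partition, different labels
c_rand = [0, 1, 0, 1, 0, 1]   # independent of c_true
```

Note that $VI$ is invariant to label permutations, which is why K-Means cluster indices can be compared against ground-truth indices without any matching step.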
\begin{figure}[t]
\centering
\includegraphics[width=0.295\textwidth]{Imgs/Exp4/ALE-PNAS_CUB.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp4/ALE-PNAS_AWA.pdf}
\includegraphics[trim={2cm 0 0 0 },clip,width=0.26\textwidth]{Imgs/Exp4/ALE-PNAS_SUN.pdf}
\includegraphics[width=0.15\textwidth]{Imgs/Exp4/legend.pdf} \vspace{-2mm}
\caption{Average test set performance over different evaluation intervals for Epsilon Greedy and a random baseline. Here we test giving the speaker and listener population different perceptual modules (speaker uses ALE, listener uses PNAS).}
\label{fig:exp3_test}
\end{figure}
\subsection{Evaluating Different Perceptual Modules}
If our speaker is to interact with a varied population of agents, it not only needs to be cognizant that those it interacts with could have varying levels of understanding; the population itself could also have inherently different machinery for perceiving the world, as is the case between humans and machines.
Therefore, we repeat the experiment from section \ref{sec:policy_comparison} with the Epsilon Greedy policy, and give the speaker and the listener population different perceptual modules.
Specifically, in Figure \ref{fig:exp3_test}, we show test performance when assigning ALE to the speaker and PNAS to the listeners, compared to a speaker which randomly selects attributes.
We observe a drastic change in performance, which suggests that the difficulty of the problem significantly increases when the speaker and listeners have fundamentally different perception.
Notice, however, that the performance of the Epsilon Greedy policy still significantly outperforms the random baseline.
Further, particularly in the case of the Animals with Attributes dataset, the Epsilon Greedy speaker is still able to improve its performance as the number of episodes increases.
This motivates further work in models that are capable not only of reasoning about conceptual understanding but also of adapting to fundamental differences in perception.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Imgs/QualitativeExample.pdf} \vspace{-5mm}
\caption{Qualitative examples of the Epsilon Greedy agent on CUB interacting with a color-blind listener. Red (green) indicates an incorrect (correct) pick by the listener. We show examples where the speaker loses the first game due to selecting a discriminative color attribute. Even though color attributes are objectively more discriminative, in game 10 the speaker communicates color attributes less frequently. Finally, at convergence, i.e. game 100, the speaker prominently mentions shape-based attributes or non-color patterns.}
\label{fig:qualitative}
\end{figure}
\subsection{Qualitative Example}
To provide an illustrative example of our reference game and the behavior of the agents, we train an Epsilon Greedy policy on the CUB dataset with 5 listener clusters, pertaining to the 5 attribute types found in the dataset (i.e. color, shape, size, pattern, and length).
Each cluster in the listener population has a generally poor understanding of the attribute type it is assigned (e.g. the color cluster is color-blind). We visualize the center crop of the images as presented to both the speaker and the listener populations.
In Figure \ref{fig:qualitative}, we show sequences of games with color-blind listeners, where we can observe how the speaker adapts its strategy as it learns more about its gameplay partner -- specifically, it adapts to using non-color attributes even in cases where color attributes would generally be most discriminative. In the first game, the speaker refers to objectively very discriminative color attributes such as brown back and rufous belly (columns 1 and 3). By game 10, the speaker already chooses color-invariant patterns over color attributes for some of the color-blind listeners, e.g. pointing out a spotted belly pattern over orange legs (column 1). After 100 games, we observe that the speaker almost always refers to non-color attributes, such as the duck-like shape or the presence of an eyebrow (columns 1 and 2), because it leads to a higher average reward for color-blind listeners.
\section{Conclusion}
In this work, we presented a task in which modeling the understanding that other agents have over concepts is necessary in order to succeed.
Further, we provide a formulation for an agent that is capable of modeling other agents' understanding and can represent it in a human-interpretable form.
We believe that the ability to perform this kind of reasoning will allow XAI systems to tailor their explanations to the specific users with whom they interact.
Learned agent embeddings can allow us to recover a clustering over other agents' conceptual understanding, which is a promising result to further tie this information into explanations.
For example, by having explanations that are fitted to each cluster, generated explanations would be more easily digestible by users of the system.
Further, we show that naively modeling this type of reasoning is not sufficient for cases where the perceptual machinery of the learner and the population is fundamentally different.
\myparagraph{Acknowledgements} This work has received funding from the ERC under the Horizon 2020 program (grant agreement No. 853489), DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and DARPA XAI program. R. Corona was supported in part by the Fulbright U.S. Student Program.
\bibliographystyle{abbrvnat}
\end{document}
\section{Introduction}
At present, black hole physics is generally considered one of the most effective ways to explore quantum gravity. Especially with the pioneering discovery of Hawking and Bekenstein about the temperature and entropy of the black hole \cite{Hawking1975,Hawking1983,Bekenstein1973,Bardeen1973}, general relativity, quantum mechanics and statistical physics have been closely linked together, making it possible for us to glimpse the tip of the iceberg of quantum gravity. Thermodynamics, which rests on general principles spanning a wide range of pure fluid physical systems, is now applied to black hole systems extensively and successfully \cite{Wald2001}. One of the most prominent developments is the introduction of the extended phase space \cite{Kastor2009,Dolan2011a,Dolan2011b}, which establishes a close relationship between the charged AdS black hole and the van der Waals fluid \cite{Kubiznak2012,Spallucci2013}. Black holes exhibit abundant phase transitions and critical behaviors in such an extended phase space \cite{Carlip2014,Kubiznak2017,Altamirano2014}.
Recently, the theory of the thermodynamics geometry \cite{Ruppeiner1995,Ruppeiner2007,Ruppeiner2014,Ruppeiner2010} is widely applied to the thermodynamics system of black holes, which provides a new and more traceable perspective for studying the micro-mechanism of black holes from the axioms of thermodynamics phenomenologically. This new scheme is mainly to use the Hessian matrix structure to represent the thermodynamic fluctuation theory \cite{Ruppeiner1995}. Thermodynamic curvature is the most important physical quantity in the theory. Taking advantage of the new scheme, the primary microscopic information of the Banados-Teitelboim-Zanelli (BTZ) black hole, Schwarzschild (-AdS) black hole, Reissner-Nordstr\"{o}m (-AdS) black hole, Gauss-Bonnet (-AdS) black hole, higher dimensional black holes and other black holes are explored \cite{Ruppeiner2008,Dolan2015,Wei2015,Wei2019a,Wei2019b,Wei2019c,Miao2018a,Miao2018b,Miao2019a,Miao2019b,Aman2003,Mirza2007,Dehyadegari2017,Cai1999,Zhang2015a,
Zhang2015b,Liu2010,Xu2019a,Xu2019b,Niu2012,Wang2019,Ghosh2019,Bhattacharya2017,Chen2019,Guo2019,Mansoori2014,Mansoori2015,Mansoori2016}.
In early studies \cite{Cai1999,Sarkar2006,Quevedo2009,Wei2009,Akbar2011,Mohammadzadeh2018} of the thermodynamic geometry of BTZ black holes, the negative cosmological constant was regarded as a fixed parameter or a fluctuating parameter. Such a system has no pressure and volume terms, that is, no extended phase space. Nowadays the notion of the extended phase space is widely recognized, and a well-defined thermodynamic system should have pressure and volume terms. Therefore, it is necessary to revisit the thermodynamic geometry of the BTZ black hole in the extended phase space.
We note that the study~\cite{Frassino2015} pointed out that, due to the asymptotic structure of the BTZ black hole solution, the computation of the mass of the charged BTZ black hole is problematic. The authors provided two schemes for analyzing the thermodynamic behavior of the charged BTZ black hole. In one case, the charged BTZ black hole is super-entropic \cite{Johnson2019a,Johnson2019b}, that is, the reverse isoperimetric inequality is violated; in the other, the reverse isoperimetric inequality is satisfied. Therefore, it is natural to ask how the thermodynamic geometry of the charged BTZ black hole differs (or not) between these thermodynamic schemes. At present, such an analysis is still lacking. Hence, in this paper we explore the thermodynamic geometry of the $(2+1)$-dimensional charged BTZ black hole in different coordinate spaces under two thermodynamic schemes in the framework of the extended phase space. Meanwhile, we provide a diagnosis for discriminating between the two schemes from the point of view of thermodynamic geometry. This is the main motivation of our work. We find that in both schemes the thermodynamic curvature is always positive. For the charged BTZ black hole, when the reverse isoperimetric inequality is saturated, the thermodynamic curvature of an extreme black hole tends to infinity, while when the reverse isoperimetric inequality is violated, the thermodynamic curvature of the extreme black hole approaches a finite value.
Coincidentally, in the extended phase space, the study \cite{Ghosh2020} looked at the behavior of the thermodynamic curvature of the charged and rotating BTZ black hole with respect to entropy and thermodynamic volume respectively. Meanwhile, the authors of Ref.~\cite{Ghosh2020} extended the study to the case of the exotic BTZ black hole. We notice that in \cite{Ghosh2020} the thermodynamic volume and entropy are independent, i.e., this situation corresponds to our Case \Rmnum{1} (see the following for details). In our present paper, we discuss the thermodynamic curvature of the charged BTZ black hole in different coordinate spaces under two thermodynamic schemes (Case \Rmnum{1} and Case \Rmnum{2}) and provide a diagnosis for the discrimination of the two schemes from the point of view of the thermodynamics geometry. For the charged BTZ black hole without any angular momentum, our results are consistent with those of Ref.~\cite{Ghosh2020}, where a positive thermodynamic curvature has been obtained.
The paper is organized as follows. In section \ref{sec2}, we briefly review the Ruppeiner thermodynamic geometry. In section \ref{sec3}, we present two different thermodynamic schemes for the charged BTZ black hole. In section \ref{sec4}, we analyze the behavior of the thermodynamic curvature of the charged BTZ black hole under the different thermodynamic schemes. Finally, we draw our conclusions and make further discussion in section \ref{sec5}. Throughout this paper, we adopt the units $\hbar=c=k_{_{B}}=G=1$.
\section{Ruppeiner thermodynamic geometry}\label{sec2}
Entirely from a thermodynamic point of view, Ruppeiner thermodynamic geometry is a new attempt to extract microscopic interaction information from the axioms of thermodynamics, phenomenologically or qualitatively.
Considering the environment, i.e., an extensive, infinite thermodynamic system with entropy $S_e$, surrounding the black hole with entropy $S$, we can write the total entropy of the whole isolated system as $S_{\text{total}}=S+S_e$. The $X^{\mu}$ correspond to extensive variables of the black hole, such as the mass $M$, charge $Q$, and angular momentum $J$. Correspondingly, $X_e^{\mu}$ are the extensive variables of the environment. $\Delta X^\mu\equiv X^{\mu}-X_0^{\mu}$ denotes the difference between $X^{\mu}$ and its equilibrium value $X_0^{\mu}$. In the equilibrium state, the isolated thermodynamic system has a local maximum of entropy. Hence, for small fluctuations $\Delta X^\mu$, $\Delta X_e^\mu$ away from this equilibrium, we have~\cite{Ruppeiner1995,Ruppeiner2007}
\begin{equation}\label{stot}
\Delta S_{\text{total}}=\frac{\partial S}{\partial X^\mu}\Delta X^\mu
+\frac{\partial S_e}{\partial X_e^\mu}\Delta X^\mu_e
+\frac{1}{2}\frac{\partial^2 S}{\partial X^\mu \partial X^\nu}\Delta X^\mu \Delta X^\nu
+\frac{1}{2}\frac{\partial^2 S_e}{\partial X_e^\mu \partial X_e^\nu}\Delta X^\mu_e \Delta X^\nu_e
+\cdots.
\end{equation}
On the one hand, the conservation laws demand $\Delta X^\mu=-\Delta X_e^\mu$, and the entropy maximum requires the necessary condition $\partial S/\partial X^\mu=\partial S_e/\partial X_e^\mu$, so the first-order terms in Eq.~(\ref{stot}) cancel. On the other hand, if the environment is very large, i.e., $S \ll S_e \sim S_{\text{total}}$, the second quadratic term in Eq.~(\ref{stot}) is negligible compared with the first: since the entropy $S_e$ of the environment, as an extensive thermodynamic quantity, is of the same order as that of the whole system, its second derivatives with respect to the intensive thermodynamic quantities $x^\mu$ are much smaller than those of $S$ and have been ignored. Eventually we obtain the expression
\begin{equation}
\Delta S_{\text{total}}=-\frac{1}{2}\Delta l^2,
\end{equation}
where
\begin{equation}\label{line}
\Delta l^2=-\frac{\partial^2 S}{\partial X^\mu \partial X^\nu}\Delta X^\mu \Delta X^\nu.
\end{equation}
Looking at Eq.~(\ref{line}), the entropy as a thermodynamic potential gives rise to a thermodynamic line element very similar to a geometric one, which leads directly to a Ricci curvature scalar $R$. In this framework, there is an empirical observation that the thermodynamic curvature derived from the thermodynamic geometry~(\ref{line}) is related to the interactions of the system under consideration. More specifically, negative (positive) thermodynamic curvature is associated with attractive (repulsive) microscopic interactions for an ordinary thermodynamic system. Although the statement that the sign of the Ruppeiner curvature reflects the attractive or repulsive nature of the microscopic interaction remains a conjecture, it has been verified for a large number of statistical physical models~\cite{Ruppeiner1995,Ruppeiner2014}.
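For a two-dimensional diagonal metric $\Delta l^2 = E\,dx^2 + G\,dy^2$, the curvature scalar has the well-known closed form $R = -\frac{1}{\sqrt{EG}}\left[\partial_x\!\left(\frac{\partial_x G}{\sqrt{EG}}\right) + \partial_y\!\left(\frac{\partial_y E}{\sqrt{EG}}\right)\right]$ (with $R=2K$, $K$ the Gaussian curvature; sign conventions vary in the literature). The sketch below evaluates this expression by finite differences and checks it on the unit sphere, where $R=2$; the metric functions here are toy inputs, not a black hole metric:

```python
from math import sin, sqrt

def scalar_curvature(E, G, x, y, h=1e-4):
    """Scalar curvature R of the 2D diagonal metric dl^2 = E dx^2 + G dy^2,
    via R = -(1/sqrt(EG)) [ d/dx( G_x/sqrt(EG) ) + d/dy( E_y/sqrt(EG) ) ],
    with all derivatives taken by central finite differences."""
    def d_dx(f, x, y):
        return (f(x + h, y) - f(x - h, y)) / (2 * h)

    def d_dy(f, x, y):
        return (f(x, y + h) - f(x, y - h)) / (2 * h)

    def root_g(x, y):
        return sqrt(E(x, y) * G(x, y))

    def term_x(x, y):          # G_x / sqrt(EG)
        return d_dx(G, x, y) / root_g(x, y)

    def term_y(x, y):          # E_y / sqrt(EG)
        return d_dy(E, x, y) / root_g(x, y)

    return -(d_dx(term_x, x, y) + d_dy(term_y, x, y)) / root_g(x, y)

# Sanity check on the unit sphere, dl^2 = dtheta^2 + sin(theta)^2 dphi^2,
# whose scalar curvature is R = 2 everywhere (away from the poles).
R = scalar_curvature(lambda t, p: 1.0, lambda t, p: sin(t) ** 2, x=1.0, y=0.5)
```

In practice the thermodynamic curvature of a black hole is obtained analytically from the Hessian metric of Eq.~(\ref{line}); this numerical route is shown only to make the geometric content of the line element concrete.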
For a black hole, the existence of temperature and entropy implies that the black hole has a microstructure. However, due to the lack of a theory of quantum gravity, certain assumptions enter the study of the microscopic properties of a black hole. From the point of view of thermodynamics, the current relevant studies~\cite{Wei2015,Wei2019a,Wei2019b,Wei2019c,Miao2018a,Miao2018b,Miao2019a,Miao2019b,Aman2003,Mirza2007,Dehyadegari2017,Cai1999,Zhang2015a,Zhang2015b,Liu2010,Xu2019a,Xu2019b,Niu2012,Wang2019,Ghosh2019,Bhattacharya2017,Chen2019,Guo2019,Mansoori2014,Mansoori2015,Mansoori2016,Sarkar2006,Quevedo2009,Wei2009,Akbar2011} show that the above thermodynamic geometry is a seemingly feasible scheme, which can phenomenologically or qualitatively provide information about the interactions of black holes based on well-established black hole thermodynamics. Hence we use this method to analyze the charged BTZ black hole.
For black holes with AdS background, the most basic thermodynamic differential relation is $dM=TdS+VdP+\text{other work}$, where the ``other work'' terms involve, for example, charge $Q$ and electrostatic potential $\Phi$, angular momentum $J$ and angular velocity $\Omega$, or coupling constants and their conjugates. Here we consider the situation with the other work terms fixed. The form of the thermodynamic metric in Eq.~(\ref{line}) requires us to know $S=S(M,J,Q,\ldots)$. Frequently, however, for an AdS black hole we know instead $M=M(S,P,\ldots)$. In this event, simplification results from writing the thermodynamic metric in the other thermodynamic potential, usually with an additional pre-factor $1/T$. There are four conformally equivalent expressions of the line element~(\ref{line}), which is particularly convenient for studying the thermodynamic geometry of AdS black holes. Before we start, let us make some comments.
\begin{itemize}
\item The mass of a black hole was initially regarded as the internal energy, but for the AdS black hole, due to the introduction of the extended phase space, i.e., thermodynamic pressure and volume, the mass corresponds to the thermodynamic enthalpy. This is only a formal correspondence; the enthalpy $M$ (or mass) can still be regarded as a conserved quantity, which is extensive and additive.
\item In thermodynamics, the distinction between extensive and intensive variables is sometimes fuzzy. For example, if two ideal gas systems with the same pressure and volume are merged into a new system, which of pressure and volume is the extensive variable of the new system? If the pressure is kept constant during the merging, the volume of the total system doubles and is extensive and additive. Conversely, if the volume is kept unchanged during the merging, the pressure of the total system doubles and is extensive and additive. A very similar example is the voltage of a battery and the current in a circuit: if two batteries are connected in series, the voltage adds up and behaves like an extensive variable; if they are connected in parallel, the current adds up and behaves like an extensive variable. The reason for this duality lies in the different ways of merging. To avoid it, we adopt the prescription of Quevedo~\cite{Quevedo2007}, which defines extensive and intensive variables in a more mathematical (geometric) language: the quantities appearing in differential form in a basic thermodynamic potential are called extensive variables, and their conjugates are intensive variables. Hence the extensive and intensive variables are relative notions, and their roles can be exchanged between different thermodynamic potentials. If the internal energy $U$ is taken as the basic thermodynamic potential, we have $dU=TdS-PdV$, so the entropy $S$ and thermodynamic volume $V$ are extensive variables, while the temperature $T$ and thermodynamic pressure $P$ are intensive variables. If we take the enthalpy $H$ as the basic thermodynamic potential instead, we have $dH=TdS+VdP$, so the entropy $S$ and thermodynamic pressure $P$ are extensive variables, while the temperature $T$ and thermodynamic volume $V$ are intensive variables.
\end{itemize}
Hence for the AdS black hole, we can directly obtain the relation $dM=TdS+VdP$. It is the most basic thermodynamic differential relation of the AdS black hole. According to this expression, we have
\begin{eqnarray}
dS=\frac{1}{T}dM-\frac{V}{T}dP.
\end{eqnarray}
Hence with $S$ as the thermodynamic potential, $M$ and $P$ can be regarded as the extensive variables. We take $X^{\mu}=(M,P)$, and then the intensive variables conjugate to $X^{\mu}$ are $Y_{\mu}=\partial S/\partial X^{\mu}=(1/T,-V/T)$. This describes an isolated system composed of the black hole and its environment at the same temperature and thermodynamic volume, for which $M_{\text{total}}=M+M_e$ and $P_{\text{total}}=P+P_e$. This is very similar to the example of merging two ideal gas systems mentioned above.
Then the line element becomes
\begin{eqnarray}\label{xymetric}
\Delta l^2=-\frac{\partial^2 S}{\partial X^{\mu}\partial X^{\nu}}\Delta X^{\mu}\Delta X^{\nu}=-\Delta Y_{\mu} \Delta X^{\mu}.
\end{eqnarray}
Considering the specific form of each component in the above line element, we obtain
\begin{eqnarray}
\begin{aligned}
\Delta Y_0 &=\Delta\left(\frac{1}{T}\right)=-\frac{1}{T^2}\Delta T,\\
\Delta Y_1 &=\Delta\left(-\frac{V}{T}\right)=\frac{V}{T^2}\Delta T-\frac{1}{T}\Delta V.
\end{aligned}
\end{eqnarray}
Therefore, by inserting the above two expressions into Eq.~(\ref{xymetric}) and using the relation $\Delta M=T \Delta S+V \Delta P$, we finally write the line element as a universal form
\begin{eqnarray}\label{umetric}
\Delta l^2=\frac{1}{T}\Delta T \Delta S+\frac{1}{T}\Delta V \Delta P.
\end{eqnarray}
At present, we only consider the space composed of two generalized coordinates. According to the expression of the first law $dM=TdS+VdP$, we can see that there are four such coordinate spaces, which are $\{S, P\}$, $\{T, V\}$, $\{S, V\}$ and $\{T, P\}$. In the coordinate space $\{S,P\}$, we can see that
\begin{eqnarray}
\begin{aligned}
\Delta T &=\left(\frac{\partial T}{\partial S}\right)_P \Delta S+\left(\frac{\partial T}{\partial P}\right)_S \Delta P,\\
\Delta V &=\left(\frac{\partial V}{\partial S}\right)_P \Delta S+\left(\frac{\partial V}{\partial P}\right)_S \Delta P.
\end{aligned}
\end{eqnarray}
Substituting the above two expressions into Eq.~(\ref{umetric}), we obtain \cite{Xu2019a}
\begin{eqnarray}\label{linesp}
\begin{aligned}
\Delta l^2 &=\frac{1}{T}\left(\frac{\partial T}{\partial S}\right)_P \Delta S^2+\frac{2}{T}\left(\frac{\partial T}{\partial P}\right)_S \Delta S \Delta P+\frac{1}{T}\left(\frac{\partial V}{\partial P}\right)_S \Delta P^2\\
&=g_{\mu\nu}\Delta x^{\mu}\Delta x^{\nu} \qquad (x^{\mu}=S,P),
\end{aligned}
\end{eqnarray}
where the Maxwell relation $(\partial T/\partial P)_{_S}=(\partial V/\partial S)_{_P}$ based on the relation $dM=TdS+VdP$ has been used.
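The Maxwell relation invoked here is simply the symmetry of mixed second derivatives of the potential $M(S,P)$. A short SymPy check (an illustrative sketch of ours, not part of the paper) makes this explicit for an arbitrary smooth $M$:

```python
import sympy as sp

S, P = sp.symbols('S P')
M = sp.Function('M')(S, P)   # enthalpy as potential, dM = T dS + V dP

T = sp.diff(M, S)            # T = (dM/dS)_P
V = sp.diff(M, P)            # V = (dM/dP)_S

# Maxwell relation (dT/dP)_S = (dV/dS)_P follows from d^2 M symmetry
maxwell_residual = sp.simplify(sp.diff(T, P) - sp.diff(V, S))
```

The residual vanishes identically, which is why the cross terms in Eq.~(\ref{linesp}) can be symmetrized.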
For the line element in the coordinate space $\{S,V\}$, we need to use the differential relation of internal energy $U$, i.e., $dU=d(M-PV)=TdS-PdV$ and the Maxwell relation $(\partial T/\partial V)_{_S}=-(\partial P/\partial S)_{_V}$. Hence
\begin{eqnarray}\label{linesv}
\Delta l^2 =\frac{1}{T}\left(\frac{\partial T}{\partial S}\right)_V \Delta S^2+\frac{1}{T}\left(\frac{\partial P}{\partial V}\right)_S \Delta V^2=g_{\mu\nu}\Delta x^{\mu}\Delta x^{\nu} \qquad (x^{\mu}=S,V).
\end{eqnarray}
Similarly, in the coordinate space $\{T,V\}$, through the relation of the Helmholtz free energy $dF=-SdT-PdV$ and the Maxwell relation $(\partial S/\partial V)_{_T}=(\partial P/\partial T)_{_V}$, the line element is \cite{Xu2019a}
\begin{eqnarray}\label{linetv}
\begin{aligned}
\Delta l^2 &=\frac{1}{T}\left(\frac{\partial S}{\partial T}\right)_V \Delta T^2+\frac{2}{T}\left(\frac{\partial S}{\partial V}\right)_T \Delta T \Delta V+\frac{1}{T}\left(\frac{\partial P}{\partial V}\right)_T \Delta V^2\\
&=g_{\mu\nu}\Delta x^{\mu}\Delta x^{\nu} \qquad (x^{\mu}=T,V).
\end{aligned}
\end{eqnarray}
In the coordinate space $\{T,P\}$, through the relation of the Gibbs free energy $dG=-SdT+VdP$ and the Maxwell relation $(\partial S/\partial P)_{_T}=-(\partial V/\partial T)_{_P}$, the line element becomes
\begin{eqnarray}\label{linetp}
\Delta l^2=\frac{1}{T}\left(\frac{\partial S}{\partial T}\right)_P \Delta T^2+\frac{1}{T}\left(\frac{\partial V}{\partial P}\right)_T \Delta P^2=g_{\mu\nu}\Delta x^{\mu}\Delta x^{\nu} \qquad (x^{\mu}=T,P).
\end{eqnarray}
In terms of the metric $g_{\mu\nu}$ above mentioned, one can define the Christoffel symbols
\begin{eqnarray}
\Gamma^{\alpha}_{\beta\gamma}=\frac12 g^{\mu\alpha}
\left(\partial_{\gamma}g_{\mu\beta}+\partial_{\beta}g_{\mu\gamma}-\partial_{\mu}g_{\beta\gamma}\right),
\end{eqnarray}
and the Riemannian curvature tensors
\begin{eqnarray}
{R^{\alpha}}_{\beta\gamma\delta}=\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}+
\Gamma^{\mu}_{\beta\gamma}\Gamma^{\alpha}_{\mu\delta}-\Gamma^{\mu}_{\beta\delta}\Gamma^{\alpha}_{\mu\gamma}.
\end{eqnarray}
Consequently the thermodynamic curvature, which is the ``thermodynamic analog'' of the geometric curvature in general relativity, is
\begin{eqnarray}
R=g^{\mu\nu}{R^{\xi}}_{\mu\xi\nu}.
\end{eqnarray}
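For readers who wish to reproduce the curvature computations, the definitions above can be coded directly; the SymPy sketch below (our illustration, following the stated index conventions) computes the scalar curvature of a generic two-dimensional metric:

```python
import sympy as sp

def ricci_scalar_2d(g, coords):
    """Scalar curvature of a 2D metric g (2x2 sympy Matrix) in coordinates
    coords, using the Christoffel/Riemann conventions written in the text."""
    n = 2
    ginv = g.inv()
    # Christoffel symbols Gamma^a_{bc} = (1/2) g^{am} (d_c g_{mb} + d_b g_{mc} - d_m g_{bc})
    Gamma = [[[sum(ginv[a, m] * (sp.diff(g[m, b], coords[c])
                                 + sp.diff(g[m, c], coords[b])
                                 - sp.diff(g[b, c], coords[m]))
                   for m in range(n)) / 2
               for c in range(n)]
              for b in range(n)]
             for a in range(n)]

    # Riemann tensor R^a_{bcd} = d_d Gamma^a_{bc} - d_c Gamma^a_{bd} + Gamma.Gamma terms
    def riemann(a, b, c, d):
        return (sp.diff(Gamma[a][b][c], coords[d]) - sp.diff(Gamma[a][b][d], coords[c])
                + sum(Gamma[m][b][c] * Gamma[a][m][d]
                      - Gamma[m][b][d] * Gamma[a][m][c] for m in range(n)))

    # Scalar curvature R = g^{mu nu} R^xi_{mu xi nu}
    return sp.simplify(sum(ginv[b, d] * riemann(a, b, a, d)
                           for a in range(n) for b in range(n) for d in range(n)))
```

Since the overall sign of $R$ depends on the curvature convention, a convention-independent sanity check is that any flat metric gives $R=0$ while the unit two-sphere gives $R^2=4$.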
Obviously, the thermodynamic curvatures obtained from the above four line elements are equivalent to each other, because the thermodynamic potentials corresponding to these different coordinate spaces are related by Legendre transformations. However, a general proof of the equivalence starting directly from the four line elements seems rather complicated. For simplicity, the equivalence can be illustrated by a specific example. Next, we use the four coordinate spaces to calculate the thermodynamic curvature of the charged BTZ black hole, so as to explore some of the micro-information the black hole may carry.
\section{Thermodynamic properties of charged BTZ black hole} \label{sec3}
For the $(2+1)$-dimensional charged BTZ black hole, its metric and the gauge field are \cite{Frassino2015,Mo2017}
\begin{eqnarray}
ds^2 &=&-f(r)dt^2+\frac{dr^2}{f(r)}+r^2 d\varphi^2, \nonumber\\
F&=&d A, \qquad A=-Q \ln\left(\frac{r}{l}\right)dt,
\end{eqnarray}
where the function $f(r)$ is
\begin{eqnarray}
f(r)=-2m-\frac{Q^2}{2}\ln\left(\frac{r}{l}\right)+\frac{r^2}{l^2},
\end{eqnarray}
where $m$ is related to the black hole mass, $l$ is the AdS radius, connected with the negative cosmological constant $\Lambda$ via $\Lambda=-1/l^2$, and $Q$ is the total charge of the black hole. The event horizon radius $r_h$ is determined by the largest root of the equation $f(r_h)=0$. In terms of $r_h$, the basic thermodynamic quantities take two different forms in the literature.
\subsection{Case \Rmnum{1}}
By taking advantage of the Komar formula to determine the mass of the black hole, the authors of Ref.~\cite{Frassino2015} write the first law of thermodynamics of the charged BTZ black hole as $dM=TdS+VdP+\Phi dQ$, where the relevant quantities are
\begin{eqnarray}
M &=&\frac{m}{4}=\frac{r_h^2}{8l^2}-\frac{Q^2}{16}\ln\left(\frac{r_h}{l}\right),\label{enthalpy1}\\
T &=&\frac{r_h}{2\pi l^2}-\frac{Q^2}{8\pi r_h},\label{temperature}\\
S &=&\frac12 \pi r_h,\label{entropy}\\
P &=&-\frac{\Lambda}{8\pi}=\frac{1}{8\pi l^2}, \label{pressure}\\
V &=&\pi r_h^2-\frac14 \pi Q^2 l^2, \label{volume1}\\
\Phi &=&-\frac18 Q \ln\left(\frac{r_h}{l}\right).
\end{eqnarray}
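As a sanity check on the Case \Rmnum{1} quantities, the first law $dM=TdS+VdP+\Phi dQ$ can be verified component-wise in the parameters $(r_h, l, Q)$; the SymPy sketch below is our illustration, not part of Ref.~\cite{Frassino2015}:

```python
import sympy as sp

# Case I quantities of the charged BTZ black hole, as functions of the
# horizon radius r_h, AdS radius l and charge Q (the expressions listed above).
r_h, l, Q = sp.symbols('r_h l Q', positive=True)

M   = r_h**2 / (8*l**2) - Q**2/16 * sp.log(r_h/l)   # enthalpy
T   = r_h / (2*sp.pi*l**2) - Q**2 / (8*sp.pi*r_h)   # temperature
S   = sp.pi * r_h / 2                               # entropy
P   = 1 / (8*sp.pi*l**2)                            # pressure
V   = sp.pi*r_h**2 - sp.pi*Q**2*l**2/4              # thermodynamic volume
Phi = -Q/8 * sp.log(r_h/l)                          # electrostatic potential

# First law dM = T dS + V dP + Phi dQ, checked parameter by parameter
first_law_residuals = [sp.simplify(sp.diff(M, x)
                                   - T*sp.diff(S, x)
                                   - V*sp.diff(P, x)
                                   - Phi*sp.diff(Q, x))
                       for x in (r_h, l, Q)]
```

All three residuals vanish, confirming that the listed quantities are mutually consistent.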
\subsection{Case \Rmnum{2}}
An alternative scheme is a renormalization procedure in which the system is enclosed in a circle of radius $r_0$ and the limit $r_0\rightarrow\infty$ is taken whilst keeping the ratio $r/r_0=1$ \cite{Cadoni2008}. Then the black hole mass is interpreted as the total energy inside the circle of radius $r_0$. Based on this fact, Ref.~\cite{Frassino2015} introduces a new thermodynamic parameter $R$ associated with the renormalization length scale $R=r_0$ via writing $$f(r)=-2m_0-\frac{Q^2}{2}\ln\left(\frac{r}{r_0}\right)+\frac{r^2}{l^2}.$$ The first law of thermodynamics of the charged BTZ black hole becomes $d\tilde{M}=TdS+\tilde{V}dP+\tilde{\Phi} dQ+K d R$, where the relevant quantities are
\begin{eqnarray}
\tilde{M} &=&\frac{m_0}{4}=\frac{r_h^2}{8l^2}-\frac{Q^2}{16}\ln\left(\frac{r_h}{R}\right),\label{enthalpy2}\\
\tilde{V} &=&\pi r_h^2, \label{volume2}\\
\tilde{\Phi} &=&-\frac18 Q \ln\left(\frac{r_h}{R}\right), \\
K &=&-\frac{Q^2}{16R},
\end{eqnarray}
and the other thermodynamic quantities $T$, $S$, $P$ are still expressions~(\ref{temperature}),~(\ref{entropy}) and~(\ref{pressure}).
\section{Thermodynamic curvature of charged BTZ black hole} \label{sec4}
In a thermodynamic system, the phase space contains both the generalized coordinates and their conjugate generalized forces (or the conjugate extensive and intensive quantities). Depending on the choice of thermodynamic potential, the roles of the two can be exchanged. Here the phase space of the BTZ black hole is $\{T, S, P, V, \Phi, Q\}$ in Case \Rmnum{1} and $\{T, S, P, \tilde{V}, \tilde{\Phi}, Q, K, R\}$ in Case \Rmnum{2}. Once the thermodynamic potential is chosen and certain generalized coordinates are fixed, the conjugate generalized forces are obtained directly and fixed as well.
For the present discussion, we calculate the thermodynamic curvature in two-dimensional coordinate spaces, in which the conjugate generalized forces emerge naturally. On the one hand, in analogy with a simple thermodynamic system, the influence of the pressure--volume and temperature--entropy pairs on the thermodynamic behavior of the system attracts the most attention. On the other hand, the results we obtain for fixed charge and for the charge as an independent thermodynamic quantity are qualitatively consistent. Therefore, in order to simplify the calculation without losing physical information, we restrict the analysis to fixed charge $Q$ in Case \Rmnum{1} and fixed $Q$ and $R$ in Case \Rmnum{2}.
In addition, as supplementary material, we also show the effect of the charge $Q$ on the behavior of the thermodynamic curvature of the charged BTZ black hole in Appendix \ref{app}. The qualitative behavior of the thermodynamic curvature is consistent whether in the coordinate space $\{S,P\}$ with fixed charge $Q$ or in the coordinate spaces $\{S,Q\}$ and $\{S, P, Q\}$ with the charge $Q$ as an independent thermodynamic quantity: in all cases the thermodynamic curvature is positive.
\subsection{Thermodynamic curvature for Case \Rmnum{1}}
In principle, there are four ways, i.e., the coordinate spaces $\{S,P\}$, $\{S,V\}$, $\{T,V\}$ and $\{T,P\}$. For the coordinate space $\{T,V\}$, we would need to write the entropy $S$ and the thermodynamic pressure $P$ as functions of the temperature $T$ and the thermodynamic volume $V$, respectively. The resulting analytical forms become very complicated, hence we omit this case merely to avoid technical complexity.
\begin{itemize}
\item In the coordinate space $\{S,P\}$, we need to write the temperature $T$ and thermodynamic volume $V$ as functions of entropy $S$ and pressure $P$, respectively
\begin{eqnarray}
T=\frac{128PS^2-\pi Q^2}{16\pi S}, \qquad V=\frac{128PS^2-\pi Q^2}{32\pi P}.
\end{eqnarray}
Hence according to Eq.~(\ref{linesp}), we can directly calculate the expression of thermodynamic curvature
\begin{eqnarray}
R_{SP}=\frac{384\pi Q^2 PS}{(\pi Q^2+256P S^2)^2}.
\end{eqnarray}
\item In the coordinate space $\{S,V\}$, the temperature $T$ and thermodynamic pressure $P$ can be written as functions of entropy $S$ and volume $V$, respectively
\begin{eqnarray}
T=\frac{\pi Q^2 V}{16S(4S^2-\pi V)}, \qquad P=\frac{\pi Q^2}{128 S^2-32\pi V}.
\end{eqnarray}
Hence according to Eq.~(\ref{linesv}), we can obtain the expression of thermodynamic curvature
\begin{eqnarray}
R_{SV}=\frac{12S(4S^2-\pi V)}{(\pi V-12 S^2)^2}.
\end{eqnarray}
\item In the coordinate space $\{T,P\}$, we must have the expressions of entropy $S$ and thermodynamic volume $V$ in terms of temperature $T$ and pressure $P$, respectively
\begin{eqnarray}
S=\frac{\pi T+\sqrt{\pi^2 T^2+2\pi Q^2 P}}{16P}, \qquad V=\frac{T}{32P^2}\left(\pi T+\sqrt{\pi^2 T^2+2\pi Q^2 P}\right).
\end{eqnarray}
Hence according to Eq.~(\ref{linetp}), we can obtain the expression of thermodynamic curvature
\begin{eqnarray}
R_{TP}=\frac{24P\left[8\pi T^2\left(-\sqrt{\pi}T+\sqrt{2Q^2 P+\pi T^2}\right)+3Q^2 P\left(-5\sqrt{\pi}T+3\sqrt{2Q^2 P+\pi T^2}\right)\right]}{\sqrt{\pi}(9Q^2 P+4\pi T^2)^2}.
\end{eqnarray}
\end{itemize}
Meanwhile, we can easily verify the expected identity
\begin{eqnarray}
R_{SP}=R_{SV}=R_{TP}>0.
\end{eqnarray}
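This equality can also be confirmed numerically; the SymPy sketch below (the helper name is ours) evaluates the three Case \Rmnum{1} curvature expressions at one thermodynamic state, deriving $T$ and $V$ from the equations of state:

```python
import sympy as sp

S, P, Q, T, V = sp.symbols('S P Q T V', positive=True)
pi = sp.pi

# Case I thermodynamic curvatures in the three coordinate spaces
R_SP = 384*pi*Q**2*P*S / (pi*Q**2 + 256*P*S**2)**2
R_SV = 12*S*(4*S**2 - pi*V) / (pi*V - 12*S**2)**2
R_TP = (24*P*(8*pi*T**2*(-sp.sqrt(pi)*T + sp.sqrt(2*Q**2*P + pi*T**2))
              + 3*Q**2*P*(-5*sp.sqrt(pi)*T + 3*sp.sqrt(2*Q**2*P + pi*T**2)))
        / (sp.sqrt(pi)*(9*Q**2*P + 4*pi*T**2)**2))

def curvatures_at(S0, P0, Q0, digits=30):
    """Evaluate the three curvature expressions at one state, with T0 and V0
    derived from the Case I equations of state T(S,P) and V(S,P)."""
    T0 = (128*P0*S0**2 - pi*Q0**2) / (16*pi*S0)
    V0 = (128*P0*S0**2 - pi*Q0**2) / (32*pi*P0)
    return [sp.N(R_SP.subs({S: S0, P: P0, Q: Q0}), digits),
            sp.N(R_SV.subs({S: S0, V: V0}), digits),
            sp.N(R_TP.subs({T: T0, P: P0, Q: Q0}), digits)]
```

At any admissible state the three values agree to the working precision, and they are positive.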
Furthermore, for the extreme black hole ($T=0$), the thermodynamic curvature clearly takes the finite positive value $R_{_{T=0}}=1/(3S)$. Based on the empirical conclusion in the framework of thermodynamic geometry that negative (positive) thermodynamic curvature is associated with attractive (repulsive) microscopic interactions \cite{Ruppeiner2008,Wei2015,Wei2019a,Wei2019b,Wei2019c,Miao2018a,Miao2018b,Miao2019a,Miao2019b,Xu2019a,Xu2019b}, we can speculate that the charged BTZ black hole is likely, phenomenologically or qualitatively, to exhibit a repulsive interaction between its molecules. In addition, when $Q=0$, the thermodynamic curvature degenerates to zero, which implies that the neutral BTZ black hole is Ruppeiner flat \cite{Cai1999,Sarkar2006,Akbar2011}.
\subsection{Thermodynamic curvature for Case \Rmnum{2}}
Generally, there should also be four ways, i.e., in coordinate spaces $\{S,P\}$, $\{T,\tilde{V}\}$, $\{S,\tilde{V}\}$ and $\{T,P\}$. However, because entropy $S$ and thermodynamic volume $\tilde{V}$ are not independent of each other, the coordinate space $\{S,\tilde{V}\}$ is invalid.
\begin{itemize}
\item In the coordinate space $\{S,P\}$, we need to write the temperature $T$ and thermodynamic volume $\tilde{V}$ as functions of entropy $S$ and pressure $P$, respectively
\begin{eqnarray}
T=\frac{128PS^2-\pi Q^2}{16\pi S}, \qquad \tilde{V}=\frac{4S^2}{\pi}.
\end{eqnarray}
Hence according to Eq.~(\ref{linesp}), we can directly calculate the expression of thermodynamic curvature
\begin{eqnarray}
\tilde{R}_{SP}=\frac{2\pi Q^2}{S(128PS^2-\pi Q^2)}.
\end{eqnarray}
\item In the coordinate space $\{T,\tilde{V}\}$, the entropy $S$ and thermodynamic pressure $P$ can be written as functions of temperature $T$ and volume $\tilde{V}$, respectively
\begin{eqnarray}
S=\left(\frac{\pi \tilde{V}}{4}\right)^{1/2}, \qquad P=\frac{\sqrt{\pi}T}{4\sqrt{\tilde{V}}}+\frac{Q^2}{32\tilde{V}}.
\end{eqnarray}
Hence according to Eq.~(\ref{linetv}), we can obtain the expression of thermodynamic curvature
\begin{eqnarray}
\tilde{R}_{T\tilde{V}}=\frac{Q^2}{2\pi T\tilde{V}}.
\end{eqnarray}
\item In the coordinate space $\{T,P\}$, we must have the expressions of entropy $S$ and thermodynamic volume $\tilde{V}$ in terms of temperature $T$ and pressure $P$, respectively
\begin{eqnarray}
S=\frac{\pi T+\sqrt{\pi^2 T^2+2\pi Q^2 P}}{16P}, \qquad \tilde{V}=\frac{\pi T^2+Q^2 P+T\sqrt{\pi^2 T^2+2\pi Q^2 P}}{32P^2}.
\end{eqnarray}
Hence according to Eq.~(\ref{linetp}), we can obtain the expression of thermodynamic curvature
\begin{eqnarray}
\tilde{R}_{TP}=\frac{16 Q^2 P+16T\left(\pi T-\sqrt{\pi^2 T^2+2\pi Q^2 P}\right)}{\pi Q^2 T}.
\end{eqnarray}
\end{itemize}
Meanwhile, we again verify the expected identity
\begin{eqnarray}
\tilde{R}_{SP}=\tilde{R}_{T\tilde{V}}=\tilde{R}_{TP}>0.
\end{eqnarray}
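As in Case \Rmnum{1}, the Case \Rmnum{2} identity can be checked numerically with SymPy (an illustrative sketch; the helper name is ours):

```python
import sympy as sp

S, P, Q, T, Vt = sp.symbols('S P Q T Vtilde', positive=True)
pi = sp.pi

# Case II thermodynamic curvatures in the three valid coordinate spaces
R_SP = 2*pi*Q**2 / (S * (128*P*S**2 - pi*Q**2))
R_TV = Q**2 / (2*pi*T*Vt)
R_TP = (16*Q**2*P + 16*T*(pi*T - sp.sqrt(pi**2*T**2 + 2*pi*Q**2*P))) / (pi*Q**2*T)

def case2_curvatures_at(S0, P0, Q0, digits=30):
    """Evaluate the Case II curvature expressions at one thermodynamic state."""
    T0 = (128*P0*S0**2 - pi*Q0**2) / (16*pi*S0)   # same T(S,P) as Case I
    V0 = 4*S0**2 / pi                             # V~ = 4 S^2 / pi
    return [sp.N(R_SP.subs({S: S0, P: P0, Q: Q0}), digits),
            sp.N(R_TV.subs({T: T0, Vt: V0, Q: Q0}), digits),
            sp.N(R_TP.subs({T: T0, P: P0, Q: Q0}), digits)]
```

Again the three values agree at any admissible state and are positive.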
Moreover, for the extreme black hole ($T=0$), the thermodynamic curvature clearly tends to positive infinity. Hence we can again conjecture that the charged BTZ black hole is likely, phenomenologically or qualitatively, to exhibit a repulsive interaction between its molecules. When $Q=0$, the thermodynamic curvature vanishes, indicating that the neutral BTZ black hole is Ruppeiner flat \cite{Cai1999,Sarkar2006,Akbar2011}. More importantly, compared with Case \Rmnum{1}, in which the thermodynamic curvature of the extreme black hole tends to a finite positive value, this difference can serve as a diagnostic for discriminating between the two thermodynamic approaches to the charged BTZ black hole.
\section{Conclusion and Discussion}\label{sec5}
Comparing the results of Case \Rmnum{1} and Case \Rmnum{2}, we can conclude that:
\begin{itemize}
\item In both Case \Rmnum{1} and Case \Rmnum{2}, the resulting thermodynamic curvature is always positive, which may be related to a repulsive interaction between black hole molecules of the charged BTZ black hole.
\item When $Q=0$, in both cases, we can see that the thermodynamic curvature degenerates to zero, which shows that the neutral BTZ black hole is Ruppeiner flat \cite{Cai1999,Sarkar2006,Akbar2011}.
\item For the extreme black hole, i.e., $T=0$, the thermodynamic curvature is a finite positive value in Case \Rmnum{1}, while in Case \Rmnum{2} it tends to positive infinity. In previous analyses of the thermodynamic curvature of AdS black holes \cite{Wei2015,Wei2019a,Wei2019b,Wei2019c,Miao2018a,Miao2018b,Miao2019a,Miao2019b,Xu2019a,Ghosh2019}, one usually finds that for extreme black holes the thermodynamic curvature tends to positive or negative infinity. In Case \Rmnum{1} of the thermodynamic analysis of the charged BTZ black hole, the thermodynamic curvature of the extreme black hole is a finite positive value, which is not consistent with those discussions. From this point of view, the thermodynamic curvature of the extreme black hole discussed in the present paper may serve as a criterion to discriminate between the two thermodynamic approaches introduced in Ref.~\cite{Frassino2015}, and our result seems to support Case \Rmnum{2} of the thermodynamic analysis of the charged BTZ black hole. This is consistent with the result obtained in Ref.~\cite{Mo2017} using the idea of the holographic heat engine.
\end{itemize}
It is known that, in Case \Rmnum{1}, the charged BTZ solution is a super-entropic black hole \cite{Johnson2019a,Johnson2019b}, which violates the reverse isoperimetric inequality. Generally, for a $d$-dimensional black hole, the thermodynamic volume $V$ and entropy $S$ satisfy the reverse isoperimetric inequality \cite{Cvetic2011}
$$\mathcal{R}=\left(\frac{(d-1)V}{\omega_{d-2}}\right)^{\frac{1}{d-1}}\left(\frac{\omega_{d-2}}{4S}\right)^{\frac{1}{d-2}}\geq 1,$$ where $\omega_n=2\pi^{(n+1)/2}/\Gamma\left[(n+1)/2\right]$ is the standard volume of the round unit sphere. In $d=3$, the charged BTZ black hole has $\mathcal{R}<1$ in Case \Rmnum{1}. While in Case \Rmnum{2}, because the thermodynamic volume $\tilde{V}$ and entropy $S$ are not independent of each other, then the above reverse isoperimetric inequality is saturated by the charged BTZ black hole, i.e., $\mathcal{R}=1$. The underlying reason for the satisfaction, violation or saturation of the reverse isoperimetric inequality lies in the definition of the thermodynamic volume of a black hole, which is also a very significant research issue in black hole thermodynamics. In general, if the thermodynamic volume is consistent with the expression of the geometric volume, often the reverse isoperimetric inequality is saturated ($\mathcal{R}=1$), while when the thermodynamic volume of the black hole does not look like any geometric volume, it generally corresponds to a super-entropic black hole ($\mathcal{R}<1$) or sub-entropic black hole ($\mathcal{R}>1$).
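As a quick consistency check (a SymPy sketch of ours, not from the paper), one can evaluate the $d=3$ isoperimetric ratio for the two volume definitions; Case \Rmnum{2} saturates the inequality while Case \Rmnum{1} violates it for $Q>0$:

```python
import sympy as sp

r_h, l, Q = sp.symbols('r_h l Q', positive=True)
pi = sp.pi
omega1 = 2*pi          # omega_n for n = d-2 = 1: 2*pi^1/Gamma(1)
S = pi*r_h/2           # BTZ entropy (same in both cases)

def iso_ratio(V):
    # d = 3: R = ((d-1) V / omega_{d-2})^{1/(d-1)} * (omega_{d-2} / (4 S))^{1/(d-2)}
    return sp.sqrt(2*V/omega1) * (omega1/(4*S))

R_case1 = sp.simplify(iso_ratio(pi*r_h**2 - pi*Q**2*l**2/4))  # Case I volume
R_case2 = sp.simplify(iso_ratio(pi*r_h**2))                   # Case II volume
```

One finds $\mathcal{R}_{\text{II}}=1$ identically, while $\mathcal{R}_{\text{I}}=\sqrt{1-Q^2l^2/(4r_h^2)}<1$ whenever $Q>0$.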
When {\em the reverse isoperimetric inequality is saturated, the thermodynamic curvature of the extreme black hole tends to (positive or negative) infinity}. This conjecture is verified by many examples, such as the Schwarzschild AdS black hole, the Reissner-Nordstr\"{o}m AdS black hole and the Gauss-Bonnet AdS black hole (and various other simple static black hole solutions of pure Einstein gravity or higher-derivative generalizations thereof), as well as Case \Rmnum{2} of the present discussion. Moreover, according to the above calculation and analysis of Case \Rmnum{1} for the charged BTZ black hole,
these results are likely to hint at an empirical conjecture that {\em for super-entropic black holes, the thermodynamic curvature of the extreme black hole goes to a finite (positive or negative) value.} In future work, we will test the conjecture on other super-entropic black holes, such as the ultra-spinning limit of Kerr-AdS black holes \cite{Hennigar2015a,Hennigar2015b}. Meanwhile, for sub-entropic black holes, such as the Kerr-AdS black hole \cite{Cvetic2011,Johnson2019c}, STU black holes \cite{Johnson2019c,Caceres2015}, the Taub-NUT/Bolt black hole \cite{Johnson2014}, the generalized exotic BTZ black hole \cite{Johnson2019b}, the noncommutative black hole \cite{Miao2017} and accelerating black holes~\cite{Appels2016}, what kind of conjecture can we make? These are also very interesting topics for future discussion.
\section*{Acknowledgments}
Financial support from the National Natural Science Foundation of China (Grant Nos. 11947208 and 11947301), the China Postdoctoral Science Foundation (Grant No. 2020M673460), the Major Basic Research Program of Natural Science of Shaanxi Province (Grant No. 2017ZDJC-32), and the Scientific Research Program Funded by Shaanxi Provincial Education Department (Program No. 18JK0771) is gratefully acknowledged. This research is supported by the Double First-class University Construction Project of Northwest University. The authors would like to thank the anonymous reviewers for helpful comments that greatly improved this work.
<div>
<h2><span class="login-title">Login</span></h2>
<div>
<form (ngSubmit)="login()" [formGroup]="form" class="form-group">
<div *ngFor="let field of fields" class="form-row">
<field [field]="field" [form]="form"></field>
</div>
<div class="form-row text-center">
<button type="submit" class="btn btn-default btn-lg" [disabled]="!form.valid">Login</button>
</div>
</form>
</div>
</div>
"""
RTMP (Real Time Messaging Protocol) implementation.
U{http://en.wikipedia.org/wiki/Real_Time_Messaging_Protocol}
"""
Marduk > Those of the Unlight > Reviews > ClusterFuct
Those of the Unlight
View all reviews for Marduk - Those of the Unlight
Sunlight Has Failed - 100%
ClusterFuct, May 4th, 2014
This is an evil fucking album. From the opening blast of "Darkness Breeds Mortality", Marduk immerse the listener in pure darkness. Aside from the obvious and continuing theme of the black and the bleak, Those of the Unlight sees Marduk expanding the already stellar songwriting evident on their debut LP. While Dark Endless succeeds in cohesion of atmosphere, Those of the Unlight is thematically tighter than its predecessor.
Dread's departure from Marduk is certainly disconcerting for fans of his unearthly wail, though drummer Af Gravf's desperate rasp perfectly accompanies the evil songs on display. While Those of the Unlight blasts Marduk's requisite evil riffs and classic black metal vocals, the album takes a more depressing tone lyrically. By the time the listener is drowned in the brooding despair that is "Echoes From the Past", it is clear that Marduk have evolved as a band.
Stylistically, Those of the Unlight differs little from the sound established by Marduk's debut LP. The riffs evoke dread and evil, while the bass is loud enough in the mix to add a sinister undertone to the mostly blasting drums. The difference here is that Marduk seem to have "found their sound" on their second album. Those of the Unlight foreshadows the greatness of future Marduk releases. That "Marduk sound" that fans of the band's newer material have come to so easily identify has its roots deeply entrenched in this record.
Evil's guitar playing is as inspired as it's ever been and really is the anchor of the album. The solo in "Wolves," the savage assault of "Burn My Coffin," the slow kill in the cold that is "Echoes From the Past"... Whether it's blasting tremolos or slow dirges of riffs, the guitar playing on the album pushes the aesthetic boundaries of what black metal had attempted to establish early on. Suddenly, slower passages in black metal became more acceptable, and the bass guitar could share the lead to great effect. In short, Those of the Unlight brought black metal to places it had never been, and pushed the genre even further.
These dark, evil, brooding songs will claw their way into your psyche and light the fires of hell in your cold mind. While Norwegian black metal bands of the early '90s were freezing the scene with cold blasts of hatred and evil, Marduk defined a "Swedish black metal sound" that would further the scope of what black metal would accomplish in the years that followed. A black metal milestone.
A diagram representing the [semi-supervised embedding](https://dl.acm.org/citation.cfm?id=1390303) architecture, by Weston *et al.*.
## Output

## Source
[Overview of neural network architectures for graph-structured data analysis](https://www.cl.cam.ac.uk/~pv273/slides/CLGraph.pdf), *UCL AI Journal Club*
\section{Introduction}\label{sec:intro}
Recommendation of outfits having compatible fashion items is a well-studied research topic in the fashion domain \cite{bettaney2020fashion,chen2021tops,li2020bootstrapping,Li:2020:CompositionalVisualCoherence,Lin:2020:AmazonFOCIR,revanur21semi,yang2020learning}. Recent research in this regard explores graph neural networks (GNN) to connect users, items and outfits \cite{cui2019dressing,li20hier,liu2020learning,Wang:2021:GAttnNetVSE,yang20learning,Zhan:2021:A3-FKG} based on historical purchases, as well as personalization \cite{chen2019pog,chen19pers,jaradat2020outfit2vec,landia2021personalised,Lin:2020:OutfitNet,lu2019learning,Lu:2021:CVPR} and explainability \cite{dong2020fashion,han2019prototype,Lin:2019,yang2019interpretable}. An apparent shortcoming of the current research on compatibility learning is the complete disregard for the explicit style associated with an outfit. In real life, however, a person, say a user on an e-commerce platform, would typically have an explicit style in mind while choosing items for an outfit. The main objective of this paper is to learn compatibility between items given a specific style, which in turn helps to generate style-specific outfits.
We illustrate the importance of style-guided outfit generation through an example. Three sets of outfits are shown in Figure~\ref{fig:overview} with a white top-wear, an item that a user likes but is doubtful about purchasing (the reader is requested to ignore the values at the bottom of the figure for the time being). The platform may have the capability to showcase, or to provide the user an option of generating, outfits specific to various styles (this example showcases \emph{Athleisure}, \emph{Formal} and \emph{Casual}). Given this setup, a style-guided algorithm has two advantages: (a) it can generate compatible outfits from different styles, hence providing the choice to the user, and (b) it will not generate an outfit which may be otherwise compatible but not in accordance with the desired style. The concept of jointly modelling explicit style and compatibility is lacking in the area of fashion recommendation, and current research has mostly treated the two in separate silos. Having said this, one should be mindful of the fact that a style-independent compatibility algorithm followed by a style classification method, say Style2Vec \cite{Lee:2017:Style2Vec}, can allocate outfits to their relevant styles after the generation step. Thus, in principle, it is possible to combine existing work to generate the outfits in Figure~\ref{fig:overview}. It is, however, easy to see that such a technique is not efficient, since a large set of outfits needs to be generated, of which only a subset will be relevant to a particular style.
\vspace{-2mm}
\begin{figure}[h]
\centering
\includegraphics[scale=0.12]{images/qualitative_results.png}
\caption{\small{Given a top-wear liked by a user, a style-guided method is able to create outfits conditional on various styles (\emph{athleisure}, \emph{formal} and \emph{casual}) while a style-independent compatibility model will typically generate outfits from dominant style. The values indicate the style-conditional compatibility scores for each item. Note that for a given style, the bottom-wear corresponding to that style gets the highest score.}}
\label{fig:overview}
\vspace{-2mm}
\end{figure}
\vspace{-6mm}
In recent times there have been some attempts at connecting style and outfit recommendation. Kuhn et al.\ \cite{kuhn:2019:Outfittery} do not consider the presence of explicit styles, instead learning compatibility while inferring a latent style for each item. Jeon et al.\ \cite{Jeon:2021:FANCY} use extracted fashion attributes of full-body outfit images for style classification, ignoring compatibility learning in the process. Learning an outfit-level theme or style from item descriptions, as done by Li et al.\ \cite{li:2019:coherent}, is a weak approach that fails when the descriptions do not exhaustively cover different styles. Singhal et al.\ \cite{Singhal:2020:VCP} model style between item pairs using an autoencoder, thus treating style as implicit. A common deficiency in all of these works is the inability to generate style-guided outfits. Theme Matters~\cite{Lai:2020:ThemeMatters} by Lai et al., an archived work, comes closest to our model. It proposes a supervised approach that applies theme-aware attention to item pairs having fine-grained category tags (e.g., long-skirt, mini-skirt, etc.). The main handicap of their approach is that the size of the model grows exponentially with the number of fine-grained categories, which our experiments confirmed.
We propose a {\bf Style-Attention-based Compatible Outfit Recommendation} (SATCORec) framework that uses high-level categories like top-wear, bottom-wear, etc.\ (general e-commerce taxonomy) and explicit outfit-level style information (\emph{formal}, \emph{casual}, \emph{sporty}, etc.) to learn compatibility among items in an outfit. It consists of two components, namely a Style-Compatibility-Attention Network (SCA Net) \cite{Lin:2020:AmazonFOCIR} and a novel Style Encoder Network (SE-Net). SE-Net considers an outfit to be a \emph{set} of items and makes use of the Set Transformer \cite{Lee:2019:SetTransformer} architecture to model a style-specific distribution for each outfit. We believe we are the first to adopt the set transformer, a state-of-the-art technique for modelling set-structured data, in a framework that projects an outfit into a latent style space. We investigate several variations of extracting a style representation from the learnt distribution, and use this representation to estimate style-specific subspace attention within SCA Net, which helps to learn compatibility conditional on style. Finally, we use beam search \cite{Bettaney:2021:BeamSearch} to generate outfits based on a parent item, a template and a style.
We have created an in-house dataset of size approx.\ 100k corresponding to women's western wear outfits, taking items from an e-commerce portal.
Various experiments have been performed on this data, comparing compatibility and style-specific metrics between baseline methods and SATCORec. Our method has been found to excel in compatibility learning, even when outfits are generated conditional on style. Most importantly, SATCORec is seen to outperform all the baselines in style metrics by a large margin.
\vspace{-4mm}
\section{Methodology}\label{methodology}
SATCORec is a deep learning model, developed to learn the compatibility between lifestyle items present within an outfit, contingent on the style to which the outfit belongs. The model first infers the style of the outfit which is subsequently used to learn compatibility between items within it.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{images/style-new4.pdf}
\caption{\small{Architecture of SATCORec. The lower module combines the SE-Net and Style classifier and is trained separately. Item images of an outfit are fed to a CNN to extract visual features which are subsequently passed onto a Set transformer to output a Gaussian distribution. The style classifier is trained using either a random sample or the parameters of the Gaussian. A linear combination of these two along with the parameters of style-specific pooled Gaussian is passed as a feature in the SCA Net module which learns compatibility via attention.}}
\label{fig:proposed}
\vspace{-2mm}
\end{figure}
We start by proposing a novel \emph{Style Encoder Network} (SE-Net) which learns a parametric probability distribution representing outfit style using the set transformer \cite{Lee:2019:SetTransformer}, followed by a style classification task further downstream. We extend the compatibility framework of Lin et al.\ \cite{Lin:2020:AmazonFOCIR} to allocate differential importance to features extracted from the image of an item based not just on category information but also on the outfit style, thus complementing SE-Net. We have further modified the compatibility loss in \cite{Lin:2020:AmazonFOCIR} to incorporate style. The entire architecture is shown in Fig.~\ref{fig:proposed}. Details of SE-Net and the Style Classifier are provided in Sections~\ref{sec:senet} and \ref{sec:StyleClassifier} respectively. SCA Net and the modified compatibility loss are explained in Section~\ref{sec:scaNet}. We explain the generation of outfits based on an individual style or a mixture of styles in Section~\ref{sec:outfit_gen}.
To introduce the notation, assume that $m$ explicit styles, say $\mathcal{S} \equiv \{s_1, s_2, \ldots, s_m\}$, are defined in an online portal recommending complete outfits to a user. For an outfit $\mathcal{O}_i$ belonging to style $s_k$ (denoted $\mathcal{O}_i \lvert s_k$), we take one of the items within the outfit to be the \emph{anchor item} and the rest as the \emph{query set}. We call this \texttt{<anchor item, query set>} pair a \emph{positive example} of compatibility. A \emph{negative instance} is one where the anchor item is replaced so that it is no longer compatible with the query set.
\vspace{-4mm}
\subsection{Style Encoder Network}\label{sec:senet}
The process of encoding the style of an outfit starts by acknowledging that denoting an outfit as an ordered sequence of items, as done in some recent work \cite{Han:2017aa,Nakamura:2018:OutfitGenSTyleExtractionBiLSTMAE}, is unrealistic. In this paper, we portray an outfit as a set of items, which satisfies two important properties:
(i) items within an outfit are permutation invariant, and
(ii) an outfit is allowed to be of varying length.
This characterization makes the \emph{set transformer} approach \cite{Lee:2019:SetTransformer} an appropriate candidate for our style encoder. It consists of an encoder and a decoder, both of which rely on attention mechanisms to produce a representative output vector.
The idea of representing an individual outfit style by a specific embedding is apt for compatibility training but found to be lacking in the generation context. Since outfit generation hinges on a single parent item, a pre-defined template and a style, we may not be able to pass any reference outfit to the style encoder. To circumvent this problem, we assume that each $\mathcal{O}_i \lvert s_k$ is generated from some parametric continuous probability distribution, thus representing a latent style space. In this paper, we assume this distribution to be Gaussian, although we acknowledge that it can be any other continuous distribution. The parameters of this Gaussian distribution are estimated by the set transformer. In this framework, as can be seen in Figure~\ref{fig:proposed}, the images of an outfit are passed through a pre-trained ResNet18 \cite{He:2016aa} and the corresponding visual feature vectors ($\in {\rm I\!R}^{d_s}$) are fed into the set transformer to provide estimates for the mean vector and covariance matrix (which we assume to be diagonal). To summarise, the set transformer produces a unique Gaussian distribution for each outfit $\mathcal{O}_i \lvert s_k$,
\[ \displaystyle \mathcal{O}_i | s_k \sim \mathcal{N}(\bm{\mu}_{i, s_k}, \bm{\Omega}_{i, s_k}), \, \text{where } \bm{\Omega}_{i, s_k} = \text{diag}(\sigma_{il, s_k}^2),\quad l=1,\ldots, d_s \text{ and } \bm{\mu}_{i, s_k} \in {\rm I\!R}^{d_s}.\]
Here, we additionally impose the restriction that the inferred Gaussian distributions are close to the unit Normal $\mathcal{N}(0, \mathbb{1})$, so that the learnt style space is smooth across the various styles. We achieve this via the KL divergence loss defined in Equation~\eqref{eqn:style_encoder_loss}.
\begin{equation}
\mathcal{L}_{Style} = \text{KL}(\mathcal{N}(\hat{\bm{\mu}}_{i, s_k}, \hat{\bm{\Omega}}_{i, s_k})\, \lvert \lvert \, \mathcal{N}(0, \mathbb{1})) \label{eqn:style_encoder_loss}
\end{equation}
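For diagonal Gaussians, the KL term above has a well-known closed form. The sketch below (NumPy; parameter names are ours, not from our implementation) shows how it can be computed per outfit:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# A standard normal has zero divergence from itself.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # 0.0
```

Minimising this term pulls every per-outfit Gaussian towards $\mathcal{N}(0, \mathbb{1})$, which is what produces the smooth shared latent space in Figure~\ref{fig:kl_diverg_tsne}.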
Figure~\ref{fig:kl_diverg_tsne} demonstrates a t-SNE visualisation of random samples drawn from outfit specific Gaussians for 4 different styles. A common and smooth representation space is formed after introducing the KL-loss even though clusters are maintained. A smooth space is necessary particularly in the generation of outfits with style mixing, as we will see later.
\begin{figure}[h]
\includegraphics[scale=0.16]{images/without_kld1.png}
\includegraphics[scale=0.16]{images/with_kld1.png}
\caption{\small{t-SNE plots of the sample vectors ($\mathbf{s}_{\mathcal{O}_i, s_k}$) for 4 styles (\textcolor{red}{Casual}, \textcolor{blue}{Formal}, \textcolor{yellow}{Summer}, \textcolor{brown}{Party}). The plot on the left is when these vectors are generated without the KL-divergence loss. Existence of a smooth yet identifiable style latent space is evident in the plot on the right when we introduce the loss. Best viewed in colour.}}
\label{fig:kl_diverg_tsne}
\end{figure}
\vspace{-4mm}
The output emanating from the set transformer is passed on to a style classification task. Depending on the specific variation, for an outfit $\mathcal{O}_i \lvert s_k$ we pass either the parameters of the Gaussian ($\theta_{i, s_k} \equiv [\hat{\bm{\mu}}_{i, s_k}, \hat{\bm{\Omega}}_{i, s_k}]$) or a random sample from the Gaussian, $\mathbf{s}_{\mathcal{O}_i, s_k} \sim \mathcal{N}(\hat{\bm{\mu}}_{i, s_k}, \hat{\bm{\Omega}}_{i, s_k})$, to the style classifier. We elaborate on the exact process in Section~\ref{sec:StyleClassifier}.
\vspace{-4mm}
\subsection{Style classifier}\label{sec:StyleClassifier}
The SE-Net output vector is passed as a feature to an MLP used to classify the style of the outfit. This supervision ensures that SE-Net captures specific and correct information about the outfit style. The style classification module solves an $m$-class classification problem using an MLP with $N$ layers. The classification loss is thus,
\begin{equation}
\mathcal{L}_{\text{classif}} = -\sum_{k=1}^{m}y_{s_k} \log\big(\hat{p}(s_k \mid O_i)\big) \label{eqn:style-classification-loss}
\end{equation}
where $y_{s_k} = 1$ if outfit $O_i$ has style $s_k$ (and $0$ otherwise), and $\hat{p}(s_k \mid O_i) = \text{MLP}(\mathbf{s}_{\mathcal{O}_i, s_k} \text{ or } \theta_{i, s_k})$.
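A minimal sketch of this step, assuming the classifier input is a reparameterised sample from the outfit Gaussian (function and variable names are illustrative, not from our implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_style(mu, log_var):
    """Reparameterised draw s ~ N(mu, diag(exp(log_var))),
    the classifier input for the sample-based SATCORec variants."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def classification_loss(probs, true_style):
    """Cross-entropy of Equation (2): negative log-probability
    assigned to the true style index."""
    return -np.log(probs[true_style])

# A confident correct prediction incurs a smaller loss.
probs = np.array([0.1, 0.7, 0.2])
assert classification_loss(probs, 1) < classification_loss(probs, 2)
```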
The SE-Net and style classifier are trained jointly as a separate module. Post training, we extract a vector ($\mathbf{r}_{\mathcal{O}_i, s_k}$) from this module as the style representation of the outfit $\mathcal{O}_i \lvert s_k$, to be passed as a feature to SCA Net. Further, a \emph{global style} representation for a style $s_k$ is given by a pooled Gaussian distribution, aggregating over the parameters of all outfits belonging to that style: $\hat{\hat{\bm{\mu}}}_{s_k} = \frac{1}{n_{s_k}}\sum_{i=1}^{n_{s_k}}\hat{\bm{\mu}}_{i, s_k}$ and $\hat{\hat{\bm{\Omega}}}_{s_k} =\text{diag}(\hat{\hat{\sigma}}_{l}^2)$ where $\hat{\hat{\sigma}}_{l}^2 = \frac{1}{n_{s_k}^2}\sum_{i=1}^{n_{s_k}}\hat{\sigma}_{il}^2$. These global distribution parameters will be used again in the outfit generation step. Equation~\eqref{eqn:style_representation} shows the generic form of the style representation vector,
\begin{equation}
\mathbf{r}_{\mathcal{O}_i, s_k} \equiv \left[\lambda_1\, \mathbf{s}_{\mathcal{O}_i, s_k} + \lambda_2\,\hat{\bm{\mu}}_{i, s_k} + \lambda_4\,\hat{\hat{\bm{\mu}}}_{s_k}, \lambda_3\hat{\bm{\Omega}}_{i, s_k} + \lambda_5\hat{\hat{\bm{\Omega}}}_{s_k}\right]. \label{eqn:style_representation}
\end{equation}
SATCORec variations, defined in Table~\ref{tab:SATCORec_variations}, are created by setting values for each $\lambda_j$. Also note that, we pass $\mathbf{s}_{\mathcal{O}_i, s_k}$ to the style classifier for \emph{SATCORec-r}, \emph{SATCORec-($p_m$+$g_m$)} and \emph{SATCORec-(r+$g_m$)} and $\theta_{i, s_k}$ for the rest. It is possible to set $\lambda$ as unknown and learn it.
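The construction in Equation~\eqref{eqn:style_representation} can be sketched as follows; this is an illustrative NumPy rendering (names are ours), with the $\lambda$'s passed explicitly:

```python
import numpy as np

def style_representation(lam, s, mu, omega, mu_g, omega_g):
    """Equation (3): r = [l1*s + l2*mu + l4*mu_g , l3*omega + l5*omega_g].
    lam = (l1, ..., l5); all other inputs are 1-D arrays of equal length
    (s: random sample, mu/omega: outfit Gaussian, mu_g/omega_g: pooled)."""
    l1, l2, l3, l4, l5 = lam
    mean_part = l1 * s + l2 * mu + l4 * mu_g
    var_part = l3 * omega + l5 * omega_g
    return np.concatenate([mean_part, var_part])

# SATCORec-p uses (0, 1, 1, 0, 0): the per-outfit Gaussian parameters only.
d = 3
r = style_representation((0, 1, 1, 0, 0),
                         s=np.zeros(d), mu=np.ones(d), omega=2 * np.ones(d),
                         mu_g=np.zeros(d), omega_g=np.zeros(d))
```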
\vspace{-4mm}
\begin{table}[h]
\caption{\small{Variations of SATCORec that have been experimented with.}} \label{tab:SATCORec_variations}
\footnotesize
\begin{tabular}{|l|c|c|c|c|c|l|c|c|c|c|c|}
\hline
& $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & $\lambda_4$ & $\lambda_5$
& & $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & $\lambda_4$ & $\lambda_5$ \\ \hline
SATCORec-r & 1 & 0 & 0 & 0 & 0 &
SATCORec-p & 0 & 1 & 1 & 0 & 0 \\ \hline
SATCORec-($p_m$+$g_m$) & 0 & $\lambda$ & 0 & 1 & 0 &
SATCORec-(p+g) & 0 & $\lambda$ & $\lambda$ & 1 & 1 \\ \hline
SATCORec-(r+$g_m$) & $\lambda$ & 0 & 0 & 1
& 0 & \multicolumn{6}{c|}{}
\\ \hline
\end{tabular}
\end{table}
\vspace{-6mm}
\subsection{SCA Net}\label{sec:scaNet}
We have extended the CSA-Net framework developed by Lin et al.\ in \cite{Lin:2020:AmazonFOCIR} to incorporate the concept of style while learning item-item compatibility. In \cite{Lin:2020:AmazonFOCIR}, the image of an anchor item ($I^a$) within an outfit is passed through a ResNet18, which acts as the CNN backbone. The resulting embedding vector ($\mathbf{x}$) of size 64 is multiplied by $\tau$ learnt masks ($\mathbf{m}_1, \ldots, \mathbf{m}_\tau$) that help to learn the subspaces. The anchor item category ($c^a$) and a query set item (referred to as \emph{target}) category ($c^t$) are consumed as one-hot encoded vectors to estimate a set of subspace attention weights ($\omega_1, \ldots, \omega_\tau$). A weighted average of the masked embeddings gives the final embedding of the anchor item.
We simply extend the CSA-Net algorithm by providing the style representation ($\mathbf{r}_{\mathcal{O}_i, s_k}$) from SE-Net as an additional input in the estimation of attention weights. Thus, we define the final embedding as,
\[ \displaystyle f_{\mathcal{O}_i, a}^{s_k} = \psi(I^a, c^a, c^t, \mathbf{r}_{\mathcal{O}_i, s_k}) = \sum_{j=1}^\tau (\mathbf{x} \odot \mathbf{m}_j) \times \omega_{j, \mathbf{r}_{\mathcal{O}_i, s_k}}.\]
Here, $\psi(\cdot)$ represents the SCA network.
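A minimal sketch of the final-embedding computation (names are ours; in the actual model the masks and attention weights are learnt, the latter conditioned on categories and the style representation):

```python
import numpy as np

def sca_embedding(x, masks, weights):
    """Attention-weighted sum of masked embeddings:
    f = sum_j (x * m_j) * w_j, with x: (d,), masks: (tau, d),
    weights: (tau,)."""
    return np.sum((x[None, :] * masks) * weights[:, None], axis=0)

x = np.ones(4)
masks = np.stack([np.ones(4), 2 * np.ones(4)])
weights = np.array([0.25, 0.75])
f = sca_embedding(x, masks, weights)  # 0.25*1 + 0.75*2 = 1.75 per dimension
```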
The SCA net uses the triplet loss for learning compatibility, similar to some current methods \cite{Tan:2019:LSCWES,Vasileva:2018:LTAE}. As in CSA-Net, we denote the average distance between a positive item and the remaining items in the outfit by $D_p^{s_k}$. The multiple distances corresponding to the negatives are aggregated as $D_N^{s_k}$. The compatibility loss conditional on style, with margin $\gamma$, is thus defined as
\begin{equation}
\mathcal{L}_{compat} = \max(0, D_p^{s_k} - D_N^{s_k} + \gamma). \label{eqn:compatibility_loss_1}
\end{equation}
We introduce one more loss function to penalise the case where the wrong style is specified for an outfit. Given $\mathcal{O}_i | s_k$, we pass the style representation vector corresponding to a different style $s_q$, compute the same distance metrics as above, and use them in the following loss function:
\begin{equation}
\mathcal{L}_{stylecompat} = \max(0, D_p^{s_k} - D_p^{s_q} + \gamma). \label{eqn:compatibility_loss_2}
\end{equation}
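The two hinge losses can be sketched as follows (plain Python; the distance values would come from the learnt embeddings, and 0.3 is the margin used in our experiments):

```python
def triplet_losses(d_pos, d_neg, d_pos_wrong_style, margin=0.3):
    """Compatibility and style-compatibility hinge losses.
    d_pos: avg distance under the correct style;
    d_neg: aggregated distance to negatives;
    d_pos_wrong_style: avg positive distance under a wrong style."""
    l_compat = max(0.0, d_pos - d_neg + margin)
    l_stylecompat = max(0.0, d_pos - d_pos_wrong_style + margin)
    return l_compat, l_stylecompat

# Well-separated embeddings incur zero loss.
assert triplet_losses(1.0, 2.0, 2.0) == (0.0, 0.0)
```

Both terms push the positive item closer (in the style-conditioned embedding space) than either a negative item or the same item scored under the wrong style.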
The overall loss is defined as the weighted sum of these four individual losses:
\begin{equation*}
\mathcal{L}_{overall} = \sum_p \alpha_p\, \mathcal{L}_p, \quad p \in \{\text{Style}, \text{classif}, \text{compat}, \text{stylecompat}\}. \label{eqn:total_loss}
\end{equation*}
\vspace{-6mm}
\subsection{Outfit generation}\label{sec:outfit_gen}
A globally optimal outfit generation task is non-trivial since it is infeasible to examine all possible combinations. We provide an approximate solution based on the well-known \emph{beam search} method \cite{Zhang:2020aa}. Note that to create an outfit for a user based on a chosen parent item, a given template and a specific style, we need a style representation vector to rank compatible items. If a reference outfit is present, this job is trivial. Otherwise, we take the pooled parameters to be representative of the style for all variations within SE-Net. To generate an outfit based on a mix of styles, we simply pass a linear combination of style representation vectors ($\alpha \mathbf{r}_{\mathcal{O}_i, s_k} + \beta \mathbf{r}_{\mathcal{O}_i, s_l}$) and rank compatible items.
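A simplified sketch of the beam search over a category template; here \texttt{score\_fn} stands in for the style-conditional compatibility scorer, and all names are illustrative:

```python
def beam_search_outfit(anchor, template, score_fn, candidates, beam=3):
    """Fill the template's category slots one at a time, keeping the
    `beam` best partial outfits at each step. `candidates[cat]` lists
    the items available for category `cat`; `score_fn(outfit)` returns
    a (style-conditional) compatibility score for a partial outfit."""
    beams = [([anchor], 0.0)]
    for cat in template:
        expanded = [(items + [c], score_fn(items + [c]))
                    for items, _ in beams
                    for c in candidates[cat]]
        beams = sorted(expanded, key=lambda t: -t[1])[:beam]
    return beams[0][0]

# Toy example: items are numbers, score is their sum.
best = beam_search_outfit(0, ["bottom", "foot"], sum,
                          {"bottom": [1, 2], "foot": [3, 4]}, beam=2)
```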
\section{Experimental Evaluation}
In this section, we describe the dataset, metrics, baselines and implementation details, and present results testing both the compatibility and the style-preservation power of the algorithms. \\
\noindent{\bf Dataset creation and metrics:}
We have annotated $\sim$100K outfits in two stages. {\bf First}, we worked with fashion experts to curate approximately 5000 outfits with 8 style annotations, namely [\emph{party}, \emph{outdoor}, \emph{summer}, \emph{formal}, \emph{athleisure}, \emph{winter}, \emph{casual}, \emph{celeb}]. Each annotated outfit consists of the images of its individual items and its style. There are 6 high-level item categories: [\texttt{top-wear}, \texttt{bottom-wear}, \texttt{foot-wear}, \texttt{accessory}, \texttt{clothing-accessory}, \texttt{wholebody}]. In the {\bf second} stage, we augmented the outfit set using a simple attribute-based similarity algorithm, using attributes like brand, colour, pattern, sleeve, etc.\ to get the top-$k$ similar products for an item in an outfit. Given an outfit, we removed one item from the original outfit and gave approximately the top-10 similar candidates as replacement options to human taggers for verification of the compatibility and style of the new outfit. We repeated this for every item in an outfit and for all outfits in the outfit set. This operation expanded the data to $\sim$100K outfits, which are then divided into train, test and validation splits in a 70:20:10 ratio. The overall frequency of each style type is given in Table~\ref{tab:outfit_stats}.
Fill-in-the-blank (FITB) \cite{Vasileva:2018:LTAE} and Compatibility AU-ROC are well known metrics used to evaluate an outfit compatibility model \cite{Han:2017aa,McAuley:2015:StylesSubstitutes}. Both these approaches involve creating negative items corresponding to each item of an outfit. To test performance at various levels of difficulty, we generate two types of negative items, \emph{soft negatives} where negative sampling is done from existing categories; and \emph{hard negatives} where we sample negatives from more fine-grained categories such as tops, t-shirts, heels etc. For each outfit, 5 replications for negative sampling are done and the mean metric values are reported. Note that the fine-grained category information is not used for training.
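The FITB metric reduces to a top-1 accuracy over candidate sets; a minimal sketch (names are ours):

```python
def fitb_accuracy(questions):
    """FITB: each question is (candidate_scores, true_index); the model
    is correct when the ground-truth item scores highest. The paper
    reports the mean over 5 negative-sampling replications."""
    correct = sum(
        1 for scores, true_idx in questions
        if max(range(len(scores)), key=scores.__getitem__) == true_idx
    )
    return correct / len(questions)

# First question answered correctly, second not -> accuracy 0.5.
acc = fitb_accuracy([([0.9, 0.1, 0.2], 0), ([0.3, 0.8, 0.1], 0)])
```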
\vspace{-4mm}
\begin{table}[h]
\caption{\small{Distribution of curated outfits across different styles}} \label{tab:outfit_stats}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
& \textbf{Party} & \textbf{Outdoor} & \textbf{Summer} & \textbf{Formal} & \textbf{Athleisure} & \textbf{Winter} & \textbf{Casual} & \textbf{Celebrity} & \textbf{Total} \\ \hline
\# of Train Outfits & 8183 & 6280 & 7061 & 5136 & 16232 & 16028 & 5194 & 5424 & 69538 \\ \hline
\# of Valid Outfits & 1174 & 1001 & 1204 & 840 & 1981 & 2135 & 791 & 808 & 9934 \\ \hline
\# of Test Outfits & 3018 & 1937 & 2551 & 1648 & 2506 & 4695 & 2034 & 1480 & 19869 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\vspace{-4mm}
\noindent{\bf Implementation details:}
We used ResNet18 as the CNN backbone to extract visual features in both the modules of SATCORec. We do not train the entire ResNet18 but instead only the last convolutional block and an additional fully connected layer. Embeddings are of size 64 as is conventional in other state-of-the-art compatibility learning methods \cite{Lin:2020:AmazonFOCIR,Vasileva:2018:LTAE}.
Inside SE-Net, we use the SAB Set Transformer \cite{Lee:2019:SetTransformer} with hidden dimension $d_z = 32$ and 2 heads, and 2 fully connected MLP layers for classification. An Adam optimizer \cite{adam} with mini-batches of 128 outfits and a learning rate of $5 \times 10^{-5}$ is used. Note that we train and freeze the SE-Net module separately. We use the Adam optimizer again to train SCA Net with a mini-batch size of 32 triplets, a learning rate of $1 \times 10^{-5}$ and 5 subspaces. The \emph{attention network} first transforms the concatenated one-hot-encoded category vectors and the style representation to 32 dimensions each using a single fully connected layer, then concatenates the two and passes the result to 2 fully connected layers which output the 5 subspace attention weights. The margin in the triplet loss was set to 0.3, and the weights for $\mathcal{L}_{compat}$, $\mathcal{L}_{stylecompat}$ and $\mathcal{L}_{Style}$ were set to $1, 0.5$ and $0.05$ respectively.
\noindent{\bf Baselines: }
We compare the performance of SATCORec against that of state-of-the-art techniques on multiple metrics to demonstrate its efficacy in style-conditional outfit generation and compatibility learning. Note that we use the same CNN backbone and embedding size for all the baselines. Additionally, the same 6 categories have been used for all the methods, even for those requiring fine-grained category information. The following are used as baselines: (a) \textbf{CSA-Net \cite{Lin:2020:AmazonFOCIR}}, (b) \textbf{Type Aware \cite{Vasileva:2018:LTAE}}, (c) \textbf{TransNFCM \cite{yang2019transnfcm}}, (d) \textbf{Theme Matters \cite{Lai:2020:ThemeMatters}}, (e) \textbf{BPR-DAE \cite{Song:2017:BPR-DAE}}.
For each of the methods we follow the architecture parameters specified in the corresponding paper. Except for {\bf TypeAware}, whose code was available, we have implemented all of the baselines from scratch. For {\bf Theme Matters} we took the TypeAware code and built upon it as defined in the paper. \textbf{BPR-DAE} is specified for only 2 categories, and we extend it to outfits with multiple items.
\vspace{-2mm}
\begin{table}[h]
\caption{\small{Comparison of compatibility learning for the baselines and SATCORec variations. We compute FITB and compatibility AU-ROC with hard and soft negatives separately. The style entropy for each method is also tabulated. Using the parameters of, or a random sample from, the outfit-specific style Gaussian is clearly the leader with respect to compatibility measures.}}
\label{tab:fitb-compatauc-metrics}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{FITB}} & \multicolumn{2}{c|}{\textbf{Compat. AU-ROC}} & \multirow{2}{*}{\textbf{Entropy}} \\
\cline{2-5}
& \textbf{HN } & \textbf{SN } & \textbf{HN} & \textbf{SN} & \\
\hline
TypeAware & $30.7 \pm 0.17$ & $34.85 \pm 0.25$ & $52.62 \pm 0.06$ & $55.51 \pm 0.21$ & 0.49\\ \hline
BPR-DAE & $31.16 \pm 0.15$ & $31.21 \pm 0.12$ & $55.83 \pm 0.09$ & $55.76 \pm 0.08$ & 0.43\\ \hline
TransNFCM & $31.53 \pm 0.17$ & $36.47 \pm 0.33$ & $51.84 \pm 0.07$ & $57.78 \pm 0.08$ & 0.50\\ \hline
Theme Matters & $38.53 \pm 0.17$ & $63.2 \pm 0.21$ & $85.4 \pm 0.15$ & $93.85 \pm 0.1$ & 0.61\\ \hline
CSA-Net & $53.14 \pm 0.17$ & $67.05 \pm 0.25$ & $94.42 \pm 0.03$ & $96.3 \pm 0.03$ & 0.48 \\ \hline
SATCORec-r & ${\bf 53.32} \pm 0.18$ & $66.63 \pm 0.15$ & $94.47 \pm 0.02$ & $95.99 \pm 0.04$ & ${\bf 1.09}$ \\ \hline
SATCORec-p & $52.06\pm 0.10$ & ${\bf 67.31} \pm 0.14$ & ${\bf 94.78} \pm 0.02$ & ${\bf 96.47}\pm 0.02$ & $0.97$ \\ \hline
SATCORec-(p+g) & $46.56\pm 0.05$ & $61.03 \pm 0.17$ & $88.41\pm 0.02$ & $90.10 \pm 0.02$ & $0.78$ \\ \hline
SATCORec-(r+$g_m$) & $47.61 \pm 0.12$ & $60.70\pm 0.06$ & $88.88\pm 0.06$ & $91.34\pm0.02$ & $0.12$ \\ \hline
SATCORec-($p_m$+$g_m$) & $49.73\pm0.05$& $63.02\pm 0.11$& $90.96\pm 0.05$& $92.25 \pm 0.02$ & $0.63$\\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\vspace{-8mm}
\subsection{Compatibility Experiment:} \label{sec:comp-exp}
FITB and compatibility AU-ROC are computed separately on the hard and soft negative datasets for the variations of SATCORec and the baselines, and presented in Table~\ref{tab:fitb-compatauc-metrics}. A preliminary sweep of the results clearly separates Theme Matters, CSA-Net and the SATCORec variations from the rest. CSA-Net is based on a subspace attention mechanism, the state of the art in learning outfit item compatibility, and SATCORec makes use of the same framework. Notably, Theme Matters performs better than TypeAware even though both share the same compatibility learning framework; this performance bump is due to the incorporation of a complete outfit loss in the learning \cite{Lin:2020:AmazonFOCIR}.
SATCORec-p is the best performing model in the group, winning 3 out of 4 cohorts. We believe the outfit-level Gaussian parameters capture sufficient information about the parent style of the outfit as well as the variation within it. A random sample from this space also captures the basic information of a style category, which explains the healthy performance of SATCORec-r. The other variations do not perform well, probably because they ignore individual or overall uncertainty.
\vspace{-4mm}
\subsection{Style Experiments}
\label{sec:style-exp}
Given that our methods show better performance than others in compatibility learning, we now compare their performance vis-\`a-vis style. We look at two specific style comparison metrics and discuss a characteristic that our method has but style-independent methods lack. Statistical comparisons for our metrics and further qualitative results will be added over time at this link: \url{https://harshm121.github.io/project_pages/satco_rec.html}.
\vspace{-4mm}
\subsubsection{Style Entropy:}
A user would get maximum utility if her top-wear can be part of outfits belonging to a large number of style categories, i.e.\ the portal is able to recommend from a wide range of styles. Say, given an anchor item, we want to recommend a total of $n$ outfits from $k$ styles. SATCORec, using the style handle, can produce a ranked list of outfits conditioned on each of the $k$ styles, and we choose the top $\floor{n/k}{}$ or $\ceil{n/k}{}$ outfits from each style-specific list. Style-independent methods pick their top-$n$ outfits as per the general compatibility rank, oblivious to the underlying styles. We use the entropy of the style distribution to compare the final lists; a higher entropy means that the compatibility framework is not restricted to a single or small number of styles. For this, we select the list of all those outfits which have the same anchor item but belong to different styles. From this list, we pick those instances where SATCORec is able to correctly predict the items of an outfit given a style. We then choose the top outfit from each style, thus forcing $n=k=6$, and present the result in Table~\ref{tab:fitb-compatauc-metrics}, column \emph{Entropy}. Again, SATCORec-r (slightly better) and SATCORec-p outperform all other methods, implying that they are able to recommend outfits corresponding to most of the styles feasible for the anchor item. On manual inspection we also find that style-independent methods are biased towards the most prevalent style in the training data set. Henceforth, we will consider only the top performing variations, SATCORec-r and SATCORec-p.
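The entropy computation on the style labels of a recommended list can be sketched as (illustrative Python):

```python
import math
from collections import Counter

def style_entropy(styles):
    """Shannon entropy (in nats) of the style labels in a recommended
    top-n list; higher entropy means broader style coverage."""
    counts = Counter(styles)
    n = len(styles)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A single-style list has zero entropy; a uniform list over k styles
# attains the maximum, log(k).
assert style_entropy(["casual"] * 6) == 0.0
```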
\vspace{-4mm}
\subsubsection{Style-specific selection accuracy and ranking:} SATCORec-r is also superior in other metrics we compute, such as MRR and average rank. Table~\ref{tab:child-selection-ranking-results} - \emph{Metric} captures this for the three style-dependent methods. For a given method, we take each outfit and calculate the compatibility scores conditional on all the available styles. We record the outfit rank corresponding to the style it actually belongs to, and compute the metrics based on these ranks. Fig.~\ref{fig:overview} presents an example with an anchor top-wear and the style-conditional compatibility scores for each of the outfits comprising only a bottom-wear. We see that the scores are highest (top ranked) for the style to which the outfit actually belongs.
\vspace{-4mm}
\subsubsection{Other Metrics:} We reuse the list of outfits from the style-entropy calculation to understand the efficacy of the algorithms. Recall that this list has outfits from different styles but a common anchor item. For each such anchor item, conditional on the style, we check the top-1 accuracy of selecting the right child item in the outfit. To understand \emph{accuracy}, refer again to Fig.~\ref{fig:overview}, where the accuracy for Bottomwear1 equals 1 since the inferred rank corresponding to the actual style is the lowest. Table~\ref{tab:child-selection-ranking-results} - \emph{Parent-Child} shows the results for various parent-child category combinations. Here SATCORec-p performs much better, although it was behind SATCORec-r on the column-wise ranking metrics.
\vspace{-4mm}
\begin{table}[h]
\caption{\small{The upper section of the table contains metrics on outfit ranks conditional on style while the lower section provides the percentage of correct selection of compatible item for anchor items with outfits across various styles.}}
\label{tab:child-selection-ranking-results}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
& & \textbf{SATCORec-r} & \textbf{SATCORec-p}& \textbf{Theme Matters}\\ \hline
\multirow{4}{*}{\textbf{Metric}} & MRR of correct style & \textbf{0.8844} & 0.7676 & 0.6213 \\
\cline{2-5}
& Correct style on 1st rank & \textbf{80.94} & 59.36 & 42.37 \\
\cline{2-5}
& Correct style in top 3 ranks & 95.00 & \textbf{95.10} & 76.51 \\
\cline{2-5}
& Avg rank of the correct style & \textbf{1.4} & 1.7 & 2.5\\ \hhline{|=|=|=|=|=|}
\multirow{4}{*}{\textbf{Parent-Child}} & Topwear - Bottomwear & 66.74 & \textbf{77.33} & 50.32 \\
\cline{2-5}
& Bottomwear - Topwear & 72.02 & \textbf{86.65} & 57.92 \\
\cline{2-5}
& Topwear - Footwear & 65.79 & \textbf{75.97} & 59.73 \\
\cline{2-5}
& Bottomwear - Footwear & 69.81 & \textbf{80.13} & 62.79\\\hline
\end{tabular}
}
\end{table}
\vspace{-4mm}
\noindent\textbf{Style-Specific fine-grained category selection in outfit generation:}
For each style, there can be multiple child items which may match an anchor item; however, a good recommendation system would mostly output the items which differentiate the outfit from other styles. To check this, for each style we determine the most discriminating child items \cite{tf-idf} in terms of fine-grained categories, e.g.\ \emph{skirt} is a fine-grained bottom-wear category that most prominently shapes a casual style. Note that this is different from the most popular item across styles, for example \emph{jeans}. We posit that a superior algorithm would more frequently output such discriminative categories as a likely match for a style. Style-specific and overall results are shown in Table~\ref{tab:generated_outfit_analysis}; in almost all cases, SATCORec chooses discriminative fine-grained categories a significantly higher number of times than the other baselines.
\begin{table}[h]
\caption{\small{Comparison of style-specific fine-grained categories chosen by different methods.}}
\label{tab:generated_outfit_analysis}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\textbf{Method} & \textbf{Party} & \textbf{Outdoor} & \textbf{Summer} & \textbf{Formal} & \textbf{Athleisure} & \textbf{Winter} & \textbf{Casual} & \textbf{Celeb} & \textbf{Overall} \\ \hline
TypeAware & 28.33 & 29.22 & 10.24 & 33.54 & 19.52 & 18.10 & 2.67 & 15.92 & 19.30 \\ \hline
BPR-DAE & 28.19 & 17.07 & 17.74 & 36.26 & 31.64 & 29.05 & 23.42 & 19.05 & 25.64\\
\hline
TransNFCM & 12.78 & 25.72 & 3.09 & 23.84 & 30.01 & 21.21 & 0.00 & 27.86 & 18.36 \\ \hline
CSA-Net & 34.63 & 26.79 & 13.98 & 35.44 & 28.69 & 26.94 & 11.00 & 27.11 & 25.38 \\ \hline
Theme Matters & 34.26 & 24.20 & 7.48 & 24.68 & 14.21 & 30.05 & 18.00 & 9.95 & 21.39 \\ \hline
SATCORec-r & \textbf{50.56} & \textbf{32.12} & 19.84 & 45.78 & \textbf{38.65} & 39.31 & 18.17 & 25.62 & \textbf{34.27} \\ \hline
SATCORec-p & 38.59 & 21.89 & \textbf{23.06} & \textbf{47.26} & 37.18 & \textbf{40.92} & \textbf{24.09} & \textbf{28.09} & 32.96\\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\noindent\textbf{Blending of Styles:}
We have also checked the ability of SATCORec to generate outfits that are linear combinations of different styles. We observe a smooth blending of the styles; moreover, a higher (lower) weight of a particular style in the linear combination results in more (fewer) items resembling that style in the generated outfits (Figure \ref{fig:transition_outfit_example_1}).
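Concretely, writing $\mathbf{s}_a$ and $\mathbf{s}_b$ for the style representations of the two styles, the blended style supplied to the model is the convex combination
\begin{equation*}
\mathbf{s}_{\mathrm{mix}} = \lambda\,\mathbf{s}_a + (1-\lambda)\,\mathbf{s}_b, \qquad \lambda \in [0,1],
\end{equation*}
so increasing $\lambda$ moves the generated outfit smoothly away from style $b$ and towards style $a$.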
We will provide a web-based app along with the final version of the paper, if accepted, where a user will be able to explore different such combinations.
\vspace{-4mm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\linewidth]{images/outfit_transition_example1.png}
\caption{\small{Here we demonstrate the ability of our method to mix styles in outfit generation. Given the anchor item from top-wear, the top and bottom rows correspond to outfits generated from the two very separate styles: \emph{Party} and \emph{Formal}. The outfits in between them are generated by passing a weighted style vector for each of those two styles, thereby creating a nice blend.
}}
\label{fig:transition_outfit_example_1}
\end{figure}
\vspace{-10mm}
\section{Conclusion}
The novelty of the paper lies in developing a Style-Attention-based Compatible Outfit recommendation and generation framework, SATCORec,
utilizing high-level categories. SATCORec employs a Style-Compatibility-Attention Network~-~SCA Net and a Style Encoder Network~-~SE-Net. The SE-Net uses the Set Transformer to extract outfit style features, which is used to provide style-specific sub-space attention to individual items. The extensive style experiments establish the power of SATCORec in recommending with high accuracy a broader collection of compatible outfits across different styles to users.
More interestingly, SATCORec chooses items which can make a pronounced style statement.
Since in this paper we have focused on compatibility and employed a traditional beam search for outfit generation, an immediate future work would be to explore more sophisticated generation algorithms.
\bibliographystyle{splncs04}
Q: Is there a way to add individual files to Ubuntu One, not just directories? I have a lot of files in a lot of different places on my computer that I would like to be able to access from Ubuntu One. I just don't want to add all the directories. Is there a way to add just one file from a directory to U1 without adding the whole directory, or could I persuade the code-wizards to make it possible?
A: You can probably create a link in your Ubuntu One folder that refers to the file in the other folder, if you want to avoid copying the file.
A: Same question with the right answer here:
Does Ubuntu One follow symlinks if synchronizing a folder?
In Ubuntu One v2.0.0, soft links are ignored and hard links are treated unreliably (completely ignored for me, but some people have had success).
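The difference between the two kinds of link matters here. A quick Python sketch (with throwaway temp paths, not real Ubuntu One folders) shows that a hard link shares the original file's inode, while a soft link is just a pointer:

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "notes.txt")
with open(orig, "w") as f:
    f.write("hello")

hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")
os.link(orig, hard)     # hard link: another name for the same inode
os.symlink(orig, soft)  # soft link: a separate file that stores a path

same_inode = os.stat(hard).st_ino == os.stat(orig).st_ino
print(same_inode, os.path.islink(soft))  # True True
```

Sync tools that ignore symlinks still see a hard link as an ordinary file, which is why the two behave differently when placed in the Ubuntu One folder.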
A: Well... copying the file. Why didn't I think of that? That might be a possible solution. I was thinking along the lines of doing the same thing to files as you do to directories to add them to U1. But if that's not possible, copying might do the trick...
Q: sed replace two double quotes and keep text Text to be processed :
""text""
"text"
""
output desired :
"text"
"text"
""
Tried with :
echo -e '""text""\n"text"\n""' | sed -e 's/"".*""/".*"/g'
But obviously no luck.
Cheers,
A: Based on your sample input, you could just use this:
sed '/^""$/! s/""/"/g' file
On lines which don't only contain two double quotes, globally replace all pairs of double quotes with one double quote.
A: $ sed 's/"\("[^"]*"\)"/\1/' file
"text"
"text"
""
A: You want to use a backreference (something that refers to a previously matched part) in your sed command :
echo -e '""text""\n"text"\n""' | sed -E -e 's/""(.*)""/"\1"/g'
Here are the modifications I did:

* I grouped what was inside the double-quotes in the matching pattern with (...)
* I referenced that matching group in the replacement pattern with \1
* I told sed to use -Extended regex so I wouldn't have to escape the grouping parentheses
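The same capture-group idea can be sanity-checked outside sed; here is a small Python sketch using re.sub with the equivalent pattern:

```python
import re

lines = ['""text""', '"text"', '""']
# The group (.*) captures whatever sits between the doubled quotes,
# and \1 puts it back surrounded by a single pair. A lone "" has no
# outer pair of pairs, so it is left untouched.
fixed = [re.sub(r'""(.*)""', r'"\1"', line) for line in lines]
print(fixed)  # ['"text"', '"text"', '""']
```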
A: Could you please try following awk command and let me know if this helps you.
awk -v s1="\"" '!/^\"\"$/{gsub(/\"\"/,s1)} 1' Input_file
A: Another approach
cat file | sed 's/"//g' | sed 's/^/"/g' | sed 's/$/"/g'
Details
sed 's/"//g' remove all double quote
sed 's/^/"/g' put a double quote at the beginning of each line
sed 's/$/"/g' put a double quote at the end of each line
Note
This approach will work only if there is one word per line, like in your example.
A: The answer by Aaron is also fine, but back-referencing is generally costlier. Why not use a simpler command:
echo -e '""text""\n"text"\n""' | sed 's/""*/"/g'
Regex to replace more than one double quote to single one.
A: echo -e '""text""\n"text"\n""' |awk '/""text""/{gsub(/""/,"\42")}1'
"text"
"text"
""
Q: apportable: No module named argparse error on Mac I just installed Apportable from here: http://www.apportable.com/users/1928
Now, after installing successfully, it gives this error:
Traceback (most recent call last):
File "/Users/macintosh/.apportable/SDK/bin/apportable", line 3, in
import argparse
ImportError: No module named argparse
I understand it's an import issue, but I am not able to resolve it. Can anyone help with this? I have tried this many times and still get the same error. What could be the possible resolution?
Please help!
A: You may have an old python. You should have version 2.7+:
$ python --version
Python 2.7.2
Or old MacOS X. Apportable requires Mac OS X 10.7+.
Funnymen: A Novel
Ted Heller
SIGMUND "ZIGGY" BLISSMAN isn't the best-looking, sanest boy in the world. Far, far from it. But this misfit child of a failed husband-and-wife vaudeville team has one (and only one) thing going for him: He can crack people up merely by batting his eyelashes.

And Vittorio "Vic" Fontana, the son of a fisherman, is a fraud. Barely able to carry a tune or even stay awake while attempting to, the indolent baritone (if that's what he is) has one thing going for him: Women love to look at him.

On their own, they're failures. But on one summer night in the Catskills, they step onstage and together become the funniest men -- and the hottest act -- in America.

"Funnymen" is the wildly inventive story of Fountain and Bliss, the comedy duo that delighted America in the 1940s and '50s. Conceived as a fictional oral biography and filled with more than seventy memorable characters, "Funnymen" details the extraordinary careers of two men whose professional success is never matched in their personal lives. The two men fight constantly with their managers, their wives, their children, their mistresses, and those responsible for their success: each other. The stories recounted about Vic and Ziggy -- and the truths Heller reveals about human ambition, egotism, and friendship -- make "Funnymen" a wild ride of a novel that is also a rare and imaginative masterpiece of storytelling.
#ifndef COMPONENTS_UI_DEVTOOLS_VIEWS_UI_ELEMENT_WITH_METADATA_H_
#define COMPONENTS_UI_DEVTOOLS_VIEWS_UI_ELEMENT_WITH_METADATA_H_

#include <string>
#include <vector>

#include "components/ui_devtools/ui_element.h"

namespace ui {
class Layer;
namespace metadata {
class ClassMetaData;
}  // namespace metadata
}  // namespace ui

namespace ui_devtools {
class UIElementWithMetaData : public UIElement {
public:
UIElementWithMetaData(const UIElementWithMetaData&) = delete;
UIElementWithMetaData& operator=(const UIElementWithMetaData&) = delete;
~UIElementWithMetaData() override;
// UIElement:
std::vector<UIElement::ClassProperties> GetCustomPropertiesForMatchedStyle()
const override;
void GetVisible(bool* visible) const override;
void SetVisible(bool visible) override;
bool SetPropertiesFromString(const std::string& text) override;
void InitSources() override;
protected:
UIElementWithMetaData(const UIElementType type,
UIElementDelegate* delegate,
UIElement* parent);
// Returns the metadata for the class instance type for this specific element.
virtual ui::metadata::ClassMetaData* GetClassMetaData() const = 0;
// Returns an opaque pointer for the actual instance which this element
// represents.
virtual void* GetClassInstance() const = 0;
// Returns the layer for the given element if one exists. Returns null if no
// layer is currently available.
virtual ui::Layer* GetLayer() const;
};
} // namespace ui_devtools
#endif // COMPONENTS_UI_DEVTOOLS_VIEWS_UI_ELEMENT_WITH_METADATA_H_
I know how important it is to find the perfect piece of jewellery that matches your favorite dress, or a piece made with your favorite gemstone, or even to give your loved one the perfect gift she has always dreamed of!
If you can't find the perfect item, please do contact me for a custom order and a quote. I love custom orders, so if you have any thoughts or ideas I would be more than happy to discuss them with you; just get in contact!
Planning a wedding is hard and stressful, and so is choosing your wedding favours. Whether you want to give a little pressie to the bridesmaids or something special to your guests, I am here to listen to your ideas. I am more than happy to help you choose the right colours to match your set, and to offer an inside view of the bead options. We can decide together whether you want Swarovski crystals, pearls or something else. Do not hesitate to contact me for professional advice and an affordable quote!
# This file is copied to spec/ when you run 'rails generate rspec:install'
ENV["RAILS_ENV"] ||= 'test'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'rspec/autorun'
require 'capybara/poltergeist'
# Requires supporting ruby files with custom matchers and macros, etc,
# in spec/support/ and its subdirectories.
Dir[Rails.root.join("spec/support/**/*.rb")].each {|f| require f}
Capybara.javascript_driver = :poltergeist
Capybara.ignore_hidden_elements = true
RSpec.configure do |config|
# ## Mock Framework
#
# If you prefer to use mocha, flexmock or RR, uncomment the appropriate line:
#
# config.mock_with :mocha
# config.mock_with :flexmock
# config.mock_with :rr
# Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
#config.fixture_path = "#{::Rails.root}/spec/fixtures"
# If you're not using ActiveRecord, or you'd prefer not to run each of your
# examples within a transaction, remove the following line or assign false
# instead of true.
#config.use_transactional_fixtures = true
# If true, the base class of anonymous controllers will be inferred
# automatically. This will be the default behavior in future versions of
# rspec-rails.
config.infer_base_class_for_anonymous_controllers = false
# Run specs in random order to surface order dependencies. If you find an
# order dependency and want to debug it, you can fix the order by providing
# the seed, which is printed after each run.
# --seed 1234
config.order = "random"
config.before :each do
setup_api_stubs!
Rails.cache.clear
end
config.after :each do
Timecop.return
teardown_api_stubs!
end
config.include ApiStubbing
config.include SessionSteps, type: :feature
config.include DomElements, type: :feature
end
Pan Docs - also known as GAMEBOY.TXT or GBSPEC.TXT - is an old document dating back to early 1995, originally written by Pan of Anthrox. It has been one of the most important references for Game Boy hackers, emulators and homebrew developers during the last 25 years.
ADDRESS1.PCX, one of the diagrams attached to the first version, released January 28th, 1995
After its release (1995-2008), it received a number of revisions, corrections and updates, maintaining its TXT format. This folder provides a historical archive of those versions.
In 2008, a wikified version (using Martin Korth's 2001 revision as a baseline) has been published. The document was split into different articles and it continued being maintained and updated in that form.
In 2020, after the discussion in this RFC, we migrated the last updated version to plain Markdown and made github.com/gbdev/pandocs the new home of this resource, where it can receive new public discussions and contributions and maintain its legacy and historical relevance, while making use of modern tools and workflows to be visualized and distributed.
From 2020 to May 2021 we used VuePress to render the markdown files as web pages.
Since May 2021, we rely on mdBook.
We are releasing everything (content, sources, code, figures) under the CC0 license (Public Domain).
Güiripa is one of the four civil parishes of the San Casimiro Municipality, in Aragua State, Venezuela. Its capital is Güiripa.
Geography
Demographics
Besides its capital Güiripa, the civil parish has several other localities:
Civil parishes of Aragua State
Spree::Core::Engine.add_routes do
root :to => 'home#index'
resources :products, :only => [:index, :show]
get '/locale/set', :to => 'locale#set'
# non-restful checkout stuff
patch '/checkout/update/:state', :to => 'checkout#update', :as => :update_checkout
get '/checkout/:state', :to => 'checkout#edit', :as => :checkout_state
get '/checkout', :to => 'checkout#edit' , :as => :checkout
populate_redirect = redirect do |params, request|
request.flash[:error] = Spree.t(:populate_get_error)
request.referer || '/cart'
end
get '/orders/populate', :to => populate_redirect
get '/orders/:id/token/:token' => 'orders#show', :as => :token_order
resources :orders, :except => [:new, :create, :destroy] do
post :populate, :on => :collection
end
get '/cart', :to => 'orders#edit', :as => :cart
patch '/cart', :to => 'orders#update', :as => :update_cart
put '/cart/empty', :to => 'orders#empty', :as => :empty_cart
# route globbing for pretty nested taxon and product paths
get '/*id/all-products', :to => 'taxons#show', :as => :nested_taxons
get '/unauthorized', :to => 'home#unauthorized', :as => :unauthorized
get '/content/cvv', :to => 'content#cvv', :as => :cvv
get '/content/*path', :to => 'content#show', :as => :content
get '/cart_link', :to => 'store#cart_link', :as => :cart_link
end
The Life of Constantine the Great (Vida de Constantino) is a panegyric written in Greek in honor of the Roman emperor by Eusebius of Caesarea in the 4th century. It was never completed, owing to the author's death in 339.
The work provides scholars with one of the most comprehensive sources for the religious policies of the emperor's reign; at the same time, in taking up the subject, Eusebius pursued several of his own religious interests, such as apologetics, alongside a semi-biographical account of Constantine. Its reliability as a historical document has been questioned by several historians, notably Timothy Barnes, on account of its questionable motives and writing style.
Summary
Divided into four books, the Life of Constantine begins with the declaration that Constantine is immortal. This opening sets the tone for the rest of the work: a general glorification and deification of the emperor and his deeds on Earth. The work moves on to Constantine's time under the emperor Diocletian. Constantine is contrasted with the tyrannical Diocletian, whose persecution of Christians and oppressive rule accentuate the portrayal of Constantine as a strong Christian and a just man. This section also establishes the work's overarching metaphor, with Eusebius comparing Constantine to Moses. Eusebius suggests that it was God's will to raise Constantine to the imperial throne as a reliever of the torment of Christians in the empire.
After concluding the introduction, Eusebius turns to Constantine's military undertakings for the rest of Book I and the beginning of Book II. The first of these, the campaign against Maxentius, contains the most famous scene in the work, the Vision of Constantine. This section has generated wide controversy, as there is much distrust of the validity of the story. Eusebius claims to have heard the story from the emperor's own mouth, but most modern scholars agree that it is a distortion of the facts or entirely fabricated. The account is often compared with that of Lactantius, who provides a radically different description of the same story. Eusebius then moves on to describe the next military campaign, the war against Licinius. Eusebius contributes to the blackening of Licinius, who was pro-Christian, a campaign begun by Constantine as imperial propaganda to justify his aggression against him.
The work transitions from the military campaigns to the emperor's religious measures. The rest of Book II ends by outlining the religious problems Constantine faced. Book III is largely devoted to Constantine's constructive settlement of those problems. The section includes the only continuous contemporary account of the First Council of Nicaea, as well as the pilgrimage to Bordeaux. Eusebius's account of the Council of Nicaea has, however, been closely scrutinized by scholars for bias, since he was deeply involved in the council's politics. The remainder of the book deals with the emperor's ecclesiastical laws, with Eusebius devoting his attention to displaying Constantine in an extremely Christian light, building holy sites and allegedly destroying pagan temples. Most of Constantine's imperial letters appear in Book III.
Book IV is largely devoted to Constantine's personal life and final achievements, concluding with his death. Much of it illustrates Constantine's piety. His journey to Persia is portrayed in a theme of universal Christian apologetics, along with his laws prohibiting the worship of idols of his own image and the reiterated suppression of idol worship and sacrifices. As the work concludes, Eusebius takes great pains to uncover a personal Constantine, taking time to describe him as a notable public speaker and preacher, as well as a listener. Approaching his death, Eusebius focuses on Constantine's mental and spiritual strength, as well as his physical strength, completing the portrait of an almost divine man. The panegyric ends with the emperor's death, his funeral, and the succession to the throne.
Treatment of Constantine
Eusebius's treatment of Constantine has generated much controversy around the text. His use of the panegyric style results in an extremely generous treatment of the emperor, one noted for its less than objective intentions. Timothy Barnes notes that the author clearly omits accounts and information in order to portray his subject in a favorable light. Eusebius advances the idea of Constantine's divine right, as if he were emperor by God's design and God's imitator on Earth. Eusebius's narrative presents Constantine as one sent to end the persecution of Christians under the Roman Empire and to ensure the correct worship of God. Eusebius's vehicle for this narrative is metaphor, explicitly portraying the emperor in the image of Moses.
Sources
Eusebius's known sources for the composition of the work are eight legal texts, 42 biblical references, and eight literary references. Eusebius frequently refers to his own earlier works, precisely 42 times throughout the panegyric, notably the Ecclesiastical History and the Tricennial Oration (Laus Constantini). The Ecclesiastical History contains many imperial documents and letters of Constantine, some of which reappear in the Life of Constantine. Eusebius frequently cites his own work and the imperial documents; at times, however, he quotes without indicating the source, often to help build his narrative of the emperor as one sent by God.
Reliability
Skeptical scholars argue that the marriage of the panegyric and biographical styles mixes legend with fact, making the text wholly unreliable. Indeed, while many accept the work as generally reliable, some modern scholars argue that the text is not without its question marks, especially concerning Eusebius's motives and biases. Eusebius consistently neglects relevant information. He also engages in the politicization of several matters, notably the campaign against Licinius and the Council of Nicaea. In the first case, the author strives to tarnish Licinius's reputation, casting him as a supporter of the pagans and a breaker of the truce, both historically dubious claims.
Eusebius was a participant in the Council of Nicaea, and his motives in writing about a matter in which he was an active party must be approached with caution. Eusebius also takes great pains to describe himself as very close to the emperor, when in fact the opposite is more likely. Barnes notes that a meeting between Eusebius and Constantine was a rare occurrence, since Eusebius neither resided near the capital nor had the special access to the emperor that he claims in the Life of Constantine. Instead, Barnes argues that before the Council of Nicaea, Eusebius may have seen the emperor once, in a large crowd of people. It was not until 25 years later that Eusebius met him, at the Council of Nicaea. Even after the council, personal contact was sporadic at best, and the exchange of correspondence was infrequent.
Historical relevance
The Life of Constantine remains the most important work for examining Constantine's reign. Only a select number of pagan accounts of the reign exist or have been discovered, with only one known pagan panegyric. While Eusebius has a clear pro-Christian bias, the Life of Constantine also provides insight into various secular matters that have not been found outside the work. Nevertheless, despite its modern significance, it was largely obscure in the 4th and 5th centuries and did not achieve popularity until much later in history.
Bibliography
4th-century books
Works by Eusebius of Caesarea
Constantine the Great
Hairline crack
acrylic glass and hair
Installation dimensions variable depending on the number of sections installed. Each section 30.0 x 100.0 x 3.5 cm:
1-15 - 15 sections, 30 x 100 x 3 cm, each front sheet/tube of perspex
1-15 - 15 sections, 30 x 100 x 0.5 cm, each backing sheet of perspex
Signature & date
Not signed. Not dated.
Purchased with funds provided by the Young Friends of the Art Gallery Society of New South Wales 1993
20th-century galleries (lower level 1)
© Julie Rrap. Courtesy Roslyn Oxley9 Gallery
Julie Rrap, who was based in Europe in the late 1980s and early 1990s, returned to Australia in 1992 to exhibit 'Hairline crack' in the 9th Biennale of Sydney. From a distance, the artwork resembles a black line drawn on the wall, evoking, perhaps, the work of Sol LeWitt, Mel Bochner or other artists associated with minimalism. On closer inspection, however, it is quickly discovered that the line is in fact made from an unruly excess of human hair.
The work might be seen to meditate on the tension between the organic and the synthetic or between order and chaos. The perfectly straight, level line reveals itself to be disrupted by something organic and unpredictable; a part of our bodies associated with beauty that is also cut and discarded.
Shown in 6 exhibitions
Strangers in Paradise, National Museum of Modern and Contemporary Art, Seoul, 05 Nov 1992–04 Dec 1992
Strangers in Paradise, Art Gallery of New South Wales, Sydney, 23 Jul 1993–12 Sep 1993
The boundary rider: 9th Biennale of Sydney, Bond Stores 3/4, Sydney, 15 Dec 1992–14 Mar 1993
Exhibition of 1993 Biennale works, Orange Regional Gallery, Orange, 21 May 1993–20 Jun 1993
Julie Rrap: body double, Museum of Contemporary Art, Australia, 30 Aug 2007–28 Jan 2008
20th-Century galleries Rehang (Lower Level 1), Art Gallery of New South Wales, Sydney, 20 Aug 2022–2023
Art Gallery of New South Wales, Great gifts, great patrons: an exhibition celebrating private patronage of the Gallery , Sydney, 1994. no catalogue numbers
Anthony Bond, Contemporary: Art Gallery of New South Wales Contemporary Collection , 'Imagining the body', pg.246-289, Sydney, 2006, 286, 287 (colour illus.).
Anthony Bond and Victoria Lynn, AGNSW Collections , 'Contemporary Practice - Here, There, Everywhere ...', pg. 229-285, Sydney, 1994, 244 (colour illus.).
Anthony Bond OAM and Yvonne Kennedy, The boundary rider: 9th Biennale of Sydney , Sydney, 1992, 202, 203 (colour illus.).
Victoria Lynn, Art and Australia (Vol. 31, No. 2) , 'Minimalism and its shadows: girding the grid', pg. 234-243, Sydney, Summer 1994, 234-235 (colour illus., detail), 240, 243.
Victoria Lynn, Look , 'Identity and the Body', pg. 12-13, Heidelberg, Sep 1994, 13.
Julie Rrap, Strangers in Paradise - Contemporary Australian Art to Korea , 'Promiscuity and Statistics', pg.66-69, Seoul, 1992, 68 (colour illus.), 95, 101. cat.no. 22
Wayne Tunnicliffe, Look , 'Past/Present/Future: the importance of collecting contemporary work and Contempo's contribution', pg.14-15, Sydney, Apr 2003, 15 (colour illus.). illustration is a detail
Judith White, art lovers: the story of the Art Gallery Society of New South Wales 1953-2013 , 'Chapter 5: Be part of the art 1988-2000', pp. 111-138, Sydney, 2013, 115, 232 (colour illus.).
Other works by Julie Rrap
Persona and shadow: puberty Julie Rrap 1984 189.2011
Non-portraits (Julie Rrap) Julie Rrap 1990-1992 190.2011
Non-portraits (Wim Delvoye) Julie Rrap 1990-1992 191.2011
Myth - a - register (Cut the cord, tie the knot, sever the lifeline) Julie Rrap 1983 260.1983
See all 10 works
Q: Read an Excel sheet uploaded to Azure as a blob I am using the ASP.NET FileUpload control to upload an Excel file with some data. I can't save it in a folder, but I can get the stream of the Excel file, or a BlobStream after uploading the file as a blob. Now I want to convert the first sheet of that Excel file to a DataTable; how shall I do that? I am using C# .NET. I don't want to use the Interop library, but I can use external libraries. The OleDb connection fails because I don't have a physical path to the Excel file to use as a data source. I tried the following links:
1) http://www.codeproject.com/Articles/14639/Fast-Excel-file-reader-with-basic-functionality
2) http://exceldatareader.codeplex.com/
Please help.
A: Depending on the type of Excel file you can use the examples you posted or go for the OpenXML alternative (for xlsx files): http://openexcel.codeplex.com/
Now, the problem with the physical path is easy to solve. Saving the file to blob storage is great. But if you want, you can also save it in a local resource to have it locally. This will allow you to process the file using a simple OleDb connection. Once you're done with the file, you can just delete it from the local resource (it will still be available in the blob storage since you also uploaded it there).
Don't forget to have some kind of clean up mechanism in case your processing fails. You wouldn't want to end up with a disk filled with temporary files (even though it could take a while before this happens).
Read more on local resources here: http://msdn.microsoft.com/en-us/library/windowsazure/ee758708.aspx
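The download-to-a-local-path-then-process pattern the answer describes, including the clean-up step, is language-agnostic. Here is a Python sketch with illustrative names only (these are not Azure APIs):

```python
import os
import tempfile

def process_locally(stream_bytes, process):
    # Persist the in-memory upload to a temporary local file so that a
    # path-based reader (like OleDb in the answer) can open it, then
    # always clean up, even if processing fails.
    fd, path = tempfile.mkstemp(suffix=".xls")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(stream_bytes)
        return process(path)
    finally:
        os.remove(path)

# Example "reader": just report the file size.
size = process_locally(b"fake excel bytes", os.path.getsize)
print(size)  # 16
```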
A: You should use OpenXML SDK which is an officially suggested way of working with MS Office documents - http://www.microsoft.com/download/en/details.aspx?id=5124
A: I first created local storage as per the link:
http://msdn.microsoft.com/en-us/library/windowsazure/ee758708.aspx
suggested by Sandrino above. Thanks, Sandrino. Then I used an OleDb connection and it gave me the error "The Microsoft.Jet.OLEDB.4.0 provider is not registered". Then I logged on to the Azure server and in IIS changed the app pool configuration to 32-bit. To change the app pool to 32-bit, refer to the following link:
http://blog.nkadesign.com/2008/windows-2008-the-microsoftjetoledb40-provider-is-not-registered-on-the-local-machine/
A: The approach you followed is not the correct one. As you said, you logged on to Azure and made the change by hand, but the VM running on Azure is not permanent for you: with any update you are going to get a new VM. You should find a workaround instead of modifying the machine manually. You can make use of startup tasks in your Azure app. See the link below; it may help you.
http://msdn.microsoft.com/en-us/library/gg456327.aspx
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package secondassignmentclient;
import java.io.StringReader;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
/**
*
* @author roberto
*/
public class XPathEvaluator {
public static NodeList getNodes(String source, String query) throws Exception {
InputSource input_source = new InputSource(new StringReader(source));
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
org.w3c.dom.Document document = db.parse(input_source);
XPathFactory xpathFactory = XPathFactory.newInstance();
XPath xpath = xpathFactory.newXPath();
NodeList nl = (NodeList) xpath.evaluate(query, document, XPathConstants.NODESET);
return nl;
}
}
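The following self-contained sketch shows how a helper like the class above can be used (the parsing logic is inlined so the snippet compiles on its own; the sample XML and class name are illustrative):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathExample {
    // Same logic as XPathEvaluator.getNodes, inlined so this file runs alone.
    static NodeList getNodes(String source, String query) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(source)));
        return (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(query, doc, XPathConstants.NODESET);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<library>"
                + "<book><title>Dune</title></book>"
                + "<book><title>Solaris</title></book>"
                + "</library>";
        NodeList titles = getNodes(xml, "//book/title");
        System.out.println(titles.getLength());              // 2
        System.out.println(titles.item(0).getTextContent()); // Dune
    }
}
```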
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,480
|
{"url":"https:\/\/danielhall.me\/blog\/ansible-pattern-downloading-artifacts\/","text":"Often in Ansible I find myself needing to download an artifact. Whether it is a piece of software from GitHub or a build of some internal application that needs to be installed. One thing that is often important in these cases is speed of rollback. If something goes wrong with a new version I want the rollback to be fast, and preferable using preexisting files, and not downloading everything again.\n\nHere is an example of an application that has binaries and needs to be extracted. It assumes that the version is already set.\n\n---\n- name: Calculate Settings\nset_fact:\nrelease_dir: \"{{ base_dir }}\/releases\/{{ app_version }}\"\nbinary_dir: \"\/usr\/local\/bin\"\n\n\nHere we are simply calculating the variables we will use later. This makes each step simpler. I use pattern this often when there are lots of paths to deal with.\n\n- name: Create Directories\nfile:\npath: \"{{ item }}\"\nstate: directory\nwith_items:\n- \"{{ release_dir }}\"\n\n\nCreate the directories for the downloaded artifacts and releases. Ansible will create all the sub directories required on the way to these.\n\n- name: Download the Artifact\nget_url:\ndest: \"{{ artifact_file }}\"\n\n\nHere we do the actual download of the artifact. You might want to add owner, group and mode to here to choose the mode of the downloaded files.\n\n- name: Extract Artifact to Release Directory\nunarchive:\ncopy: no\nsrc: \"{{ artifact_file }}\"\ndest: \"{{ release_dir }}\"\n\n\nExtract the artifact, because we downloaded it on the remote host we set copy to no to prevent Ansible trying to find the file locally and copy it to the managed host. You may want to add other options to the unachive module to set the permissions you want.\n\n- name: Update the Link to the Latest Release\nfile:\nsrc: \"{{ release_dir }}\"\n\n\nHere we atomically change the binaries that are in use. 
If anything before this step fails, we wont run this step, and once this step runs all the binaries change to the new version immediately. When rolling back to an existing version this is the only step that will need to run.\n\n- name: Make the Link to the latest binaries\nfile:\nsrc: \"{{ latest_link }}\/bin\/{{ item }}\"\ndest: \"{{ binary_dir }}\/{{ item }}\"","date":"2018-11-16 16:34:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.21038152277469635, \"perplexity\": 4039.5708650190895}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-47\/segments\/1542039743105.25\/warc\/CC-MAIN-20181116152954-20181116174954-00127.warc.gz\"}"}
| null | null |
{"url":"https:\/\/dsp.stackexchange.com\/questions\/69068\/pole-locations-of-butterworth-filter","text":"# Pole locations of Butterworth filter\n\nI am reading Proakis book \"DSP using Matlab\", 3rd edition.\n\nI am reading chapter 8, section 8.3, p. 402, and I am confused regarding the equation of poles (roots of denominator of system function) eq 8.47.\n\nHow this equation has been derived from eq 8.46 especially highlighted line how the right most side of eq 8.47 $$\u03a9_ce^{j(\\pi\/2N)(2k+N+1)}$$ has been derived from left side $$(-1)^{1\/2N} j\u03a9_c$$.\n\n\u2022 You may check nth roots of unity for inspiration. Jul 13 '20 at 14:22\n\nAssuming that you understand the left-hand side of Eq. $$(8.47)$$, for understanding the right-hand side you need to know that $$-1=e^{j\\pi}$$, and that $$e^{j2k\\pi}=1$$. So in order to obtain all $$2N$$ roots of $$(-1)^{\\frac{1}{2N}}$$ you rewrite $$-1$$ as\n\n$$-1=e^{j\\pi}e^{j2\\pi k}\\tag{1}$$\n\nfrom which you get\n\n$$(-1)^{\\frac{1}{2N}}=e^{j\\frac{\\pi}{2N}}e^{j\\frac{2\\pi k}{2N}}=e^{j\\frac{\\pi}{2N}(1+2k)},\\qquad k=0,1,\\ldots,2N-1\\tag{2}$$\n\nAnd since $$j=e^{j\\pi\/2}$$, the final result is\n\n$$\\Omega_cj(-1)^{\\frac{1}{2N}}=\\Omega_ce^{j\\pi\/2}e^{j\\frac{\\pi}{2N}(1+2k)}=\\Omega_ce^{j\\frac{\\pi}{2N}(1+2k+N)},\\qquad k=0,1,\\ldots,2N-1\\tag{3}$$\n\nIf this is all new to you, you should really read up on complex numbers. They are fundamental in understanding many concepts in signal processing.\n\nPS: There is a typo in the equations in the book. The numerator on the right-hand of Eq. $$(8.46)$$ should have $$\\Omega_c$$ instead of $$\\Omega$$. Also, the left-hand side of Eq. 
$$(8.47)$$ should have $$\\Omega_c$$ instead of $$\\Omega$$.","date":"2021-10-25 11:47:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 18, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9440493583679199, \"perplexity\": 223.99461752750761}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323587659.72\/warc\/CC-MAIN-20211025092203-20211025122203-00137.warc.gz\"}"}
| null | null |
#241-45 – Complete Set of Dollar-Value Columbians On Registered Covers
Own the five dollar-value Columbian Issue singles – each on their own large format registered covers. They were sent on consecutive days from the Stafford National Bank of Dover, New Hampshire, to National Exchange Bank in Boston. The covers include purple registered mail straight-line markings, red registered mail record numbers and original Stafford National Bank red wax seals on the reverse.
The covers have a striking appearance and represent a true rarity both for the genuine use of dollar-value Columbians on cover and as a complete dollar-value matching set!
Famed Columbian Stamps Issued
US #230-45 – Save time and money on complete Columbian collection – available in multiple conditions.
On January 1, 1893, the majority of the Columbian stamps were first placed on sale in large cities. The Columbians are some of America's most famous and sought-after stamps, and are considered the first US commemorative stamps.
The Columbian stamps were produced to promote the World's Columbian Exposition, which was to be held in Chicago, Illinois, from May 1 to October 30, 1893. The exposition was a world's fair celebrating the 400th anniversary of Christopher Columbus's arrival in the New World. The Columbians were the first US stamps ever issued to promote a commercial event and the first American commemoratives.
At the time of the planning for these stamps, the American Bank Note Company held the US postage stamp contract. A special contract had to be negotiated for the Columbians because of their larger size. The contract allowed the printer to charge 17¢ per thousand stamps, significantly more than the 7.45¢ per thousand they charged for the 1890 definitives. The series was originally planned to contain 15 stamps, but a 16th, the 8¢ value, was added in March because of a change in registration fees.
US #230-36 – Get the first seven Columbians for as little as $46.50.
Fifteen stamps were placed on sale on January 1, 1893, in New York City and Boston. Most other post offices across the country were closed that day, so they began their sales on January 2. In March, an 8¢ stamp was issued to meet a new registration fee.
US #241-45 – Complete set of dollar-value Columbians on Registered Covers
Unlike any other stamps before them, the Columbians created a worldwide phenomenon. As popular as they were, the Columbian stamps were also controversial. Collectors eagerly awaited the series, forming long lines to purchase the stamps. Yet many were frustrated by the price of owning the complete series. The total value of the stamps was $16.34, which is comparable to paying about $500 in today's wages. Adding to the high cost is the fact that the nation was experiencing a depression at this time. As a result, few could afford the higher value stamps – the series included the first US postage stamps with face values over 90¢. Some postal clerks refused to sell Columbian stamps because demand far exceeded supply.
US #231c – This "Broken Hat" variety occurred during printing, when a break developed in the printing press's transfer roll. The flaw caused a piece to be missing from the hat of the foreground figure to the left of Columbus.
As a consequence, used Columbian stamps were selling for close to face value in 1893 – even as mint stamps were officially on sale. The craze for Columbian stamps was even more pronounced in Europe, where collectors hounded American tourists and begged for stamps from their mail. A corner of Hamburg's stock exchange was devoted to trafficking Columbian stamps. On August 11, 1893, The New York Times reported these transactions were conducted "as carefully as they handled the highest gilt-edged securities."
US #233a – Rare color error – only about 100 known to exist!
The Columbians were on sale at post offices until April 1894. These stamps would be the final issue printed by a private firm before the Bureau of Engraving and Printing took over stamp production for decades.
The Columbians were America's first commemorative stamps, making them an important part of philatelic history. So important that stamp author Max Johl said that the series' degree of completion is often the "yardstick by which a US collection is measured." The series also included the first US stamps to picture a woman – Queen Isabella, who sponsored Columbus' expeditions. The Columbians are among the most sought-after of all US stamps.
US #2624-29 – Collection of six 1992 souvenir sheets created from the same original dies as the legendary Columbians!
The Columbian Special Delivery Stamp
The US #E3 Special Delivery stamp was not issued for the expo, but is still considered part of the Columbian Series. When the Columbian stamps were issued in January 1893, the 1¢ stamp (#230) was printed in the same blue shade as the 10¢ Special Delivery stamp of the time. To avoid confusion, the Special Delivery stamp was printed in orange using the same design – creating #E3.
US #E3 – The orange Special Delivery stamp that's considered part of the Columbians.
This stamp was printed from January 24, 1893, to January 5, 1894. After that, the stamp was once again produced with blue ink, though stocks of the orange stamp were used up before the reissue of another blue one.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 9,021
|
Dolly Nampijinpa Daniels (1936–2004) was an Australian Aboriginal ritual leader, Warlpiri speaker, renowned artist, and land-rights advocate for the Warlpiri people of the Northern Territory.
Early life
Dolly Nampijinpa Daniels was born in 1936 at Warlukurlangu, north-west of Alice Springs in the Northern Territory. She was born in the Australian bush and maintained a spiritual connection to her homeland throughout her life. The meaning of home for Daniels encompassed both the geographical and social landscapes of her home country.
Daniels lived for many years as a traditional nomad, hunting with her family, before moving with her husband to Mt. Doreen station after the death of her father, and then on to Yuendumu, an area approximately 300 km north-west of Alice Springs. Alongside a number of Warlpiri people, she was forcibly removed from these lands by authorities and trucked to Lajamanu, another government settlement, but eventually made the journey back to Yuendumu, where they settled.
Personal life
Daniels was an active member of the Yuendumu community, a proud Warlpiri speaker, and was recognised as 'boss' for the women's ceremonies in her area. She was once described by Galarrwuy Yunupingu as a person "that doesn't talk much, but had a strong presence and knows who she is and what she's doing". Whilst Daniels was a traditional Warlpiri woman, she also showed an early interest in promoting Warlpiri culture beyond the confines of the settlement. Through marriage and family ties Daniels was able to gain fluency in languages and rituals beyond Yuendumu. She went on to use this extensive knowledge to educate not only the Warlpiri, but numerous non-Aboriginal researchers as well, helping several people in the process.
Art
Daniels began painting in the 1980s with the anthropologist Francoise Dussart. She began by painting ancestral designs on acrylic canvas in a style that has become known as Aboriginal 'dot' painting. Her paintings adhere very strongly to traditional templates, but creativity can be seen in her handling of the paint, the arrangement of the motifs, and the size and placement of the dots. Her work is most distinguishable for its bright colours and intricate patterns. Her works celebrate Australian Aboriginal Dreaming and culture, and she thoroughly invoked the Warlpiri concepts of 'country', 'home' and 'camp' in her work. She painted both her own and her father's Dreamings, stating "it is our story - Aboriginal people's story". She was part of the South Australian Museum's Yuendumu exhibition.
In her mission to promote Warlpiri culture, Daniels helped found and subsequently chair the Warlukurlangu Artists Association and Art Centre, which continues to thrive as one of the longest running and most successful Aboriginal-owned art centres in Central Australia.
Exhibitions
Daniels first exhibited her work in 1985 at the Araluen Arts Centre in Alice Springs. It was from here that she gained international acclaim. Her major exhibitions and collaborations are listed below:
"Warlukurlangu Arts", 1986 onwards, Yuendumu collaboration.
"Yuendumu: Paintings out of the Desert", March 1988, South Australian Museum, Adelaide, South Australia.
"Dreamings: Art of Aboriginal Australia", 1988, New York, Los Angeles, Melbourne and Adelaide.
"L'été Australien" 1990, Musée Fabre, Montepellier, France.
"Frames Of Reference: Aspects of Feminism and Art", 1991.
"Top Heavy" Sutton Gallery, May 1993, Dolly Nampijinpa Daniels/Anne Mosey collaboration.
"Biennale Celebration", 1993, Sydney.
Aratjara Indigenous Art, Kunstsammlung Nordrhein-Westfalen, Düsseldorf, 1993.
"Ngurra" (camp/home/country), 1994, Dolly Nampijinpa Daniels/Anne Mosey Collaboration.
A selection of her work is permanently on exhibition in the National Gallery of Victoria, AM Gallery, Warlukurlangu Arts Centre and the National Museum of African and Oceanic Art, Paris.
Influences
One of the major influences on Daniels' art has been collaborator, Anne Mosey, also a renowned artist. The pair first met in Central Australia in 1989 and exchanged ideas on modes of representing country and the cultural meaning behind such practices. They have collaborated on several internationally recognised projects, whilst still remaining faithful to their own cultures. Daniels' family have also had an influential role in her life, particularly her younger sister Evelyn who often painted with Daniels before her death.
Land rights activism
Daniels was also a proud land rights activist who was particularly interested in the rights of Indigenous Australian people and their lands. She fought strongly for the rights of Aboriginal Australians to have access to their land in order to live a traditional and sustainable way of life. In her work she maintained strong ties to the Central Land Council and was a key participant in the famous land claims of 1976 and 1984, which returned large tracts of the Central Australian desert back to the Warlpiri people.
Daniels and her team lodged a claim in December 2000 regarding the area around New Haven Pastoral Station in the case "Nelson v Northern Territory of Australia FCA 1343". In December 2010 the Federal Court decided that the claim would be accepted in the form of an agreement, as outlined by the team; the area held importance to the Jipalpa-Wintijaru, Pikilyi, Yarripilngu, Karrinyarra and Winparrku landholding groups, to which Daniels had ties.
Publications and community service
Daniels wrote a dreaming narrative, titled "The Magic Fire of Warlukurlangu", which was published by Kingswood Working Title Press in 2003. The book was aimed towards primary school aged children and retells a traditional tale of Daniels' Dreaming area. This book has been used to educate non-Indigenous children of the Dreamings and their significance.
In her service to the community, Daniels was a loyal member of the Yuendumu night patrol, formed because locals were concerned about the way Aboriginal issues were processed through bureaucracies.
Death
Dolly Nampijinpa Daniels died of cancer in November 2004 surrounded by her kin.
References
1936 births
2004 deaths
Australian Aboriginal artists
Australian women artists
Warlpiri people
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,156
|
The 2009 UCI Under-23 Nations' Cup was the third edition of the cycling calendar created by the Union Cycliste Internationale for riders under 23 years of age.
It consisted of six races (one fewer than in the previous edition, after the Giro delle Regioni was cancelled), three of them stage races and three one-day races. The points obtained in these races gave the overall win to France, with Germany and Denmark finishing second and third respectively.
Results
Classification
References
UCI Under-23 Nations' Cup
Cycling in 2009
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 3,379
|
package org.jbpm.demo.rewards.example;
public class Example_7_DeploymentService {
public static void main(String[] args) {
        // Placeholder: a jBPM deployment unit is typically built from the
        // kjar's Maven GAV coordinates, e.g. (illustrative values):
        // DeploymentUnit deploymentUnit =
        //         new KModuleDeploymentUnit("groupId", "artifactId", "version");
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 487
|
\section[Introduction]{Introduction}\label{sectionIntro}
Computation, information, and networks are three concepts that are of major importance in the contemporary world, where the social impacts of pervasive network dynamics in our digital life, big data, and the increasing power of data analysis bridge the gap between complex systems science and the everyday dynamics of our lives.
It has long been common sense
that natural systems can be understood as being organized as networks of many interacting
units \cite{Lewis2009}.
For example, interacting molecules in living cells, nerve cells in the brain, computers in a telecommunication network, and socially interacting people.
It is intuitive to break a system into two constitutive realms: that of individual
components, e.g., the laws an atom is subjected to, how a real-world computer works, or how a human being thinks or behaves;
and that of the nature of the connections (or interactions), e.g., the internet communication protocols, the vehicles' characteristics in a transportation network, or the type of human friendships.
However, there is a third realm, sometimes overlooked, that is also important in order to determine the system's functioning: the realm of patterns of connections \cite{Newman2010}.
In other words, beyond the mere fact that a system is composed of parts and that these parts are working in interaction, the patterns, structures or topological properties of a network may play a significant---if not dominant---role in the dynamics of the entire system.
Indeed,
recent advances in complex network theory indicate that this third notion may be more than just a representation scheme or metaphor \cite{Barabasi2016,Lewis2009}.
Triggered by the availability of large amounts of data on large real-world networks, combined with fast computing power even on scientists' desktops, the field has reached a point of consensus, coming to be called by the umbrella term \emph{network science} \cite{Zarate2019nat,Barabasi2016}, and plays a central role in data science in general.
Applications range from internet communication protocols, epidemics, prevention of computer viruses, and fail-safe computer network engineering to regulatory
circuits of the genome and ecosystems \cite{Barabasi2016}.
Rooted in graph theory, e.g., from Euler's work to Erdős–Rényi (ER) random graphs, the investigation of complex networks highlights the pervasive presence of heterogeneous structural characteristics of real-world networks \cite{Barabasi2016}.
This is the case of the \emph{small-world effect} \cite{Lewis2009}, where the average shortest path distance, or mean geodesic distance, between any pair of vertices grows at most as a logarithm of the network size.
While popularly known from the ``six degrees of separation'' phenomenon in social science, the \emph{small-world network} gained more formal mathematical grounding with the Watts–Strogatz model, in which, in addition to the short mean geodesic distance, the generated networks have, e.g., a high clustering coefficient (i.e., the tendency of a vertex's neighbors to be connected to other neighbors) \cite{Lewis2009}.
Regarding the heterogeneity of vertex degrees, another commonly found characteristic is a fat-tailed (or heavy-tailed) distribution, for example when the vertex degree distribution follows a power-law, as in the Barabási-Albert (BA) model (aka scale-free networks), and not a Poisson distribution like in the traditional ER model \cite{Barabasi2016}.
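As a concrete illustration of how a heavy-tailed degree distribution can arise, here is a minimal preferential-attachment sketch (illustrative only; the full BA model attaches $m \geq 1$ edges per new vertex): each new vertex connects to an existing vertex chosen with probability proportional to its current degree.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal preferential-attachment sketch in the spirit of the BA model:
// each new vertex attaches a single edge, choosing its endpoint with
// probability proportional to the endpoint's current degree.
public class PreferentialAttachment {
    static int[] generate(int n, long seed) {
        Random rng = new Random(seed);
        int[] degree = new int[n];
        List<Integer> stubs = new ArrayList<>(); // vertex listed once per unit of degree
        // start from a single edge between vertices 0 and 1
        degree[0] = 1; degree[1] = 1;
        stubs.add(0); stubs.add(1);
        for (int v = 2; v < n; v++) {
            // sampling a uniform stub = sampling a vertex proportionally to degree
            int target = stubs.get(rng.nextInt(stubs.size()));
            degree[v]++; degree[target]++;
            stubs.add(v); stubs.add(target);
        }
        return degree;
    }

    public static void main(String[] args) {
        int[] deg = generate(10000, 42L);
        int max = 0, sum = 0;
        for (int d : deg) { max = Math.max(max, d); sum += d; }
        // the graph has n-1 edges in total, so the degree sum is 2(n-1)
        System.out.println("sum=" + sum + " max=" + max
                + " avg=" + (double) sum / deg.length);
    }
}
```

Running it shows a maximum degree far above the average of roughly 2, the hallmark of a fat-tailed distribution, whereas an ER graph with the same edge count concentrates degrees near the mean.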
This way, in consonance with the pursuit of a theory for evolutionary, computational, dynamical, and informational aspects in complex systems \cite{Mitchell2009}, the study of general and unifying models for the emergence of complexity and network topological properties keeps attracting the interest of researchers in network science, data science, and complex systems science
\cite{Michail2018}.
In this direction,
information-theoretic approaches have been making fundamental contributions to defining, detecting, or modeling the presence of systemic properties, such as emergence, complexity, and self-organization, in systems with stochastic dynamics \cite{Prokopenko2009}.
Moreover, not only in computable systems, but also as refinements of more traditional statistical approaches, recent advances have been highlighting the algorithmic-informational perspective and showing new fundamental results on: open-endedness and evolutionary systems \cite{Chaitin2012,Hernandez-Orozco2018,Hernandez-Orozco2018a}; network complexity \cite{Zenil2018a,Abrahao2018dextendedarxiv2020}; machine learning and causality \cite{Zenil2019}; cybernetics and control theory \cite{Zenil2019d,Zenil2019c}; and emergence of complexity in networked systems \cite{Abrahao2017publishednat,Abrahao2018publishednat}.
Following this latter approach, we present in this article an investigation of network topological conditions that trigger a phase transition in which algorithmic networks eventually begin to produce an unlimited amount of average emergent algorithmic complexity as the population size grows toward infinity.
These topological conditions can be any property that reflects a strong diffusion power through the network, such as the small-diameter phenomenon \cite{Abrahao2017publishednat} or a classical case of scale-free network \cite{Abrahao2018publishednat}.
Within the context of networked computable systems, we demonstrate the existence of emergence that is proved to be irreducible to its individual parts, universal, and independent of any arbitrarily fixed observer.
\section{A model for networked computable systems}\label{sectionModel}
In this section, we present the general mathematical model for the study of networked machines, which can share information with each other across their respective network while performing their computations.
The model is defined in a general sense in order to allow future variations, additional specificities, and extensions of the model presented, while still supporting a formal mathematical analysis of systemic features, like the emergence of information and complexity, and their related phenomena, in a way that allows theorems to be proved.
It was introduced in \cite{Abrahao2016bnat,Abrahao2017publishednat} and we have studied other variations of this first model with a static scale-free network topology in \cite{Abrahao2018publishednat} and with a modified communication protocol to synergistically solve mathematical problems in \cite{Abrahao2019nat}.
In the present article we will focus on the model in \cite{Abrahao2017publishednat}.
The main idea behind the general model is that a population of formal theoretical machines can use communication channels over the graph's edges.
Thus, the graph topology causes this population to be networked.
Once the elements of the population start to exchange information, it forms an overarching model for a system composed of interacting subsystems.
Following this general approach, one can understand these mathematical models as a merger of algorithmic (and statistical) information theory and complex networks, while theoretically combining fundamental notions from distributed computing, multiagent systems, adaptive complex systems, game theory, and evolutionary biology.
We refer to such models as \emph{algorithmic networks} \cite{Abrahao2016bnat,Abrahao2017publishednat}.
So, algorithmic networks are networks of algorithms in the precise sense where the nodes of the network are computable systems.
For the present purposes, one may consider each node as a program of a universal Turing machine, which justifies calling either the nodes or the elements of the population of an algorithmic network as \emph{nodes/programs}.
Aiming at a wider range of different network configurations, we ground our formalism on multiaspects graphs (MAG) as presented in \cite{Wehmuth2016b}.
In this way, one can mathematically represent extra aspects or dimensions that could appear in complex networks. It has been shown that the MAG abstraction enables one to formally represent and computationally analyze networks with additional representational structures.
Because their nodes belong (or are ascribed) to additional dimensions, e.g., time instants or layers, such networks are called \emph{multidimensional networks} (or high-order networks): for example,
dynamic (i.e., time-varying) networks \cite{Costa2015a,Abrahao2018dextendedarxiv2020}, multilayer networks \cite{Kivela2014}, and dynamic multilayer networks \cite{Wehmuth2018}.
Moreover, the MAG abstraction facilitates network analysis by showing that its aspects can be isomorphically mapped into classical graphs \cite{Wehmuth2016b}.
In a broad sense, one can think of an algorithmic network as a theoretical multidimensional network-distributed computing model in which each node (or vertex) computes using the shared information through the network.
The computation of each node may be seen in a combined point of view or taken as individuals. Respectively, nodes/programs may be computing using network's shared information to solve a common purpose \cite{Abrahao2019nat}---as the classical approach in distributed computing---or, for example, nodes may be ``competing'' with each other---as in a game-theoretical perspective, which we employ in this article (see Section~\ref{sectionBBIG}).
For the present purposes, we are interested in the average fitness (or payoff), and its related emergent complexity that may arise from a process that increases the average fitness.
\begin{definition}\label{BdefAN}
We define an \emph{algorithmic network} $ \mathfrak{N} = (\mathscr{G}, \mathfrak{P}, b)$ upon a population of theoretical machines $\mathfrak{P}$, a multiaspect graph $\mathscr{G}=(\mathscr{A},\mathscr{E})$ and a function $b$ that causes aspects of $\mathscr{G}$ to be mapped into properties of $\mathfrak{ P }$, so that a vertex in $\mathrm{V}(\mathscr{G}) $ corresponds one-to-one to a theoretical machine in $\mathfrak{ P }$ and the communication channels through which nodes can send or receive information from its neighbors are defined precisely by (composite) edges in
$ \mathscr{G} $.
\end{definition}
The MAG $\mathscr{G}$, as previously defined in \cite{Wehmuth2016b}, is directly analogous to a graph, but replacing each vertex by a $n$-tuple, which is called the composite vertex.
Note that a graph is a particular case of a MAG that has only one aspect (i.e., only one node dimension).
A \emph{population} $\mathfrak{P}$ is a sequence (or multiset) with elements taken from $L$ in which repetitions are allowed, where $L$ is the language on which the chosen theoretical machines are running.
A \emph{communication channel} between a pair of elements from $\mathfrak{P}$ is defined in $\mathscr{E}$ by a composite edge (whether directed or not) linking this pair of nodes/programs.
A directed composite edge (or arrow) determines which node/program sends an output to
another node/program, which in turn takes this information as input. An undirected composite edge
(or line) may be interpreted as two opposing arrows.
We say an element $ o_i \in \mathfrak{P} $ is \emph{networked} \textit{iff} there is $
\mathfrak{N} $ such that $o_i$ is running as a node of $ \mathfrak{N}
$, where $ \mathscr{E} $ is non-empty.
That is, there must be at least one composite edge connecting two elements of the algorithmic network.
We say $o_i$ is \emph{isolated} otherwise.
We say that an input $ w \in L $ is a \emph{network input} \textit{iff} it is the only external source of information every node/program receives and it is given to every node/program before the algorithmic network begins any computation.
A \emph{node cycle} in a population $ \mathfrak{ P } $ is defined as a node/program returning an output, which, in the particular studied model \cite{Abrahao2017publishednat} described in Section~\ref{sectionBBIG}, is equivalent to a node completing a halting computation.
If this node cycle is \emph{not} the last node cycle, then its respective output is
called a \emph{partial output}, and this partial output is shared with the node's neighbors
(or not, depending on whether the population is networked or isolated), according
to a specific information-sharing protocol (if any).
On the other hand, if the node cycle is the last one, then its output is called a \emph{final output}.
Our formalism enables one to represent a wide range of variations of algorithmic networks with the purpose of modeling a particular problem that may arise from a networked complex system.
For example, the networked population may be synchronous or asynchronous, follow a set of information-sharing strategies or none, be randomly generated or fixed, operate with communication costs or without them, etc.
In addition, the network topology that determines the communication channels may be dynamical, with weighted edges, multilayer etc.
In particular, all models considered hereafter, as described in Section~\ref{sectionBBIG}, are synchronous (i.e., there are communication rounds that every node must respect at the same time), have a fixed information-sharing strategy (i.e., a communication protocol), have a randomly generated population of programs, and no communication cost is considered.
\section{Local fitness optimization in the Busy Beaver imitation game}\label{sectionBBIG}
Now, we explain a particular case of algorithmic network defined by a very simple local rule (i.e., a rule each node follows with respect to its immediate neighbors) that optimizes the fitness value of each node individually.
Then, later on in this article, we will discuss the impacts on the global behavior of the algorithmic network that this simple rule of communication produces.
The main idea of the model in \cite{Abrahao2017publishednat} is as follows:
take a randomly generated set of programs;
they are linked, constituting a dynamic network which is represented by a time-varying graph (or a multiaspect graph with two aspects);
each node/program is trying to return the ``best solution'' it can;
and eventually one of these nodes/programs ends up being generated carrying beforehand a ``best solution'' for the problem in question;
this ``best solution'' is spread through the network by a diffusion process in which each node is limited to imitating its fittest neighbor if, and only if, that neighbor's shared information is ``better'' than what the node itself can produce (see the imitation-of-the-fittest protocol below).
Indeed, a possible interpretation of the diffusion described above is \emph{average optimization through diffusion} in a random sampling.
Whereas optimization through selection in a random sampling may refer, e.g., to evolutionary computation or genetic algorithms, optimization in our model is obtained in a manner such that a best solution also eventually appears, but is then diffused over time so as to bring every individual, on average, as close to the best solution as possible.
Therefore, the underlying goal of this process would be to optimize the average fitness of the population by expending the least amount of diffusion time (or communication rounds).
As in \cite{Chaitin2012,Abrahao2015}, we use the \emph{Busy Beaver function} $ BB(N) $ as our complexity measure of \emph{fitness}.
A function $ BB(N) $, where $ BB : \, \mathbb{N} \to \, \mathbb{N} $, returns the largest integer that a program $ p \in \mathbf{L_U} $ with length $ \leq N $ can output.
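A sketch of this definition in symbols, assuming $\mathbf{U}$ denotes the chosen universal machine and $|p|$ the length of program $p$ (this notation is assumed here for illustration, not taken from the cited works):
\begin{equation*}
BB(N) \;=\; \max \left\{ \, \mathbf{U}(p) \;:\; p \in \mathbf{L_U} ,\; |p| \leq N ,\; \mathbf{U}(p) \text{ halts} \, \right\} .
\end{equation*}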
Naming larger integers relates directly to increasing algorithmic complexity \cite{Chaitin2012}.
Thus, the ``best solution'' assumes a formal interpretation of fittest final output (or payoff).
The choice of the word ``solution'' for naming larger integers now strictly means a solution for the Busy Beaver problem.
Also note that several uncomputable problems are equivalently reduced to the Busy Beaver one, including the halting problem.
In addition, the Busy Beaver function offers other immediate advantages in measuring the complexity of the fitness value.
For example, it grows faster than any computable function, while being scalable (i.e., every fitness value alone can be eventually reached by some individual computable system);
using integers as fitness values is universal with respect to Turing machines, while the values themselves are totally dependent on the nodes' initial conditions or context;
the value of $ BB(N) $ is incompressible, i.e., an arbitrary universal Turing machine needs at least, except for a constant, $N$ bits of information to calculate the value of $ BB(N) $.
This way, with a fixed fitness function that works as a universal parameter for every node/program's final (and partial) output, it makes sense to have an interpretation of these running algorithmic networks in \cite{Abrahao2017publishednat} as playing a \emph{networked Busy Beaver game}:
during the node cycles, each node is trying to use the information shared by its neighbors to return the largest integer it can.
The larger the final output integer, the better the payoff (or fitness).
We employ the term \emph{protocol} as an abstraction of its usage in distributed computing and telecommunications.
A protocol is understood as a set of rules or algorithmic procedures that nodes must follow at the end of each node cycle when communicating.
It can be seen as the strategy or ``rules'' for communications under a game-theoretical perspective, and
within this context an algorithmic network can be interpreted as playing a game in which each
node is trying to return the ``best solution'', or the best fitness value, it can.
In our studied model, we want to investigate one of the
simplest, computationally cheapest, or ``worst'' ways that networked nodes can take advantage of their neighbors' information sharing, and compare it with the best that isolated nodes can do alone.
Hence, we oblige the networked nodes to follow the \emph{imitation-of-the-fittest protocol} (IFP), a decidable procedure in which a networked node compares its neighbors' partial outputs and propagates the program of the neighbor that has output the largest integer.
It does so if, and only if, this integer is larger than the one the node itself has output in the first place.
This way, the networked population is in fact limited to simple imitation:
it is a game with a single strategy, i.e., the IFP, if the node is networked;
and a single strategy, i.e., doing the best (without any specified protocol or strategy) the node can alone, if the population is not networked.
Therefore, we say such an algorithmic network is playing a \emph{Busy Beaver imitation game} (BBIG) \cite{Abrahao2017publishednat}.
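The diffusion dynamics of the BBIG can be illustrated with a minimal sketch in Python. This is not the model's formalism: toy integer values stand in for the actual Busy Beaver partial outputs (which are uncomputable), imitating a neighbor's value stands in for propagating its program, and the graph encoding and function names are illustrative assumptions.

```python
def ifp_round(fitness, neighbors):
    """One synchronous round of the imitation-of-the-fittest protocol:
    each node adopts its best neighbor's value iff it beats its own.
    All comparisons read the previous round's values (synchronous update)."""
    new = dict(fitness)
    for node, nbrs in neighbors.items():
        if not nbrs:  # isolated node: nothing to imitate
            continue
        best = max(nbrs, key=lambda n: fitness[n])
        if fitness[best] > fitness[node]:
            new[node] = fitness[best]
    return new

def simulate(neighbors, fitness, rounds):
    """Run the given number of communication rounds and return final values."""
    for _ in range(rounds):
        fitness = ifp_round(fitness, neighbors)
    return fitness

# Toy example: a 5-node ring; integers stand in for partial outputs.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
start = {0: 3, 1: 1, 2: 9, 3: 2, 4: 5}
after = simulate(ring, start, rounds=2)
```

Since the ring has diameter $2$, two rounds suffice for the largest value to cover the whole network, illustrating how the number of rounds needed for full diffusion is tied to the network's diameter.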
\section{Expected emergent open-endedness from universal complexity measures}\label{sectionEOE}
Now, the question is:
how much more algorithmic complexity can this diffusion process generate on the average compared with the best nodes/programs could do if isolated?
Toward an answer to this question, a comparison between the algorithmic complexity of what a node/program can do when networked and the algorithmic complexity of the best a node/program can do when isolated gives the \emph{emergent algorithmic complexity} of the algorithmic network.
Instead of asking about how much complexity is gained by systems over time, as in evolutionary biology and artificial life, we are focusing on another akin question:
how much complexity is gained by systems when the number of parts increases?
Or, more specifically in our case, how much more emergent algorithmic complexity arises on the average when the number of nodes increases?
Once we are restricted to only dealing with networked computable systems, the functioning of these systems occurs in a totally deterministic way.
And more than that, they are computable, i.e., for any one of them there is a Turing machine that, given the environmental conditions or context as input, can always completely determine their next behavior or state from a previous behavior or state.
In this way, algorithmic information theory (AIT) sets foundational results from which one directly obtains an irreducible information content measure \cite{Chaitin2004} of a mathematical object being generated by a computable process;
this object may be e.g. the output of a machine or the future state of a computable system.
More precisely, the quantification of irreducible information content can be stated in bits and is given by the (unconditional) \emph{algorithmic complexity} of an object $x$, i.e., the length of the shortest program that outputs $x$ when this program is running on an arbitrarily chosen universal Turing machine.
In addition, algorithmic complexity is a quantity that is invariant---therefore, irreducible or incompressible---for any other computable process that can generate $x$, except for an additive constant: that is, the two quantities of complexity can only differ by an additive constant for any $x$ and this constant only depends on the choice of the machine and the computable process, so that the constant does not depend on $x$.
On the other hand, algorithmic complexity is an optimal information content measure.
In other words, one can also show that there is a universally maximal recursively enumerable probability semimeasure $\mu$ for the space of all encoded objects such that the time-asymptotic approximation to the probability $\mu(x)$ of occurrence of $x$ is always larger than (except for a multiplicative constant) any other time-asymptotic approximation to the probability $\mu'(x)$ of occurrence of $x$.
More formally, for any recursively enumerable probability semimeasure $\mu'$ for the space of all encoded objects, there is a multiplicative constant $c$, which does not depend on the object $x$, such that, for every $x$, one has that
$ c \, \mu(x) \geq \mu'(x) $ holds.
And this result holds even if one has zero knowledge about the actual probability of occurrence of $x$.
Indeed, one can already note that such zero-knowledge characteristic differs from traditional statistical inference methods, where it is in general assumed that the stochastic random source is, at least, stationary and ergodic.
As one of the main and most profound results in AIT, the \emph{algorithmic coding theorem} shows that the probability of $x$ being generated by any possible randomly generated (prefix) Turing machine, the above universally maximal probability semimeasure $\mu(x)$, and the probability of occurrence of the shortest program that generates $x$ are in fact three equivalent values, except for a multiplicative constant that does not depend on $x$.
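In symbols, this equivalence is often stated as follows (a standard AIT formulation, with $A(x)$ denoting the prefix algorithmic complexity of $x$; the notation is assumed here for illustration):
\begin{equation*}
- \log_2 \mu(x) \;=\; A(x) + \mathbf{O}(1) .
\end{equation*}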
Thus, at least for the realm of deterministic computable processes, algorithmic complexity is a measure of information content that is irreducible/incompressible and universal, in the sense that it is invariant with respect to the choice of the generating process, and any computable process of measuring the irreducible information content of $x$ equivalently agrees (up to an object-independent constant) about the value.
It is a mathematically proven ``bias toward simplicity'' for the space of all generative computable processes.
Not only for the unconditional form of algorithmic complexity, the same phenomenon also holds for the conditional algorithmic complexity, i.e., the length of the shortest program that generates $y$ given the input $x$.
This way, algorithmic complexity appears as an auspicious mathematical form of information content measure, specially for those computable systems whose behavior is dependent on the information received from the environment:
the algorithmic complexity of $y$ given $x$ is a value that is, at the same time, totally dependent on the input (i.e., the initial conditions or previous context), irreducible, and universal.
Therefore, as desirable, quantifying an emergence of complexity in computable systems from a direct comparison between the algorithmic complexity of the networked/interacting case (i.e., $y$) and the isolated case (i.e., $x$) gives a value that is irreducible and universal, although it might vary if the system's environment in which this comparison took place changes.
We follow a consensual abstract notion of \emph{emergence}
\cite{DOttaviano2004,Prokopenko2009}
as a systemic feature or property that appears only if the system is analyzed (theoretically or empirically) as a ``whole''.
Thus, the algorithmic complexity (i.e., an irreducible number of bits of information) of a node/program's final output when networked\footnote{ That is, interacting with other parts of the system.} minus the algorithmic complexity of a node/program's final output when isolated formally defines an irreducible quantity of information that \emph{emerges} with respect to a node/program that belongs to an algorithmic network.
We call it the \emph{emergent algorithmic complexity} (EAC) of a node/program \cite{Abrahao2017publishednat}.
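In symbols, a minimal sketch of this quantity for a node/program $o_i$, writing $A(\cdot)$ for the algorithmic complexity of an output (the notation $A$ is assumed here for illustration):
\begin{equation*}
\mathrm{EAC}(o_i) \;=\; A\big( \text{final output of } o_i \text{ when networked} \big) \;-\; A\big( \text{final output of } o_i \text{ when isolated} \big) .
\end{equation*}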
Consequently, note that if a system is analyzed as a separated\footnote{ The subparts do not need to be necessarily apart from each other, but each part in this case would be taken as an object of investigation where no information enters or exits anyway.} collection of ``subparts'', the EAC of a node/program will always be $0$.
Note that this quantity of bits may be $0$ or negative in some cases.
Therefore, this measure of emergent algorithmic complexity may also be suitable for measuring the cases where algorithmic complexity was ``lost'' when the system is networked.
We leave the study of such degenerate cases as an important future research and, for the present purposes, we are only interested in the situations in which EAC is positive.
A distinction is crucial: the EAC of a node/program must not be confused with the EAC of the \emph{entire} algorithmic network. Measuring the emergent algorithmic complexity of the algorithmic network taking into account every node/program ``at the same time'' is---as our intuition demands---mathematically different from looking at each individual final output's algorithmic complexity. For example, one may consider the algorithmic information of each node/program combined (in a non-trivial way) with the algorithmic information of the network's topology.
This relies upon the same distinction between the joint algorithmic complexity of $x$ and $y$ and the algorithmic complexity of each one taken separately.
The sum may not always match the joint case \cite{Chaitin2004}.
Within the framework of algorithmic networks, this ``whole'' emergent algorithmic complexity compared with each individual node can be formally captured by the \emph{joint} algorithmic complexity of each node/program's final output when networked minus the \emph{joint} algorithmic complexity of each node/program's final output when isolated.
That is, the algorithmic complexity of the networked population's output as a whole minus the algorithmic complexity of the isolated population's output as a whole.
An initial step in the direction of tackling this problem is already mentioned in \cite{Abrahao2019nat}.
Analyzing this systemic property is not part of the scope of the present article and will be necessary future research, not only in the context of networked computable systems; it is also an open problem for multivariate stochastic processes \cite{Lizier2018}.
Thus, instead of investigating the \emph{joint} or \emph{global} EAC of an algorithmic network, one may look for a mean value of EAC over all nodes/programs.
That is, we are focusing on the \emph{local} EAC.
The \emph{average} (local) \emph{emergent algorithmic complexity} of a node/program (AEAC) is defined by the mean on all nodes/programs' (and possible network's topologies) EAC.
It gives the average emergent complexity of the nodes/programs' respective fitnesses (or, in a game-theoretical interpretation, payoffs) in a networked population, once there is a fitness function that evaluates final outputs.
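A sketch of this average in symbols, assuming for simplicity a fixed topology so that the mean is taken only over the population (in general, the mean also ranges over the possible network topologies):
\begin{equation*}
\mathrm{AEAC}(\mathfrak{N}) \;=\; \frac{1}{\left| \mathfrak{P} \right|} \sum_{o_i \in \mathfrak{P}} \mathrm{EAC}(o_i) .
\end{equation*}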
Larger positive values of AEAC mean that a node/program needs more irreducible information on the average than it already contains, should it try to compute isolated what it does networked.
A system with a larger AEAC ``informs'' or ``adds'' more information to its parts on the average.
As the model described in Section~\ref{sectionBBIG} is an algorithmic network in which the population of machines is randomly generated from a stochastic process of independent and identically distributed (i.i.d) random variables under a self-delimiting program-size probability distribution, we can refer to the average EAC as \emph{expected} emergent algorithmic complexity (EEAC).
Therefore, both terms, \emph{average or expected}, can be used interchangeably hereafter.
Note here that, whereas the initial network input is completely arbitrary and the algorithmic network itself in the model described in Section~\ref{sectionBBIG} is a deterministic and computable distributed system (once the population of nodes/programs is given), the initial generation of the nodes/programs of each algorithmic network is given by a stochastic i.i.d. process.
Thus, each of these algorithmic networks is a deterministic (computable) process, while the infinite process that results from increasing the size of the algorithmic networks and running them is a mixed process (i.e., partially deterministic and partially stochastic).
Another important concept that came from complex systems science, specially from artificial life and evolutionary computation, is \emph{open-endedness}.
It is commonly defined in evolutionary computation and evolutionary biology as the inherent potential of an evolutionary process to trigger an endless increase of distinct systemic behavior capabilities
\cite{Adams2017,Hernandez-Orozco2018}.
Thus, if an infinite space of distinct computable capabilities is eventually covered, this will necessarily lead to an unbounded increase of algorithmic complexity \cite{Hernandez-Orozco2018}.
This means that, in the long run, an organism eventually appears that is as complex as one may want.
Given a certain complexity value as target, one would just need to wait long enough for an organism to appear with complexity larger than (or equal to) the target value---no matter how large this value is.
In turn, this implies that an infinite number of different organisms tends to appear in the evolutionary path after an infinite amount of successive mutations, bringing us equivalently back to the initial definition of open-endedness.
In fact, within the framework of \emph{metabiology}, as shown in \cite{Chaitin2012,Abrahao2015,Chaitin2018}, there is a cumulative\footnote{ Which allows organisms to recall their predecessors. } evolution model that reaches $N$ bits of algorithmic complexity after---realistically fast---$ \mathbf{ O }( N^2 ( \log(N) )^2 ) $ successive algorithmic mutations on one organism at a time---whether the organisms are computable \cite{Chaitin2012}, sub-computable \cite{Abrahao2015,Abrahao2016} or hyper-computable \cite{Abrahao2015}.
Metabiology is a transdisciplinary field based on evolutionary biology and algorithmic information theory that proposes a metatheoretical approach to the open-ended evolution of computable systems \cite{Chaitin2018,Chaitin2014nat}.
Moreover, it is shown in \cite{Hernandez-Orozco2018} and experimentally supported in \cite{Hernandez-Orozco2018a} that the model introduced in \cite{Chaitin2012} satisfies the requirements for \emph{strong} open-ended evolution.
Thus, if one is restricted to the case of evolutionary computation in general\footnote{ That is, taking into account not only those with bounded computational resources, but also those with unbounded computational resources (like Turing machines).} computable systems, open-endedness is then strictly related to an endless increase of algorithmic complexity or irreducible information.
And, since we are studying networked computable systems, we follow this algorithmic and universal approach to open-endedness, in which undecidability and irreducibility play a central role \cite{Hernandez-Orozco2018}.
What we have found is that, within the theory of algorithmic networks, open-endedness also appears in a similar fashion.
However, it emerges as an akin, but formally distinct, phenomenon to open-ended evolution (OEE): instead of achieving an unbounded quantity of algorithmic complexity over time (or successive mutations), an unbounded quantity of emergent algorithmic complexity is achieved as the population size increases indefinitely.
Since it is a property that emerges depending on the number of parts of a system only when these parts are somehow interacting (e.g., exchanging information), and this new quantity of algorithmic complexity/information is irreducible/incompressible with respect to the programs that govern the functioning of the respective isolated parts, this unbounded increase of EAC arises, by definition, as an emergent property.
So, we refer to it as \emph{emergent open-endedness} (EOE) \cite{Abrahao2017publishednat}.
As discussed before, since we are dealing only with the local EAC and not with the global (or joint) EAC, then a more accurate term would be \emph{local emergent open-endedness}.
For the sake of simplifying our nomenclature, we choose to omit the term ``local'' in this article.
Furthermore, in the case of an increase in the average EAC for every node/program, we refer to it as \emph{average (local) emergent open-endedness} (AEOE).
And, since the population is randomly generated, we refer to AEOE as \emph{expected (local) emergent open-endedness} (EEOE).
We showed in \cite{Abrahao2017publishednat}
that there are network topological conditions and simple communication protocols that trigger EEOE as the randomly generated populations grow toward infinity.
In particular, a model of algorithmic networks for which we proved that it occurs is the one described in Section~\ref{sectionBBIG}; and the network topological conditions can be a strong diffusion power, so that larger fractions of the network are quickly covered by any signal spread by any node, or the presence of the small-diameter phenomenon, which guarantees that the entire network is covered under a small amount of hops, steps, or (in the case of synchronous algorithmic networks) communication rounds.
As shown in \cite{Abrahao2017publishednat},
these conditions cause the EEAC to increase as much as one may want, provided the population size increases sufficiently.
And this occurs even if, for an arbitrarily large (but finite) population size, the EEAC is $0$ or negative.
The networked ``side of the equation'' of the EAC relies only on the simple imitation of the fittest neighbor, while the ``isolated side'' is free of any strategy or protocol, so that each node can perform/compute without any restriction.
Thus, we are estimating the emergent algorithmic complexity that arises from a ``worst'' networked case compared with the ``best'' that isolated nodes can do alone.
So, if in this worst-case scenario the EAC has increasingly positive integer values, then the EEAC (which is an average-case scenario lower bounded by the worst case) will behave the same way.
More precisely, the expected emergent open-endedness phenomenon tells us that, for large enough population sizes, the probability that these algorithmic networks have a larger AEAC tends to $1$.
The main idea behind the proof is that, given that such conditions are satisfied, there will be a trade-off between the number of communication rounds and the average density of networked nodes with the maximum fitness, so that there is an optimum balance between these two quantities in which, if a large enough average density of these nodes is achieved in a sufficiently small number of communication rounds, then EEOE is triggered.
\section{Emergence of unpredictable and irreducible data: \\ discussion, open problems, and future work}
EEOE is in fact a phenomenon that reflects a phase transition of complexity, in particular, an emergence of algorithmic complexity, with deep implications to the investigation of networked complex systems or any distributed processing of data:
for example, either for designing or engineering artificial computer networks; or analyzing real-world networks of complex system in which each node represents a system that is capable (allegedly) of performing some kind of computation, e.g., biological organisms or humans.
Note that our results show the existence of a phase transition in which, for a critical stage (in the case, a large enough population), the network will change its networked behavior (in comparison to the isolated one) so drastically that it will be impossible for any of the nodes/programs to compute (or computably predict) its own networked behavior.
This is the reason we call this transition an \emph{expected emergent complexity phase transition}: an algorithmic complexity phase transition that is guaranteed to occur in the asymptotic limit, giving rise to the emergence of irreducible information solely by the fact that the population of nodes/programs is networked.
In the case fitness (or payoff) is somehow connected to the complexity of the player's strategy, algorithmic networks theory is a theoretical model for future investigation of game-theoretical consequences of randomly generated arbitrary computable strategies for players without interaction in comparison to networked players' strategies.
Now, take for example real-world networks, such as ecosystems or human societies, where each element is an information processing system \cite{Mitchell2009,Prokopenko2009,Michail2018} that can send and receive information from each other.
Remember that the studied communication protocol in Section~\ref{sectionBBIG} is in fact one of the ``worst'' local rules of individual behavior that is capable of increasing the fitness with respect to its neighbors.
Then, assume for a moment that those real-world networks are composed of nodes/systems with a high enough computational power---indeed, a plausible supposition at least for nodes representing human beings---, so that they eventually begin to perform better than their neighbors in terms of an arbitrarily chosen fitness measure (which may assume unbounded, but reachable, values).
In addition, also assume the entire network is embedded into an ``environment'' that is capable of always ascribing fitness values to nodes.
Thus, now we know there are some network topological conditions, e.g., a strong diffusion power or the small diameter, that eventually enable some algorithmic networks to reach a phase transition point in which EEOE is triggered.
From a computational analysis perspective, EEOE immediately implies that, although graph-topological, structural, or connection-pattern modeling (or predictions) could be made by computational methods for network analysis and machine learning, modeling or predictions by artificial intelligence would be eventually unattainable or intractable with respect to the information content processed by the nodes.
This may be a desirable property for computer networks design, if one is aiming at the networked information processing being relatively uncomputable, or encrypted, to isolated nodes.
Moreover, if one is trying to take advantage of the network-distributed computation with the purpose of computing problems at a higher computational class, the EEOE phenomenon could be harvested from synergistic variations of the communication protocols.
This mathematical phenomenon may be also fruitful for explaining
synergistic behavior found in Nature and societies and
why some network topological properties seem to be favored in biological networks.
Indeed, algorithmic synergy was already shown to exist in networked resource-unbounded computable systems with a slight modification into the IFP \cite{Abrahao2019nat}.
Future research in this direction will be interesting for developing resource-bounded versions and, therefore, more realistic network-distributed computing models and architectures.
On the other hand, EEOE may be a property one is avoiding in order to keep the computer network processing power under control or below a certain degree of complexity.
Such an emergent phenomenon would impose a necessary limit for data analysis in those networks displaying EEOE, if the computational power of the observer is at the same level of the nodes---therefore, also including the case where the observer is one of the nodes.
For any arbitrarily chosen formal theory, or computer program, that an external observer chooses as framework, there will be a critical stage in which the network displays EEOE and any attempt to predict the networked behavior of the nodes will start to be relatively uncomputable (i.e., belonging to a higher level at a computational hierarchy).
In particular, as one can directly obtain from algorithmic information theory (AIT) \cite{Chaitin2004}, the networked behavior will be unpredictable in precise terms of an increasing quantity of bits that are incompressible by any recursive/computable procedure based on the chosen framework, if the observer only a priori knows the behavior of the nodes when isolated.
In other words, the emergent behavior is eventually\footnote{ As the network size increases.} non-deducible---even in principle---from the parts, for any external observer as described above.
Thus, we say EEOE is an \emph{asymptotic observer-independent emergent phenomenon}.
If the observer is part of a network that is displaying EEOE, such unpredictability may actually be magnified, since the observer in this case would only know its own behavior (or maybe also its immediate neighbors') when isolated.
More than new emergent irreducible information from other individuals in the network appearing to the node/observer, the networked behavior of the very node/observer would appear to itself as emergent with respect to the isolated case (or with respect to a previous initial stage where the respective network computing didn't start yet).
Within the abstract realm of algorithmic networks, future research on this \emph{reflexive emergence} of complexity (i.e., an emergence of complexity that arises from the comparison of the interacting behavior of an agent with the isolated behavior of the same agent) may be fruitful for investigating the presence of a process of algorithmic-informational autonomy \cite{Villalobos2018} as being emergent from the networking interaction with the environment (i.e., the rest of the algorithmic network of which the node/system is part).
In both cases, i.e., either as a desirable or an undesirable emergent property, the investigation of network topological properties and local rules of interactions that are capable of triggering EEOE, such as in \cite{Abrahao2017publishednat,Abrahao2018publishednat,Abrahao2019nat}, seems to be a fruitful line of research in the intersection of complex systems science, theoretical computer science, complex networks theory, and information theory.
\footnotesize
\bibliographystyle{abntex2-num}
Mark W. Stiles Unit is a Texas Department of Criminal Justice men's prison located in an unincorporated area of Jefferson County, Texas, near Beaumont. The unit, located along Farm to Market Road 3514, is southeast of downtown Beaumont. The unit is co-located with the Gist Unit and the LeBlanc Unit.
The unit opened in June 1993. The unit serves as the University of Texas Medical Branch hub site for treatment of HIV and other infectious diseases. As a result, the Stiles facility houses many HIV positive prisoners. A hospice for prisoners with HIV opened at Stiles in 1997.
The unit has offered Buddhist meditation classes since 2003.
In 2011 the metal products plant closed; its operations were consolidated to the plants at the Coffield Unit and the Powledge Unit.
Notable prisoners
Elmer Wayne Henley
John Curtis Dewberry
References
External links
"Stiles Unit." Texas Department of Criminal Justice
Rodriguez, Brenda. "For ill prisoners, time at Beaumont's 'death camp' can be for life." San Antonio Express-News. September 14, 1997. 14A.
Prisons in Jefferson County, Texas
Buildings and structures in Jefferson County, Texas
1993 establishments in Texas
Interesting story here in Wired about how Overstock.com used Mahout to build and deploy a recommendation engine to replace RichRelevance, thereby saving $2 million in annual fees.
Overstock joins an elite list of companies that are monetizing Mahout, including Adobe, Amazon, AOL, Buzzlogic, Foursquare, Twitter, and Yahoo.
Pilar Castro Parrilla (Madrid, 12 October 1970) is a Spanish actress.
She studied at the Escuela de Arte Dramático de Cristina Rota and is popular thanks to her television work.
Filmography
Ovejas negras (1990), by José María Carreño.
Historias del Kronen (1995), by Montxo Armendáriz.
Taxi (1996), by Carlos Saura.
El ángel de la guarda (1996), by Santiago Matallana.
El conductor (1998), by Jorge Carrasco.
Casting (1998), by Fernando Merinero.
Cuarteto de La Habana (1999), by Fernando Colomo.
La mujer más fea del mundo (1999), by Miguel Bardem.
Segunda piel (1999), by Gerardo Vera.
Descongélate! (2003), by Dunia Ayaso and Félix Sabroso.
Días de fútbol (2003), by David Serrano.
La suerte dormida (2003), by Ángeles González Sinde.
Muertos comunes (2004), by Norberto Ramos del Val.
El asombroso mundo de Borjamari y Pocholo (2004), by Juan Cavestany and Enrique López Lavigne.
Los 2 lados de la cama (2005), by Emilio Martínez Lázaro.
Volver (2006), by Pedro Almodóvar.
Los aires difíciles (2006), by Gerardo Herrero.
Días de cine (2007), by David Serrano.
Gente de mala calidad (2007), by Juan Cavestany.
Siete minutos (2009)
Gordos (2009)
Short films
Pulp Ration (Ración de pulpo) (1996), by José María Benítez.
Making of 'Atraco' (1997), by Carlos Molinero.
Cien maneras de hacer el pollo al txilindrón (1997), by Kepa Sojo.
Road Movie (1997), by Norberto Ramos del Val.
No sé, no sé (1998), by Aitor Gaizka.
Agujetas en el alma (1998), by Fernando Merinero.
¿Qué hay de postre? (2000), by Helio Mira.
Dos más (2001), by Elías León Siminiani.
Looking for Chencho (2002), by Kepa Sojo.
Test (2007), by Marta Aledo and Natalia Mateo.
9 (2010), by Candela Peña.
El premio (2011), by Elías León Siminiani.
TV
Calle nueva (1997-1998)
A las once en casa (1998)
Al salir de clase (2000-2001)
Maneras de sobrevivir (2005)
Los Serrano (2007)
Cuestión de sexo (2007-2009)
Awards
Goya Awards: Best Supporting Actress, 2009, nominee
Unión de Actores: Best Supporting Actress, 2009
Málaga Festival: Biznaga de Plata for Best Actress, 2008, 2011
External links
IMDb
Castro, Pilar
\section{Introduction}
\label{Introduction}
In this work, we propose a novel neural-network-based framework called a sentiment-aspect attribution module (SAAM) to solve the problem of document level multi-aspect sentiment analysis (MASA).
The proposed SAAM module can be trained using a set of documents tagged with overall and aspect ratings. During inference, SAAM employs a \emph{latent sentiment-aspect attribution} (LSAA) mechanism, in which it assigns a latent aspect distribution to each sentence while also estimating its sentiment score. The estimated latent aspect distributions and sentiment scores for the sentences of a document are then pooled together to estimate the document-level review ratings.
To our knowledge, the proposed idea is the first neural network model capable of discovering both sentiment and aspect information at the sentence level using only document-level aspect-rating labels. Moreover, the framework introduced in this work is not a single, specific neural network architecture; instead, it is an add-on component that can be attached to other popular neural network architectures to support MASA and LSAA.
\begin{figure}[t]
\begin{mdframed}
\footnotesize
$\bullet$ {\color{negativeRed} Definitely not a 5 star resort I'm dumbfounded that this hotel gets good reviews and is so highly rated. \texttt{[1.23, Value]}}
$\bullet$ {\color{negativeRed} It's decidedly a 3 star property, not 5 stars as indicated. \texttt{[-0.04, Service]}}
$\bullet$ {\color{negativeRed} The rooms are very dated and run down, old crappy beds and pillows, an old tv and overall poorly maintained. \texttt{[-2.97, Room]}}
$\bullet$ {\color{negativeRed} The whole property is pretty run down and old-looking. \texttt{[-0.47, Location]}}
$\bullet$ {\color{negativeRed} The food is subpar, not one meal I had would be called great. \texttt{[-2.23, Service]}}
$\bullet$ {\color{negativeRed} The service is uneven and the staff is poorly trained and uninformed. \texttt{[-2.23, Service]}}
$\bullet$ {\color{positiveGreen} The beach is great, it's the only redeeming factor. \texttt{[1.27, Location]}}
$\bullet$ {\color{negativeRed} However the resort is a 1-hour taxi trip from the airport. \texttt{[1.68, Location]}}
\vspace{0.2cm}
\begin{tabular}{llll}
\textbf{Overall:} & \score{2}{5} & Value: & \score{1}{5} \\
Room: & \score{1}{5} & Location: & \score{4}{5} \\
Cleanliness: & \score{2}{5} & Service: & \score{2}{5} \\
\end{tabular}
\end{mdframed}
\caption{A sample hotel review with user-submitted ratings shown beneath. Sentiment scores and aspects assigned to the sentences by our model are shown in brackets.}
\label{hotel sample}
\end{figure}
\begin{figure}[t]
\begin{mdframed}
\footnotesize
$\bullet$ {\color{negativeRed} This beer is yellow, fizzy, and clearly meant for washing dirt out of your mouth after mowing the lawn. \texttt{[1.035, Appearance]}}
$\bullet$ {\color{negativeRed} I'm not even sure it's good for that. \texttt{[3.245, Taste]}}
$\bullet$ {\color{negativeRed} It's definitely yellow and fizzy, with no head to speak of, and zero lacing. \texttt{[-1.27, Appearance]}}
$\bullet$ {\color{negativeRed} It almost smells like a loaf of bread, and nearly tastes the same. \texttt{[4.255, Aroma]}}
$\bullet$ {\color{negativeRed} It's very earthy and grainy with nary a hop to be found. \texttt{[3.58, Taste]}}
$\bullet$ {\color{negativeRed} Man, I love me some Caldera, but I would rather drink a Bud Light than this on a hot summer day. \texttt{[1.845, Appearance]}}
$\bullet$ {\color{negativeRed} Sorry guys, but this beer gets an F. \texttt{[1.495, Taste]}}
\vspace{0.2cm}
\begin{tabular}{llll}
\textbf{Overall:} & \score{1}{5} & & \\
Appearance: & \score{1}{5} & Taste: & \score{1}{5} \\
Palate: & \score{1}{5} & Aroma: & \score{1.5}{5} \\
\end{tabular}
\end{mdframed}
\caption{A sample beer review with user-submitted ratings shown beneath. Sentiment scores and aspects assigned to the sentences by our model are shown in brackets.}
\label{beer sample}
\end{figure}
Fig.~\ref{hotel sample} and Fig.~\ref{beer sample} show two sample reviews from the hotel review and beer review datasets. The latent aspect and sentiment score of each sentence, estimated using our SAAM framework, are displayed in brackets. We build four variations of our SAAM framework (two classification and two regression) to demonstrate the possibilities available. We stack these variations of SAAM on top of a CNN \cite{kim2014convolutional} and a GRU-based RNN to demonstrate the framework's ability to generalize, as well as to compare the performance between them.
Experimental results on the TripAdvisor hotel review dataset and the BeerAdvocate beer review dataset show the effectiveness of the proposed approaches, with performance improvements over the corresponding base models as well as other baselines on several metrics for the classification and regression variations of the MASA task. Additionally, we evaluate our model's ability to attribute an aspect label to each sentence of a document, using manually labeled data as well as a heuristic keyword approach. We publish these processed datasets, together with the sentence-level aspect labels, to promote research on this novel task.
In the last section of this paper, we explore a novel capability of the proposed model --- extracting sentiment snippets for each aspect along with a rating score. This is a useful by-product of the model and can be helpful in other tasks such as summarization.
\section{Background and Motivation}
\label{sec:bg}
\subsection{Related Works}
In \cite{wang2010latent}, the multi-aspect rating task was performed using generative modeling. Later, in \cite{wang2011latent}, a unified generative model for rating analysis was proposed that did not require explicit aspect keyword supervision. However, that model does not utilize the aspect ratings of a document; instead, it uses overall ratings to discover latent aspects and estimate ratings for each aspect. A supervised LDA-like \cite{mcauliffe2008supervised} scheme was proposed in \cite{titov2008joint}, and later in \cite{lu2011multi}, that regressed the local and global topics (aspects) of reviews against the overall and aspect ratings of each review. Ranking algorithms have been designed either to identify important aspects \cite{yu2011aspect} or to predict aspect ratings without discovering them \cite{snyder2007multiple}. Another model used document-level multi-aspect ratings as a form of ``weak supervision'' to uncover sentence aspects; while quite successful at sentence aspect attribution, it was not intended for estimating sentiment ratings \cite{mcauley2012learning}.
There are many research efforts around the SemEval 2015 and 2016 ABSA datasets \cite{pontiki2016semeval}. In these datasets, both aspect and sentiment polarity labels are available at the sentence and document levels. As such, works such as \cite{tang2016effective,ruder2016hierarchical} address a somewhat different problem than the one in this paper. Both of these works utilize sentence-level labels that are not truly available in real-world review datasets and require labor-intensive annotation. Furthermore, those datasets include an extensive set of aspect categories. In contrast, real-world datasets such as the TripAdvisor review dataset have a small fixed set of aspects (roughly 3--5) that users rate at the document level. These differences in problem setting ultimately lead to very different solutions and models, and we believe both problem settings have their value.
In recent years, deep learning-based models have dramatically changed the field of natural language processing and significantly improved the performance of document classification \cite{yang-etal-2016-leveraging, boumber2018experiments, zhang2020birds}, machine translation \cite{bahdanau2016neural}, and language modeling \cite{devlin2019bert}. Convolutional neural networks (CNNs) \cite{kim2014convolutional}, recurrent neural networks such as LSTMs/GRUs \cite{tang2016effective, ruder2016hierarchical}, and, more recently, pre-trained BERT-based architectures have been proposed to solve the problem of sentiment analysis, and these models have significantly advanced the state of the art. Pre-trained transformer-like architectures such as BERT, with an extra task-specific layer, have been fine-tuned on domain reviews for aspect extraction and sentiment classification separately \cite{xu-etal-2019-bert}.
\subsection{Why SAAM}
Most of the previously mentioned deep learning classification models are built on top of some form of \textit{base model} (also called an \textit{encoder}) that takes in an embedded sequence of text and connects it to one or several fully connected layers (also called \textit{decoders} or \textit{classification heads}) that ultimately estimate the probability distribution at the document level. While much progress has been made in improving the expressiveness of these base models, little attention has been paid to the connection between the token-level outputs of the base models and the final prediction outputs. These token-to-doc connections are often made either by max-pooling/average-pooling \cite{kim2014convolutional,jeremy2018} or by directly using the last/first token's output embedding as the document-level embedding \cite{ruder2016hierarchical, xu-etal-2019-bert}.
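These token-to-doc connection schemes can be illustrated concretely. The numpy sketch below uses random toy features rather than any real encoder output; it shows only that each scheme collapses the entire token sequence into a single document vector, which is the bottleneck discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.standard_normal((7, 4))  # toy encoder output: 7 tokens x 4 features

doc_max = tokens.max(axis=0)    # max-pooling over the sequence
doc_avg = tokens.mean(axis=0)   # average-pooling over the sequence
doc_last = tokens[-1]           # last token's embedding (RNN-style)
doc_first = tokens[0]           # first token's embedding ([CLS]-style)

# Every scheme reduces the whole sequence to one d-dimensional vector,
# regardless of how many tokens the document contains.
for doc in (doc_max, doc_avg, doc_last, doc_first):
    assert doc.shape == (4,)
```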
We believe these existing token-to-doc connection schemes are not expressive enough and can become an information bottleneck in both the training and inference stages. In comparison, our SAAM framework provides an expressive connection between each sentence and the document-level outputs. In doing so, the SAAM framework can further estimate the latent aspect distribution of each sentence, along with its sentiment rating score. Such fine-grained analysis capability, which we refer to as LSAA, provides more insight into the data, because a typical document-level sentiment classification or regression is an aggregation of sentiments expressed in various sentences across different aspects.
Secondly, our model only requires overall and aspect-level document ratings during the training stage, which can be acquired from most online review systems that use formats similar to those illustrated in Fig.~\ref{hotel sample} and \ref{beer sample}. This means that the architecture does not require any sentence-level aspect or sentiment supervision and can be easily applied to most existing review datasets and systems.
Lastly, by assigning each sentence to a proper aspect, the SAAM framework's LSAA capability allows the generation of aspect-specific sentiment snippets. This feature is similar to a summarization system, where the summary is built by choosing the relevant sentences under different latent aspects. These three major differences not only allow our model to improve upon current MASA methods, but also account for variations of sentiment analysis tasks under different perspectives.
\section{Sentiment-Aspect Attribution Module}
\label{models}
\subsection{Problem Formulation}
Formally, we will refer to the text content of a review simply as a \textit{review} in the remainder of this paper, and denote a single review by $r$. We use $s_i$ to refer to the $i$th sentence of a document; a document thus consists of $\left|s\right|$ sentences. The set of factors that can be evaluated and rated by a reviewer is referred to as the set of aspects, denoted by $A$, and $\lvert A \rvert$ denotes the cardinality of $A$.
For example, for the hotel review data we are working with:
$$A=\left\{Value,Room,Location,Cleanliness,Service\right\}$$
The actual overall rating and aspect ratings associated with a review $r$ are denoted $R_{overall} (r)$ and $R_{aspects} (r)$. To correspond to the 5-star rating scheme, we assume that the overall rating is a scalar and the aspect ratings form a vector of $\lvert A \rvert$ elements:
$R_{overall} (r) \in \left\{1,2,3,4,5\right\}$ and $R_{aspects} (r) \in \left\{1,2,3,4,5\right\}^{\left|A\right|}$
\subsection{SAAM Classification--1 (SAAM-C1)}
\label{C1}
\begin{figure}[t]
\includegraphics[width=\linewidth]{2020_Draw_v3.png}
\caption{Architecture of SAAM Classification - 1}
\label{fig:C1}
\end{figure}
The first variation of the SAAM classification model estimates the overall rating class directly from the features generated for all sentences by the convolution layer or the GRU cell. Each sentence's features are also passed into a fully connected softmax layer to estimate the 5-class rating distribution of the corresponding sentence. There is one such layer for every sentence in an input $r$, with shared weights. We refer to these layers as \textit{Rating Score Layers}. Another set of weights is used to estimate the aspect distribution of each sentence; we refer to these layers as \textit{Aspect Attribution Layers}. The resulting aspect distributions at the \textit{Aspect Attribution Layers} are then used to scale the rating scores from the \textit{Rating Score Layer} of each sentence, such that sentences with a high probability of belonging to a specific aspect exert a stronger influence on the ultimate aspect rating distributions at the document level. All scaled rating scores are then summed up per aspect to estimate the final rating class for each aspect. The structure of this SAAM variation, together with the underlying K-CNN base, is visualized in Fig.\ref{fig:C1}.
More formally, given any base model such as a CNN or GRU and an input document $r$, we can generate a vector representation $\bm{t}$ of dimension $d$ for each sentence of the document. SAAM utilizes these sentence-level feature vectors generated by the base network to estimate latent distributions and, ultimately, the sentiments of the document. For CNNs, these sentence representations are usually generated by max-pooling the filter activations along the sentence length dimension. For RNNs, this embedding can be obtained from the final output at the last token. For BERT-based models, the sentence embedding is usually generated by averaging all outputs along the sequence dimension or by using the output of the \texttt{[CLS]} token. Thus, for an input document with $\left|s\right|$ sentences, a matrix $\bm{u}$ of dimension $|s| \times d$ can be obtained. This process is illustrated on the left side of Fig.\ref{fig:C1}.
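For the CNN case, the max-pooling step that turns per-sentence filter activations into the matrix $\bm{u}$ can be sketched as follows (a minimal NumPy illustration; the function name and shapes are our own, not part of any released code):

```python
import numpy as np

def sentence_embeddings_maxpool(conv_activations):
    """Build the (|s|, d) sentence-feature matrix u from a CNN base.

    conv_activations: list of arrays, one per sentence, each of shape
    (sentence_length_i, d) holding filter activations. Max-pooling over
    the length dimension yields one d-dimensional vector per sentence.
    """
    return np.stack([a.max(axis=0) for a in conv_activations])
```

The same interface applies to an RNN or BERT base; only the per-sentence reduction (last hidden state, mean, or \texttt{[CLS]} output) changes.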
To obtain a probability distribution of the overall rating label for the entire review, all of the features in $\bm{u}$ are then passed into a fully connected softmax layer, the \textit{Overall Rating Layer}. Weights corresponding to the overall rating are labeled with a superscript $o$. This operation is shown in Fig.\ref{fig:C1}, where a green arrow is marked with Eq.\ref{eq:C1_L_Overall}.
\begin{align*}
\label{eq:C1_L_Overall} L_{overall} (r)=softmax(\bm{W}^o \cdotp \bm{u} + \bm{b}^o ) \numberthis
\end{align*}
As discussed at the beginning of this section, to estimate the rating distributions of the other aspects and carry out LSAA, feature values extracted from each sentence are fed into a \textit{rating score layer} and an \textit{aspect attribution layer}. For sentence $s_i$, the rating scores (an un-normalized distribution) over the $|C|$ rating classes are calculated by
\begin{align*}
\label{eq:11} score\left(s_i\right)=\left(\bm{W}^a \bm{t}_i + \bm{b}^a \right) \numberthis \\
\textrm{where} \quad \bm{W}^a \in \mathbb{R}^{d \times |C|} \quad \textrm{and} \quad \bm{b}^a \in \mathbb{R}^{|C|}
\end{align*}
In the case of a 5-star rating scheme, $|C|$ equals 5. This operation is demonstrated in Fig.\ref{fig:C1}, where the \textit{rating score layer} of each sentence is shown in yellow; four such layers are drawn.
Regarding the \textit{aspect attribution layer}, for a review with $|A|$ aspects in total, we actually calculate the aspect attribution of sentence $s_i$ over $|A|+1$ aspects. The additional last element in each attribution distribution, which we refer to as the \textit{attribution to other-aspect}, is designed to relax the model's constraints to some extent: it allows the attribution process to ignore the rating scores of some sentences when it deems necessary. Empirically, this structure makes the optimization process faster and allows the models to achieve better results.
\begin{align*}
\label{eq:12} aspect(s_i) = softmax(\bm{W}^r \bm{t}_i + \bm{b}^r ) \numberthis \\
\textrm{where} \quad \bm{W}^r \in \mathbb{R}^{d \times (|A|+1)} \quad \textrm{and} \quad \bm{b}^r \in \mathbb{R}^{(|A|+1)}
\end{align*}
Notice that in Eq.\ref{eq:11} and Eq.\ref{eq:12}, the same $\bm{W}^a$ and $\bm{W}^r$ are shared across all sentences. Four \textit{aspect attribution layers} are shown in Fig.\ref{fig:C1}, marked in blue.
Here, computing $aspect\left(s_i\right)$ results in a vector in $\mathbb{R}^{\left|A\right|+1}$ whose first $\left|A\right|$ elements represent how strongly sentence $s_i$ is associated with each aspect. We use $aspect\left(s_i\right)_{\left[1:|A|\right]}$ to denote these first $|A|$ elements. The last element of each aspect attribution, denoted $aspect\left(s_i\right)_{\left[\lvert A \rvert + 1\right]}$, is not associated with any actual aspect; we refer to it as the \textit{attribution to other-aspect}. As we will explain later, this additional attribution dimension gives the model the flexibility to determine that some sentences do not belong to any of the given aspects.
The first $|A|$ elements of the \textit{aspect attribution layer} then distribute the output of the \textit{rating score layer} into the respective aspects. More specifically, the scaled score for aspect $j$ of sentence $s_i$ is
\begin{align*}
\label{eq:13} scaledScore\left(s_i\right)^j = aspect\left(s_i\right)_{\left[j\right]} \cdotp score(s_i) \numberthis
\end{align*}
In Fig.\ref{fig:C1} this process is marked in red. Notice that $scaledScore\left(s_i\right)$ is equivalent to an outer product of the previous two layers, resulting in a matrix of size $(|A|+1) \times |C|$, where row $j$ of this matrix is $scaledScore\left(s_i\right)^j$.
Lastly, these scaled scores for all sentences in a review are summed up element-wise per aspect. A softmax is then applied to the resulting vector to determine the distribution over rating classes for each aspect of document $r$:
\begin{equation} \label{eq:14} L_{aspect}^j(r) = softmax\left( \sum_{i=1}^{|s|} scaledScore(s_i)^j \right) \end{equation}
This is shown in the bottom right corner of Fig.\ref{fig:C1}, marked in light blue. Note that $L_{aspect}$ only contains the rating distributions of the aspects --- it does not include the overall rating distribution, since the overall rating distribution is evaluated directly from all sentence features $\bm{u}$ at the \textit{Overall Rating Layer}. It is also worth noting that the distribution $L^{|A|+1}_{aspect}(r)$ is not used for estimating any label of the input document; it is the result of the \textit{attribution to other-aspect} and is hence disregarded.
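Putting Eqs.\ref{eq:11}--\ref{eq:14} together, the per-document computation of the SAAM-C1 aspect head can be sketched in a few lines of NumPy (an illustrative sketch; variable names and shapes are our own):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def saam_c1_aspect_forward(t, W_a, b_a, W_r, b_r):
    """SAAM-C1 aspect head for one document.

    t   : (|s|, d)    sentence embeddings from the base network
    W_a : (d, |C|)    rating score layer weights (Eq. 11)
    W_r : (d, |A|+1)  aspect attribution layer weights (Eq. 12)
    Returns an (|A|+1, |C|) matrix of rating distributions; the last
    row (attribution to other-aspect) is discarded at prediction time.
    """
    score = t @ W_a + b_a                     # (|s|, |C|)
    aspect = softmax(t @ W_r + b_r, axis=-1)  # (|s|, |A|+1)
    # Outer product per sentence, summed over sentences (Eqs. 13-14)
    scaled = np.einsum('sa,sc->ac', aspect, score)
    return softmax(scaled, axis=-1)
```

Each row of the returned matrix sums to one, matching the per-aspect softmax in Eq.\ref{eq:14}.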
\subsection{SAAM Classification--2 (SAAM-C2)}
\label{C2}
The second variation of the classification model is very similar to the first. The only difference is that we do not use a separate weight $\bm{W}^o$ to directly estimate the overall rating distribution. Instead, the overall rating is predicted in the same manner as the other aspects, through the sentence aspect attribution process. More specifically, for each sentence $s_i$, an \textit{aspect attribution layer} of size $\left|A\right|+2$ is used instead:
\begin{align*}
\label{eq:15}
aspect\left( s_i \right) = softmax \left( \bm{W}^r \bm{t}_i + \bm{b}^r \right) \numberthis \\
\textrm{where} \quad \bm{W}^r \in \mathbb{R}^{d \times \left(|A|+2\right)} \quad \textrm{and} \quad \bm{b}^r \in \mathbb{R}^{\left(|A|+2\right)}
\end{align*}
Naturally, to estimate the overall rating of a review, we use the $\left(|A|+1\right)$th element of the attribution layer, $aspect\left(s_i\right)_{\left[\left|A\right|+1\right]}$, to scale the sentence-level rating scores toward the overall rating. These scores are then summed and normalized with a softmax operation, as in SAAM-C1.
The main advantage of this modification over SAAM-C1 is the significant reduction in the number of parameters, as the original overall weight matrix $\bm{W}^o$ is large. SAAM-C2 thus uses less memory and is potentially less prone to overfitting. Moreover, this scheme can estimate the latent aspect attribution toward the overall aspect, if such information is a point of interest.
\subsection{SAAM Regression (SAAM-R)}
\label{R1}
Apart from the more traditional rating classification task, we also present a variation of SAAM where the output layers are changed to real-valued regression of rating scores, while retaining the sentiment-aspect attribution mechanism. In this setting, the 5-star rating distribution is translated to a real value in the range of 1 to 5.
The regression variant is structurally similar to the first classification model. We still connect the features from all sentences to the output layer for the overall score. However, in this case the overall output of the network is no longer a distribution over rating classes, but a score without a non-linearity. In addition, the score is normalized by the sentence count of the corresponding document:
\begin{align*}
\label{eq:18}
L_{overall} (r)= \frac{(\bm{W}^o \bm{u} + b^o )}{|s|} \quad \textrm{where} \quad \bm{W}^o \in \mathbb{R}^{|s| \times d} \numberthis
\end{align*}
Similarly, for each sentence we have a scalar score
\begin{align*}
\label{eq:19}
score\left(s_i \right)=\bm{W}^a \bm{t}_i + b^a \numberthis
\end{align*}
The \textit{aspect attribution layer}, on the other hand, is kept the same as in SAAM-C1 (Eq.\ref{eq:12}). The sentence-level scalar score of sentence $s_i$ is then scaled by the aspect weights; for aspect $j$, this is calculated by
\begin{equation} \label{eq:21} scaledScore \left( s_i \right)^j = aspect\left( s_i \right)_{\left[ j \right]} \times score\left( s_i \right) \end{equation}
This operation results in a total of $|A|$ scalar scores for each sentence, with each value corresponding to one of the aspects. The final score for aspect $j$ is calculated by
\begin{equation} \label{eq:22} L_{aspect}^j (r) = \frac{\sum_{i=1}^{|s|} scaledScore\left( s_i \right)^j }{\sum_{i=1}^{|s|} aspect\left( s_i \right)_{[j]} } \end{equation}
Notice that the regression scores for the aspects are normalized differently from the overall score in Eq.\ref{eq:18}: the score for each aspect is normalized by the total probability assigned to that aspect by the attribution layer, instead of by the number of sentences in the corresponding review. This normalization makes the aspect scoring process equivalent to a weighted average of the sentence aspect scores, with the attribution distribution serving as the weights.
This difference in normalization arises because the overall score is designed to be an average of the sentence scores: without the sentence-count normalization, a longer review with many positive sentences could exceed the 1--5 score range, assuming the padding sentences receive scores close to 0. The attribution layer, on the other hand, can ``throw away'' the scores of padding sentences when calculating the aspect scores by assigning them 100\% weight on the \textit{attribution to other-aspect}, i.e., $ aspect\left( s_i \right)_{\left[ |A|+1 \right]}$. Hence the sentence-count normalization is no longer needed.
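The weighted-average normalization of Eq.\ref{eq:22} can be sketched as follows (illustrative NumPy; the small epsilon guard against an empty aspect is our addition):

```python
import numpy as np

def saam_r_aspect_scores(score, aspect):
    """Aspect scores for one document under SAAM-R (Eq. 22).

    score : (|s|,)       scalar sentence sentiment scores (Eq. 19)
    aspect: (|s|, |A|+1) attribution distribution per sentence
    Returns (|A|,) aspect ratings as attribution-weighted averages of
    the sentence scores; the other-aspect column is dropped, which is
    how padding sentences can be excluded from every aspect.
    """
    weights = aspect[:, :-1]                    # drop other-aspect column
    num = (weights * score[:, None]).sum(axis=0)
    den = weights.sum(axis=0)
    return num / np.maximum(den, 1e-8)          # guard against empty aspects
```

A useful sanity check on this weighting: if every sentence carries the same score, every aspect receives exactly that score, regardless of how the attribution mass is distributed.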
\subsection{Intuitions}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{optimization_2.png}
\caption{Optimization of attribution layer}
\label{fig:opti}
\end{figure}
We provide a simplified example of how the latent attribution can discover the correct aspect given enough examples. Fig.\ref{fig:opti} shows two training documents, each containing a single sentence, being processed by a simplified SAAM-R model. We pick the regression model and only two aspects, Service and Room, for easier demonstration; the idea generalizes to the other variants as well as to more aspects.
Blue boxes are internal parameters produced by SAAM's Rating Scoring Layer and Attribution Layer (marked as \textit{sentiment} and \textit{attribution} respectively). Green boxes are output values resulting from the element-wise product of the former two layers. Lastly, orange boxes are ground truth scores.
Recall that for SAAM-Regression, the document-level aspect rating predictions are calculated by
\begin{equation} \hat{y} = sentiment \otimes attribution \end{equation}
and the loss can be expressed simply as $loss = (\hat{y} - y)^2$.
For the first sentence, ``Good Service'', assume the sentiment layer produces the correct score but the attribution layer is wrong, attributing all of the sentiment score to the Room aspect. When optimizing the sentiment layer and attribution layer using gradient descent, the gradients' directions are indicated with orange arrows next to each blue value. As readers can see, the attribution layer will be adjusted slightly toward the correct attribution, which is Service 100\% and Room 0\%.
In the second example, ``Bad Service'', assume the attribution layer this time produces the correct aspect distribution, but the sentiment layer mistakenly produces a very high sentiment score. In this case, gradient descent will pass through the attribution layer and decrease the sentiment score.
In both examples, the gradient descent step produces some side effects: the sentiment layer in the first case and the attribution layer in the second were optimized in the wrong direction. Ultimately, however, given enough examples and training steps, the system should converge correctly. Imagine a third sentence, ``Good Room'', with a correct ground truth; under these circumstances the optimization process has only one possible solution for the blue boxes.
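The first case above can be checked numerically. The sketch below hand-derives the gradients of the toy loss for the ``Good Service'' example; the concrete values are illustrative choices of ours, not the figure's:

```python
import numpy as np

# Toy one-sentence SAAM-R with two aspects [Service, Room]:
# y_hat = sentiment * attribution, loss = ||y_hat - y||^2.
def toy_grads(sentiment, attribution, y):
    y_hat = sentiment * attribution
    d_sent = float((2 * (y_hat - y) * attribution).sum())  # dL/d sentiment
    d_attr = 2 * (y_hat - y) * sentiment                   # dL/d attribution
    return d_sent, d_attr

# "Good Service": sentiment correct (+2), attribution wrong (all on Room),
# ground truth y = [Service: 2, Room: 0].
d_sent, d_attr = toy_grads(2.0, np.array([0.0, 1.0]), np.array([2.0, 0.0]))
# d_attr[0] < 0, so descent raises the Service attribution;
# d_attr[1] > 0, so descent lowers the Room attribution;
# d_sent  > 0 is the side effect: the (correct) sentiment is nudged down.
```

This matches the qualitative picture: the attribution moves toward the correct aspect while the sentiment layer absorbs a small erroneous update that later examples can correct.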
\section{Experiments and Evaluations}
\label{experiment}
\subsection{Data}
\label{data}
We use the TripAdvisor hotel review dataset from \cite{wang2010latent} and BeerAdvocate data previously used in works such as \cite{mcauley2012learning} to examine the performance of our framework.
The TripAdvisor dataset consists of 108,891 reviews across 1,850 hotels. Each hotel review in the original raw data is associated with one overall rating and five aspect ratings --- Value, Room, Location, Cleanliness, Service. For our experiment, only reviews with more than three sentences and all five aspects rated were selected. Out of the 14,906 reviews meeting these requirements, 75\% and 25\% of the documents were selected as training and testing sets, respectively, and 1,000 reviews were held out from the training set as a development set for tuning hyper-parameters. After determining the hyper-parameters, the models were re-trained using all available training samples.
A similar parsing and selection process was applied to the BeerAdvocate dataset, from which 100,000 beer reviews were selected for our experiment. Aspects associated with each beer review include Appearance, Taste, Palate, and Aroma; the aspect Palate can be roughly understood as ``mouthfeel''. The advantage of this dataset is that its aspects are more independent of each other --- whereas, in the case of hotel reviews, Value, Room, and Cleanliness are often strongly correlated. This property of the BeerAdvocate dataset allows us to better evaluate the correctness of the sentence-level aspect attribution process. As with the TripAdvisor dataset, 75\% and 25\% of the documents were selected as training and testing sets, respectively, and 1,000 reviews were held out as a development set for tuning model parameters.
We will use the TripAdvisor hotel review dataset to evaluate our classification modules SAAM-C1 and SAAM-C2, and the BeerAdvocate beer review dataset to evaluate our regression variant SAAM-R. This is due both to this paper's space constraints and to the fact that BeerAdvocate review ratings come in 0.5 increments.
\subsection{Evaluation of Document-Level MASA}
Because our proposed SAAM is an add-on module that can be combined with almost all modern neural network architectures, the document level sentiment analysis performance of the complete model (base + SAAM) is determined by both components. In the following experiments, we opt to use two representative models as base models to better highlight the characteristics of SAAM.
The first base model is K-CNN, proposed in \cite{kim2014convolutional}. In its original form, the model uses a total of 300 convolutional filters (100 of each size) to extract features from each review. A fully connected softmax layer is then applied on top to estimate the label probability distribution. We trained a separate model for each aspect of the reviews as baselines. We then replace the fully connected layers at the end of K-CNN with our SAAM to demonstrate that we can improve the performance of the overall model.
Furthermore, we have also included a version of the CNN, which we refer to as Expanded CNN (E-CNN), to demonstrate that the performance improvement we observed from using SAAM is not merely due to an increase in the number of parameters. Specifically, in this baseline, reviews are also divided into sentences. Each sentence is then passed to the CNN layer to generate a 300-dimensional embedding. The features generated from all sentences are then concatenated and passed to a fully connected softmax layer for classification. Notice that this is similar to how the overall rating is estimated in the SAAM-C1 scheme (Section \ref{C1}), shown in Eq.\ref{eq:C1_L_Overall}. A total of $|A|$ such fully connected softmax layers are included in the model to concurrently train and estimate all aspects.
The second base model we chose is a GRU based RNN \cite{cho2014learning}. Similar to CNN, we set the hidden state vector to 300 and trained a separate model for each aspect of the dataset as baselines. We then replace the final fully-connected layers with our SAAM to demonstrate its flexibility and performance improvement over the base models.
We have also included three classification baselines, Hierarchical LSTM \cite{ruder2016hierarchical}, Doc2Vec \cite{le2014distributed}, and SVM \cite{joachims1999making}, for the TripAdvisor hotel review classification task, and two regression baselines, Linear Regression and SVM regression \cite{joachims1999making}, for the BeerAdvocate beer review regression task. We include these reference baselines to help readers gauge the difficulty of our task and datasets, and use them as benchmarks for estimating the expressiveness of our proposed modules. The Hierarchical LSTM proposed in \cite{ruder2016hierarchical} consists of two levels of LSTM networks: one works at the word level to generate sentence embedding vectors; the other takes these sentence embedding vectors as input and estimates the sentiment polarity of each sentence in a document. To adapt this model as one of our baselines, we modify it by concatenating the last output vectors from the sentence-level bi-directional LSTM and feeding the resulting vector to several dense layers, where each layer corresponds to one of the aspects.
We also note that the additional computational time required to train SAAM is not significant, as the additional parameter matrices $W^a$ and $W^r$ are relatively small. We tested all SAAM variants on one Nvidia Titan RTX GPU; training took around 10\% to 30\% longer than for the base CNN and RNN models.
\subsection{MASA Results}
\label{hotel result}
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllllllllllllll}
\thickhline
& \multicolumn{2}{c}{} & \multicolumn{2}{c}{Aspect 1} & \multicolumn{2}{c}{Aspect 2} & \multicolumn{2}{c}{Aspect 3} & \multicolumn{2}{c}{Aspect 4} & \multicolumn{2}{c}{Aspect 5} & Avg & Avg \\
& \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{Value} & \multicolumn{2}{c}{Room} & \multicolumn{2}{c}{Location} & \multicolumn{2}{c}{Cleanliness} & \multicolumn{2}{c}{Service} & \multicolumn{2}{l}{} \\
\multirow{-3}{*}{} & Acc & MSE & Acc & MSE & Acc & MSE & Acc & MSE & Acc & MSE & Acc & MSE & Acc. & MSE \\
\hline
K-CNN & \cellcolor[HTML]{C0C0C0}58.0 & 0.715 & \cellcolor[HTML]{C0C0C0}50.8 & 0.943 & \cellcolor[HTML]{C0C0C0}45.1 & 1.061 & \cellcolor[HTML]{C0C0C0}44.8 & 1.302 & \cellcolor[HTML]{C0C0C0}47.5 & 0.995 & \cellcolor[HTML]{C0C0C0}50.3 & 1.319 & \cellcolor[HTML]{C0C0C0}47.70 & 1.124 \\
E-CNN & \cellcolor[HTML]{C0C0C0}58.6 & 0.600 & \cellcolor[HTML]{C0C0C0}49.9 & 0.883 & \cellcolor[HTML]{C0C0C0}41.8 & 1.135 & \cellcolor[HTML]{C0C0C0}42.9 & 1.107 & \cellcolor[HTML]{C0C0C0}46.1 & 1.076 & \cellcolor[HTML]{C0C0C0}48.6 & 1.224 & \cellcolor[HTML]{C0C0C0}45.86 & 1.085 \\
CNN+SAAM-C1 & \cellcolor[HTML]{C0C0C0}58.3 & 0.706 & \cellcolor[HTML]{C0C0C0}\textbf{51.6} & 0.888 & \cellcolor[HTML]{C0C0C0}\textbf{47.2} & \textbf{0.985} & \cellcolor[HTML]{C0C0C0}44.7 & 1.308 & \cellcolor[HTML]{C0C0C0}\textbf{50.2} & 1.042 & \cellcolor[HTML]{C0C0C0}\textbf{51.6} & \textbf{1.138} & \cellcolor[HTML]{C0C0C0}\textbf{49.06} & 1.072 \\
CNN+SAAM-C2 & \cellcolor[HTML]{C0C0C0}58.0 & 0.62 & \cellcolor[HTML]{C0C0C0}\textbf{51.8} & \textbf{0.803} & \cellcolor[HTML]{C0C0C0}\textbf{48.2} & \textbf{0.906} & \cellcolor[HTML]{C0C0C0}\textbf{45.3} & 1.166 & \cellcolor[HTML]{C0C0C0}\textbf{49.3} & \textbf{0.927} & \cellcolor[HTML]{C0C0C0}\textbf{51.0} & \textbf{1.039} & \cellcolor[HTML]{C0C0C0}\textbf{49.12} & \textbf{0.968} \\
\hline
RNN & \cellcolor[HTML]{C0C0C0}58.2 & 0.647 & \cellcolor[HTML]{C0C0C0}51.4 & 0.891 & \cellcolor[HTML]{C0C0C0}44.9 & 1.158 & \cellcolor[HTML]{C0C0C0}43.5 & 1.467 & \cellcolor[HTML]{C0C0C0}45.9 & 1.214 & \cellcolor[HTML]{C0C0C0}48.4 & 1.209 & \cellcolor[HTML]{C0C0C0}48.72 & 1.098 \\
RNN+SAAM-C1 & \cellcolor[HTML]{C0C0C0}56.6 & 0.722 & \cellcolor[HTML]{C0C0C0}\textbf{54.9} & \textbf{0.772} & \cellcolor[HTML]{C0C0C0}\textbf{49.0} & \textbf{0.976} & \cellcolor[HTML]{C0C0C0}\textbf{45.8} & 1.407 & \cellcolor[HTML]{C0C0C0}\textbf{49.8} & \textbf{1.041} & \cellcolor[HTML]{C0C0C0}\textbf{51.5} & \textbf{1.100} & \cellcolor[HTML]{C0C0C0}\textbf{51.27} & 1.003 \\
RNN+SAAM-C2 & \cellcolor[HTML]{C0C0C0}\textbf{60.2} & \textbf{0.625} & \cellcolor[HTML]{C0C0C0}\textbf{54.1} & \textbf{0.824} & \cellcolor[HTML]{C0C0C0}\textbf{49.5} & \textbf{0.969} & \cellcolor[HTML]{C0C0C0}\textbf{46.6} & \textbf{1.279} & \cellcolor[HTML]{C0C0C0}\textbf{50.4} & \textbf{1.021} & \cellcolor[HTML]{C0C0C0}\textbf{52.3} & \textbf{1.052} & \cellcolor[HTML]{C0C0C0}\textbf{52.19} & \textbf{0.962} \\
\hline
Hi-LSTM & \cellcolor[HTML]{C0C0C0}61.6 & 0.533 & \cellcolor[HTML]{C0C0C0}54.7 & 0.751 & \cellcolor[HTML]{C0C0C0}46.4 & 1.029 & \cellcolor[HTML]{C0C0C0}44.8 & 1.216 & \cellcolor[HTML]{C0C0C0}47.1 & 1.052 & \cellcolor[HTML]{C0C0C0}48.7 & 1.234 & \cellcolor[HTML]{C0C0C0}50.5 & 0.969 \\
Doc2Vec & \cellcolor[HTML]{C0C0C0}54.1 & 0.829 & \cellcolor[HTML]{C0C0C0}47.8 & 1.087 & \cellcolor[HTML]{C0C0C0}42.3 & 1.305 & \cellcolor[HTML]{C0C0C0}44.7 & 1.439 & \cellcolor[HTML]{C0C0C0}45.1 & 1.291 & \cellcolor[HTML]{C0C0C0}47.3 & 1.585 & \cellcolor[HTML]{C0C0C0}45.44 & 1.341 \\
SVM & \cellcolor[HTML]{C0C0C0}29.2 & 1.892 & \cellcolor[HTML]{C0C0C0}35.5 & 2.368 & \cellcolor[HTML]{C0C0C0}33.9 & 2.368 & \cellcolor[HTML]{C0C0C0}8.4 & 9.010 & \cellcolor[HTML]{C0C0C0}32.5 & 1.917 & \cellcolor[HTML]{C0C0C0}33.3 & 2.375 & \cellcolor[HTML]{C0C0C0}28.72 & 3.608 \\
\thickhline
\end{tabular}
}
\caption{Performance of proposed SAAM classification variants against corresponding base models and other baselines, experimented on TripAdvisor hotel review dataset.}
\label{table:hotel result}
\end{table*}
Table \ref{table:hotel result} shows the results of the classification variants SAAM-C1 and SAAM-C2 on top of CNN and GRU-RNN, compared against the base versions of these two models, evaluated on the TripAdvisor testing set. Both prediction accuracy (Acc) and mean squared error (MSE) are calculated for the overall rating and the other five aspects; for MSE, the predicted classes are treated as real values. The right-most two columns give the accuracy and MSE averaged over the five aspects for easier comparison. We use bold text to highlight statistically significant performance improvements of the SAAM-equipped models over their corresponding base models.
From Table \ref{table:hotel result}, we note that our proposed SAAM-C1 and SAAM-C2 models provide a consistent performance improvement over their corresponding base model counterparts. More specifically, stacking SAAM-C1 and SAAM-C2 on top of CNN and RNN improves the aspect sentiment classification accuracy by 2 to 3 percent on average. For certain aspects, such as \textit{Room} and \textit{Cleanliness}, the improvements in accuracy are as much as 5 percent. We also note that there is little to no improvement on the \textit{Overall} rating classification. One reason could be that overall sentiment classification is relatively easy: the model does not need to learn aspect-specific feature combinations, and reviewer behavior is more consistent for overall ratings.
As a reference, Hi-LSTM provides an additional 1 to 4 percent of accuracy compared with the base RNN. This is likely due to the additional expressiveness offered by the second layer of LSTM, which can selectively pass sentence-level features through to the document-level output and allow a more accurate distribution estimation. In other words, the performance advantage of Hi-LSTM can be attributed to its more expressive sentence-to-document connection. By comparison, after combining SAAM-C1 and C2 with the RNN base model, the performance gaps between RNN and Hi-LSTM are eliminated and, in some cases, reversed, indicating that SAAM significantly improves the expressiveness and information flow from the sentence level to the document level.
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllllllllllll}
\thickhline
\multirow{2}{*}{} & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Aspect 1\\ Appearance\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Aspect 2\\ Taste\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Aspect 3\\ Palate\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Aspect 4\\ Aroma\end{tabular}} & \multicolumn{2}{c}{Average} \\
& MSE & R2 & MSE & R2 & MSE & R2 & MSE & R2 & MSE & R2 & MSE & R2 \\
\hline
E-CNN & 0.267 & 0.423 & 0.228 & 0.325 & 0.260 & 0.454 & 0.239 & 0.425 & 0.258 & 0.408 & 0.246 & 0.403 \\
CNN+SAAM-R & 0.264 & 0.431 & \textbf{0.208} & \textbf{0.386} & \textbf{0.207} & \textbf{0.564} & 0.220 & 0.471 & \textbf{0.219} & \textbf{0.498} & \textbf{0.213} & \textbf{0.480} \\
\hline
RNN & 0.256 & 0.448 & 0.209 & 0.383 & 0.231 & 0.514 & 0.237 & 0.429 & 0.243 & 0.445 & 0.235 & 0.443 \\
RNN+SAAM-R & \textbf{0.228} & \textbf{0.508} & 0.195 & 0.424 & \textbf{0.182} & \textbf{0.617} & \textbf{0.202} & \textbf{0.514} & \textbf{0.199} & \textbf{0.542} & \textbf{0.201} & \textbf{0.521} \\
\hline
Linear Regr & 0.307 & 0.338 & 0.255 & 0.246 & 0.266 & 0.440 & 0.287 & 0.308 & 0.285 & 0.346 & 0.273 & 0.335 \\
SVM & 0.272 & 0.414 & 0.226 & 0.332 & 0.235 & 0.505 & 0.253 & 0.391 & 0.252 & 0.421 & 0.242 & 0.412 \\
\thickhline
\end{tabular}
}
\caption{Performance of proposed SAAM regression variants against corresponding base models and other baselines, experimented on BeerAdvocate beer review dataset.}
\label{table:beer result}
\end{table*}
Table \ref{table:beer result} shows the performance of our SAAM regression models (SAAM-R) based on CNN and RNN, together with the base models and other reference baselines, evaluated on the BeerAdvocate beer review testing set. We use bold text to highlight statistically significant performance improvements of the SAAM-equipped models over their corresponding base models. Once again, adding our SAAM regression module on top of a base model significantly reduces the error compared with the base model alone. Among all aspects, we observe that the base CNN and RNN models perform relatively poorly on \textit{Taste} and \textit{Aroma}, which may be caused by the language describing these two aspects being very similar. The attribution mechanism in our model can alleviate this issue by redirecting the latent sentence-level sentiment to the correct aspect.
\subsection{Evaluation of Latent Sentence-Level Aspect Attribution}
One of our SAAM framework's key advantages is that it can leverage the latent aspect attributed to each sentence and organically combine them. In this section, we evaluate the LSAA facet of our models. We had two human labelers manually label 1,000 sentences with aspects in each of the datasets. For both datasets, the set of possible labels included the names of the existing aspects plus an additional label ``\textit{none}'', indicating the labeler thinks the sentence is not related to any of the aspects. On the hotel dataset, the two labelers achieved a Cohen's Kappa agreement score of 0.66, indicating substantial but not perfect agreement. The beer review dataset shows a better agreement score of 0.70, reinforcing our observation that aspects in beer reviews are more independent and unambiguous.
In addition to the human labelers, we take advantage of the review format many reviewers follow in the BeerAdvocate dataset as another set of ground truth. More specifically, many reviewers on BeerAdvocate use ``A:'', ``S:'', ``M:'' and ``T:'' to signify the beginning of corresponding review segments\footnote{``A'' for Appearance; ``S'' for Smell, corresponding to Aroma aspect; ``M'' for Mouthfeel, corresponding to Palate aspect; ``T'' for Taste.}. We selected around 16,000 sentences that have these prefixes and marked them with corresponding correct labels.
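This prefix-based labeling can be implemented with a simple pattern match. The helper below is a hypothetical sketch of ours, with the prefix-to-aspect mapping taken from the footnote:

```python
import re

# Map "A:", "S:", "M:", "T:" sentence prefixes to aspect labels,
# following the BeerAdvocate convention described in the footnote.
PREFIX_TO_ASPECT = {"A": "Appearance", "S": "Aroma",
                    "M": "Palate", "T": "Taste"}

def prefix_label(sentence):
    """Return the aspect implied by a review-segment prefix, or None."""
    m = re.match(r"\s*([ASMT])\s*:", sentence)
    return PREFIX_TO_ASPECT[m.group(1)] if m else None
```

Sentences without a recognized prefix receive no keyword label and are simply excluded from this ground-truth set.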
\begin{table}[]
\centering
\begin{tabular}{lccccc}
\hline
\hline
& Hotel 1 & Hotel 2 & Beer 1 & Beer 2 & Beer Keywd \\
CNN+C1 & 0.32 & 0.35 & - & - & - \\
CNN+C2 & 0.48 & 0.47 & - & - & - \\
CNN+R & - & - & 0.63 & 0.61 & 0.87 \\
\hline
GRU+C1 & 0.46 & 0.50 & - & - & - \\
GRU+C2 & 0.55 & 0.52 & - & - & - \\
GRU+R & - & - & 0.68 & 0.64 & 0.95 \\
\hline
\hline
\end{tabular}
\caption{Evaluation of our SAAM framework's ability to estimate latent sentence aspects. Accuracy is reported against two independent human labelings on both datasets and a keyword based labeling method on BeerAdvocate dataset.}
\label{table:aspect}
\end{table}
To obtain the sentence-level latent aspects attributed by SAAM, we look at the estimated latent aspect distribution ($aspect\left( s_i \right)$) for each sentence. If the dominant value of the learned aspect distribution matches the aspect assigned by the labeler, the attribution is counted as correct. Table \ref{table:aspect} shows these evaluation results for SAAM-C1 and SAAM-C2 on the hotel review dataset, and for SAAM-R on the beer review dataset. We observe that almost all model combinations can attribute sentences to aspects with reasonably high accuracy; among these, the GRU-based regression model is the best performer. Moreover, the regression models show even stronger agreement with the keyword-based labeling, which means the models have successfully learned these keywords and use them as strong signals when conducting latent aspect attribution.
It is worth noting that this LSAA task is non-trivial due to the inherent overlap between aspect categories, reviewer subjectivity, and the vague nature of some aspects. Considering the aspect is a latent variable and there are 4 to 5 potential classes, the above results indicate good performance.
\subsection{Snippet Extraction}
In addition to estimating the latent aspect distribution, SAAM can also estimate the latent sentiment distribution ($score\left( s_i \right)$) for each sentence. We believe combining the latent information discovered through LSAA opens many exciting opportunities for information extraction. This section demonstrates one such application, per-aspect review snippet extraction, inspired by existing review summarization work such as \cite{li2010structure}. It is particularly interesting for cases where the overall review rating is positive while one aspect is evaluated negatively (or vice versa), and whether our model can explain the discrepancy. Here, we show some qualitative results using SAAM-R to provide an intuitive understanding of this application and the SAAM framework.
\vspace{0.3cm}
\noindent
\textbf{Review 1, Overall 5 Stars:} ``\textit{spent 5 days at excellence at Punta Cana, most of the people who work at the hotel were very pleasant \ldots }'' \\
Sentiment snippet for Service aspect via the lowest sentiment score:
\begin{itemize}
\item \textit{``internet service was not available in the room and barely in the lobby area''} \textbf{[Service, -2.89]}
\end{itemize}
\vspace{0.3cm}
\noindent
\textbf{Review 2, Overall 1 star:} ``\textit{I do not know where to start. the roaches in the room, the rude waiters, bartenders, front desk, the dead flies that stayed on our friends' mirror the entire stay, the average at best food \ldots }'' \\
Sentiment snippet for Location and Cleanliness aspects via the highest sentiment score:
\begin{itemize}
\item \textit{``the beach was fabulous''} \textbf{[Location, 5.99]}
\item \textit{``the resort itself, décor, pool, beach access was great''} \textbf{[Cleanliness, 5.90]}
\end{itemize}
\vspace{0.3cm}
\noindent
\textbf{Review 3, Overall 5 stars with 3 stars in Location aspect:} ``\textit{Was awesome. my wife and I traveled to excellence 11/20-11/26 and had a great time \ldots}'' \\
Sentiment snippet for Location aspect via the lowest sentiment score:
\begin{itemize}
\item \textit{``the worst part about this resort is the drive there and back, the roads are terrible and it is over an hour''} \textbf{[Location, -1.53]}
\end{itemize}
\vspace{0.3cm}
\noindent
\textbf{Review 4, Overall 4 stars with 2.5 stars in Palate aspect:} ``\textit{A: Pours a clear yellow with a mild white head, good retention \ldots}'' \\
Sentiment snippet by extracting the only Palate sentence:
\begin{itemize}
\item \textit{``M: Very light-bodied, watery, light base beer for sure.''} \textbf{[Palate, -0.96]}
\end{itemize}
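The extractions above follow a single rule: for each sentence, keep its dominant latent aspect and its sentiment score, then report the extreme-scoring sentence for the aspect of interest. A minimal sketch (the triple layout is our own illustration of the model's per-sentence outputs):

```python
def extract_snippet(sentences, aspect, lowest=True):
    """Return the sentence with the given dominant aspect and the most
    extreme sentiment score (lowest by default, highest if lowest=False).

    sentences: (text, dominant_aspect, sentiment_score) triples.
    """
    candidates = [s for s in sentences if s[1] == aspect]
    if not candidates:
        return None
    pick = min if lowest else max
    return pick(candidates, key=lambda s: s[2])

review = [
    ("the beach was fabulous", "Location", 5.99),
    ("internet service was not available in the room", "Service", -2.89),
    ("most of the people who work at the hotel were very pleasant", "Service", 4.2),
]
print(extract_snippet(review, "Service"))                 # the negative Service sentence
print(extract_snippet(review, "Location", lowest=False))  # the positive Location sentence
```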
\section{Conclusion}
\label{conclusion}
In this paper, we presented a novel add-on framework called the sentiment-aspect attribution module (SAAM) that can be combined with common deep learning architectures to solve the problem of multi-aspect sentiment analysis. The proposed SAAM addresses the token-to-doc connection bottleneck problem using an intuitive and expressive latent sentiment-aspect attribution (LSAA) process. Furthermore, the LSAA process also facilitates fine-grained sentiment analysis and summarization. Two classification variants and one regression variant of SAAM were demonstrated and tested on both CNN and RNN based networks. Experimental results on real-world hotel and beer review datasets demonstrated significant performance improvements over the original base networks. Lastly, we also demonstrated the potential of using sentence-level latent features generated by SAAM for aspect-specific or sentiment-specific snippet extraction. We recognize there is considerable room for improvement in this iteration of SAAM; nevertheless, we believe this work presents a fascinating new angle on multi-labelled document classification problems.
\vspace{0.3cm}
\noindent\textbf{Acknowledgement}
\noindent\textit{Research was supported in part by grants NSF 1838147 and ARO W911NF-20-1-0254. The views and conclusions contained in this document are those of the authors and not of the sponsors.}
\bibliographystyle{IEEEtran}
The Gospel of Sri Ramakrishna as translated by Swami Nikhilananda offers the reader a penetrating view into the spiritual wisdom of India.
G.A. Williamson's translation puts clarity first. Tedious repetitions have gone, sentences and sections are kept short, and clear editorial headings signpost the reader.
The Hobbit and The Lord of the Rings in their definitive text settings complete with maps and cover illustrations by the celebrated artist Alan Lee.
A major work on the practice of yoga and meditation. Learn how you can control your mind and achieve inner freedom and peace through methods taught for over 2,000 years.
For five hundred years, this gentle book, filled with the spirit of the love of God, has brought understanding and comfort to millions of readers in over fifty languages.
One of the most celebrated books on mystical theology in existence, it is the most sublime and mature of Teresa's works.
In this skillful translation, Herbert Guenther offers English-speaking readers Gampopa's comprehensive and authoritative exposition of the stages of the Buddhist path.
\section{Introduction}
Asymmetric exclusion models with a fixed
\cite{schutz,Derrida-92,Derrida-93} and typically large number of
lattice sites have been the subject of much recent theoretical
attention \cite{Sandow-94,Kolomeisky-98,Chou-04,Lak-03,
Shaw-03,Evans-94,Lahiri-97,Lahiri-00,Naimi-05,Evans-03,Frey-03,Frey-04}. Biophysical
applications and new fundamental understanding of non-equilibrium
steady-states (NESS) have motivated many extensions of the simple
totally asymmetric exclusion process (TASEP) with open
boundaries. These include partially asymmetric models, where particles
can hop backward \cite{Sandow-94}, exclusion processes with nonuniform
hopping rates \cite{Kolomeisky-98,Chou-04,Lakatos-06}, exclusion among
particles of arbitrary size \cite{Lak-03,Shaw-03,dong}, multispecies
exclusion processes \cite{Evans-94, Lahiri-97,Lahiri-00,Naimi-05},
multichannel exclusion processes \cite{junction} and exclusion
processes with Langmuir type adsorption and desorption kinetics
\cite{Evans-03,Frey-03,Frey-04}. All of these studies have considered
open, well-defined boundaries, where the length of the lattice is
fixed. TASEP models with one open and one closed boundary have also
been considered \cite{Klumpp-05}.
However, applications may arise where the length of the system is
dynamically varying. The system size may vary because a single
particle pushes against a boundary-defining wall. One example is
helicase-induced opening of replication forks in DNA processing
\cite{Betterton-03}. Here, the moving replication fork defines a
moving boundary of the system. Examples of variable-system size
exclusion processes that involve multiple motor particles include mRNA
translation in the presence of hairpins in the mRNA, and molecular
motors processing on elongating actin filaments. Ribosomes that
process along mRNA (in the process of protein synthesis) during
translation \cite{MacDonald-68,Chou-04} often encounter a hairpin and
the position at which the hairpin starts represents a wall over which
the processing ribosomes cannot pass. The detachment rates of the
ribosomes and the tightness of the hairpin may determine if the
ribosomes can translate the mRNA through the hairpin sequences. Actin
polymerization at the leading edge of filopodia also seems to be
mediated by processing molecular motors that may carry actin assembly
components \cite{mitchinson,traffic}. The motors detach, and possibly
attach, anywhere along the growing actin filament
\cite{Klumpp-04}. The depolymerization of the leading tip may be
limited or enhanced by the presence of a motor or other actin
associated proteins \cite{purich,julicher}. Finally, a model of a
dynamically extending exclusion process without Langmuir kinetics has
recently been studied \cite{HYPHAE2}. This model has been applied to
filamentous hyphae growth in fungi \cite{HYPHAE}.
With the above applications in mind, we consider a TASEP with a
dynamically varying length. Specifically, we analyze a many-particle
asymmetric exclusion process with a fixed open boundary on the left, a
fluctuating boundary on the right, and Langmuir kinetics. The
particles have a fixed injection site and can adsorb and desorb. A
wall with an intrinsic leftward drift (representing {\it e.g.}, a
hairpin which energetically favors spontaneous closing or the barbed
end of an actin filament that prefers depolymerization) prevents the
passage of particles. The particles advance and provide a pressure
against the wall. For certain attachment/detachment and wall hopping
rates, the system reaches a NESS in which the statistics of the wall
position are stationary. For other values of the kinetic parameters,
no time-independent mean wall position exists. The wall will either
drift steadily towards the particle injection site and fall off the
lattice, or move indefinitely away from the injection site,
continuously increasing the size of the system. The specific details
of the stochastic process are shown in Figure \ref{FIG1}. Particles
are injected into the first lattice site with rate $\alpha$ provided it is
empty. In the interior of the lattice, each particle moves forward
with rate $p$ only if the site ahead of it is unoccupied. Particle
attachment and detachment occur with rate $k_{+}$ and $k_{-}$,
respectively, throughout the lattice.
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.48in]{Fig1.eps}
\end{center}
\vspace{-4mm}
\caption{A totally asymmetric exclusion process bounded by a
fluctuating wall. Particles are injected onto the leftmost site with
rate $\alpha$, and move to the right with rate $p$. In the interior,
particles detach and adsorb with rates $k_{-}$ and $k_{+}$,
respectively where $k_{\pm} \ll p$. The lattice is bounded on the
right by a fluctuating wall with intrinsic hopping rates $w_+$ and
$w_-$, where $w_+<w_-$.}
\label{FIG1}
\end{figure}
The lattice length is not fixed, and $N$ denotes the position of the
particle-confining wall that hops forward with rate $w_+$, and
backward with rate $w_-$ provided there is no particle to its
immediate left. The particle occupation at each site, $1\leq i \leq
N-1$, left of the wall is represented by the occupation variable
$\sigma_{i} \in \{0,1\}$. If $w_{-} \leq w_{+}$, the wall will move
indefinitely away from the injection site. In order to prevent the
wall from always escaping to infinity, we consider the more
interesting case of an intrinsic leftward drift described by $w_{-}>
w_{+}$.
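These rules translate directly into a kinetic Monte Carlo scheme. The sketch below is our own illustration (not the code used for the figures in this paper): it enumerates every allowed event with its rate and selects one with probability proportional to that rate.

```python
import random

def simulate(alpha, p, k_plus, k_minus, w_plus, w_minus,
             N0=50, steps=20000, seed=0):
    """Kinetic Monte Carlo for the TASEP bounded by a fluctuating wall.

    occ holds the occupied sites 1 <= i <= N-1; the wall sits at site N.
    Returns the final wall position and occupancy dictionary.
    """
    rng = random.Random(seed)
    N = N0
    occ = {}                                    # site -> 1 if occupied
    for _ in range(steps):
        events = []                             # (rate, (kind, site)) pairs
        if 1 not in occ:
            events.append((alpha, ("inject", 0)))
        for i in list(occ):
            if i + 1 < N and i + 1 not in occ:  # blocked by particle or wall
                events.append((p, ("hop", i)))
            events.append((k_minus, ("detach", i)))
        if k_plus > 0:                          # Langmuir adsorption on empty sites
            for i in range(1, N):
                if i not in occ:
                    events.append((k_plus, ("attach", i)))
        events.append((w_plus, ("wall+", 0)))
        if N - 1 not in occ:                    # backward hop blocked by a particle
            events.append((w_minus, ("wall-", 0)))
        x = rng.uniform(0.0, sum(r for r, _ in events))
        for r, (kind, i) in events:
            x -= r
            if x <= 0.0:
                break
        if kind == "inject":
            occ[1] = 1
        elif kind == "hop":
            del occ[i]; occ[i + 1] = 1
        elif kind == "detach":
            del occ[i]
        elif kind == "attach":
            occ[i] = 1
        elif kind == "wall+":
            N += 1
        else:                                   # "wall-"
            N -= 1
            if N <= 1:                          # wall fell off the lattice
                break
    return N, occ
```

Because the wall rates are much smaller than $p$, the wall moves only rarely on the scale of particle hops, consistent with the quasi-static limit assumed in the next section.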
The wall position $N$ is not fixed (even at steady state), but rather,
is determined by the intrinsic wall hopping rates, and the
exclusionary interactions between the wall and the lattice
particles. Our analysis is aimed at understanding how the wall
dynamics depend on the parameters $\alpha, k_{\pm}, w_{\pm}, p$. In
the next section, we derive relations for the distribution functions
of the wall. In steady-state, these relations constrain the particle
density at the wall. In Section \ref{SIGMA}, we use mean field theory
(MFT) to solve for the density profile, and show that the density
profile obtained using mean field theory is inaccurate near the wall.
Since quantitative prediction of the wall dynamics will require
accurate determination of the particle densities near the wall, in
Section \ref{sec:FS}, we develop a moving-frame finite segment mean
field approach to accurately solve for the density profile near the
wall. The existence of a steady-state solution and the dependencies
of the mean wall position $\langle N \rangle$ on the problem
parameters are explored and plotted in Section \ref{RESULTS}.
\section{Wall dynamics}\label{WALL}
The net drift of the wall is the difference between its forward and
effective backward hopping rates. The effective backward hopping rate
depends on both the intrinsic backward hopping rate $w_-$, and on the
occupancy of the site immediately to the left of the wall since a
particle there will block the wall's backward motion. The wall's
rightward hopping is never impeded. The probability of finding a
particle immediately to the wall's left varies with its position,
thus, the wall dynamics are position-dependent. Define $Q_N(t)$ as
the probability that the wall is at position $N$ at time $t$, and
$Q'_N(t)$ as the joint probability that the wall is at position $N$ at
time $t$ \emph{and} the site just before the wall is empty. The wall
dynamics obey
\begin{equation}
\dot{Q}_{N}(t) = w_{-}Q'_{N+1}-w_{-}Q'_{N}-w_{+}Q_{N} + w_{+}Q_{N-1},
\label{EFFECTIVEQ}
\end{equation}
and the moments of the wall position can be formally expressed as
\begin{equation}
{\partial \over \partial t} \langle N^k\rangle =
\sum_{N=0}^{\infty}(w_{-}Q_{N}'-w_{+}Q_{N})\sum_{j=0}^{k-1}(-1)^{k-j}
{k \choose j} N^{j}.
\label{NDOT}
\end{equation}
Although one cannot find $Q_N$ or $Q'_N$ explicitly without solving
the full exclusion problem, we can take $k=1$ in (\ref{NDOT}) to determine the
mean wall velocity via
\begin{equation}
{\partial \over \partial t} \langle N \rangle = -w_{-}
\sum_{N=0}^{\infty} Q'_N+w_{+}.
\label{meanV}
\end{equation}
If the mean wall position is time-independent, $\sum_{N=0}^{\infty}
Q'_N = w_{+}/w_{-}$, and the expected occupancy of the site
immediately preceding the wall is
\begin{equation}
\langle \sigma_{N-1}\rangle =1-\sum_{N=0}^{\infty}
Q'_N=1-\frac{w_{+}}{w_{-}}. \label{BeforeWall}
\end{equation}
We show in section \ref{unstable} that there are some parameter
regimes in which (\ref{BeforeWall}) cannot be satisfied. For these
parameter values, there exists no time-independent mean wall
position. However, one can still use (\ref{meanV}) to determine the
relevant mean wall dynamics. The preceding analysis suggests that it
may be more natural to define sites near the wall by their position
relative to the wall than by their absolute position on the lattice.
To avoid working in both frames of reference, in the next section, we
will begin by considering the limit in which the wall hopping rates
are small compared to other rates in the problem ($p$, $k_+$, and
$k_-$). In this limit, the wall dynamics are slow compared to the
particle dynamics, and we will assume that the wall frame is
stationary.
\section{Mean Field Solution of Density Profile}\label{SIGMA}
In the $w_{\pm}/k_{\pm}, w_{\pm}/p \rightarrow 0$ limit, we expect the
wall to be nearly stationary. Mean field equations can be derived by
ensemble-averaging the rate equations for the occupation variables
$\sigma_{i}$, and ignoring correlations ($\langle
\sigma_{i}\sigma_{j}\rangle \approx
\langle\sigma_{i}\rangle\langle\sigma_{j}\rangle$). Upon defining the
mean occupation $s_{i} \equiv \langle\sigma_{i}\rangle$, the mean field
equations for a {\it fixed} ($w_{\pm} = 0$) wall system in NESS are
\begin{eqnarray}
\displaystyle {\mbox{d} s_{i} \over \mbox{d} t} =
-s_{i}(1-s_{i+1})+s_{i-1}(1-s_{i}) - k_{-}s_{i} \qquad \nonumber
\\[2pt] +k_{+}(1-s_{i}) = 0, \qquad \label{MFTa} \\[13pt]
\displaystyle {\mbox{d} s_{1} \over \mbox{d} t} =
\alpha(1-s_{1})-k_{-}s_{1}-s_{1}(1-s_{2}) \qquad \,\,\, \nonumber \\[2pt]
+k_{+}(1-s_{1})= 0, \qquad\qquad \,\,\label{MFTb} \\[13pt]
\displaystyle {\mbox{d} s_{N-1} \over \mbox{d} t} = -k_{-}s_{N-1}
+k_{+}(1-s_{N-1}) \qquad \qquad \qquad \, \nonumber \\[2pt]\:
+s_{N-2}(1-s_{N-1}) = 0. \qquad \quad
\label{MFTc}
\end{eqnarray}
where the adsorption, desorption and injection rates have been
normalized by $p$ and time has been rescaled by $p^{-1}$ -- hence,
$k_{\pm}$, $\alpha$ and $t$ in (\ref{MFTa}-\ref{MFTc}) are dimensionless.
However, in order to use condition (\ref{BeforeWall}), we need
expressions for particle density at sites defined by their distance
from the wall. In the fluctuating frame of the wall, we use the notation
$\tilde{s}_j \equiv s_{N-j}$. Upon rewriting (\ref{MFTa}-\ref{MFTc})
in the wall frame, we find
\begin{widetext}
\begin{eqnarray}
\displaystyle \frac{\mbox{d} \tilde{s}_j}{\mbox{d} t} &=&\displaystyle
-(1+w_{+})\tilde{s}_j (1-\tilde{s}_{j-1}) + (1+(1-\tilde{s}_{1})
w_{-}) \tilde{s}_{j+1} (1-\tilde{s}_{j}) - k_{-}\tilde{s}_{j} +
k_{+}(1-\tilde{s}_{j}) \nonumber \\ \: & \: & \displaystyle
\hspace{4.5cm} -w_-(1-\tilde{s}_1)(1-\tilde{s}_{j+1})\tilde{s}_j +
w_{+}(1-\tilde{s}_{j})\tilde{s}_{j-1} = 0, \label{MFTEQNa} \\[13pt]
\displaystyle {\mbox{d} \tilde{s}_{N-1} \over \mbox{d} t} &=& \displaystyle
\alpha (1-\tilde{s}_{N-1}) -k_- \tilde{s}_{N-1} -
\tilde{s}_{N-1}(1-\tilde{s}_{N-2}) +k_{+}(1-\tilde{s}_{N-1})= 0, \label{MFTEQNb} \\[13pt]
\displaystyle {\mbox{d} \tilde{s}_1 \over \mbox{d} t} &=& \displaystyle
-k_{-}\tilde{s}_{1}+k_{+}(1-\tilde{s}_{1})+ (1+w_{-})\tilde{s}_{2}
(1-\tilde{s}_{1}) - \tilde{s}_{1} w_{+} = 0. \label{MFTEQNc}
\end{eqnarray}
\end{widetext}
As expected, (\ref{MFTEQNa}) and (\ref{MFTEQNc}) reduce to
(\ref{MFTa}) and (\ref{MFTc}) in the $w_{\pm} =0$ limit.
If the position of the wall were fixed, we could simply use the
iteration given by (\ref{MFTa}), along with boundary conditions
(\ref{MFTb}) and (\ref{MFTc}) to solve for the density profile $s_i$.
Now consider a moving wall problem. Because $\langle N\rangle$ is
undetermined, we need three conditions to solve (\ref{MFTEQNa}). In
addition to the two boundary conditions (\ref{MFTEQNb}) and
(\ref{MFTEQNc}), we require a third condition,
$\tilde{s}_1=1-w_{+}/w_{-}$, to determine $\langle N \rangle$. This
third boundary condition fixes $\tilde{s}_1$; $\tilde{s}_2$ is set by
(\ref{MFTEQNc}), and we can use (\ref{MFTEQNa}) to iterate forward in
$j$ as many times as required toward the injection site, until
(\ref{MFTEQNb}) is satisfied. The number of iterations required to
satisfy (\ref{MFTEQNb}) determines the mean position, $\langle
N\rangle$, of the left boundary, and hence the NESS size reached by the
system. Although (\ref{MFTEQNa}) was derived in the wall frame, the
resulting density profile is nearly identical to a stationary frame
profile derived from (\ref{MFTa}) when $s_{i}$ is not varying rapidly
with site $i$. See the Appendix for further discussion.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.9in]{Fig2a.eps}\\
\vspace{3mm}
\includegraphics[height=1.9in]{Fig2b.eps}
\end{center}
\vspace{-4mm}
\caption{(Color online) A comparison of density profiles derived from Monte-Carlo
simulation and MFT. (a) The MFT and MC density profiles for conserved
particle TASEP are compared ($\alpha = 0.6$, $\beta = 0.9$, $N =
10,000$). Despite differences between MFT and MC in the boundary
layers, the particle density at the ends ($i = 1$, $i = 10000$) are
matched through particle conservation. Insets show the left and right
boundary layers in detail. (b) For a TASEP with Langmuir kinetics
(with $k_- = 0.01$, $N=300$, $p = 1$, and $k_+, w_+, w_- = 0$), the
MFT density profile can be appreciably different from the MC results,
especially near the boundaries.}
\label{DIFFERENCE}
\end{figure}
For standard particle-conserving TASEP models, away from boundaries,
MFT predicts the particle densities to a very high
accuracy \cite{Derrida-92}. A conservation law for the particle density
can be used to fix the end densities to their exact values so that the
MFT also performs well near boundaries \cite{Lakatos-06}. In Figure
\ref{DIFFERENCE}a, we plot the density profiles from Monte-Carlo
simulations and mean field recursion relations for the simple TASEP
($k_{\pm} = 0$) with a fixed number of sites, $N=10000$. Differences
in the density profiles are evident in the insets.
Because we include particle adsorption and desorption through Langmuir
kinetics, there is no conservation law for the particle density. In
this case, the boundary densities are not fixed and we see in Fig.
\ref{DIFFERENCE}b that simple mean field calculations of the boundary
density can differ appreciably from the values found from Monte-Carlo
simulations. However, MFT still matches simulation results in the bulk
where $s_i$ varies slowly. In the following section, we use an
approach that couples explicit enumeration within a finite segment of
sites to the mean field results accurate outside the segment. This
finite segment mean field theory (FSMFT) includes particle
correlations within a segment of sites adjacent to the wall.
\section{Finite Segment Method}\label{sec:FS}
We have shown that mean field theory does a poor job of predicting the
profile $s_{i\approx N-1}$ near the wall when there is a boundary
layer. To more accurately compute the particle density in this
region, we will solve the Master equation for a finite segment of $m$
sites preceding the wall. First, we introduce some notation to
explain the mechanics of the finite-segment mean field theory (FSMFT).
For the binary string $(\sigma_{N-m}, ..., \sigma_{N-2},
\sigma_{N-1})$, corresponding to the occupancy of sites in the finite
segment we define the \emph{state} of the segment as the base ten
value of the string. For example, for $m=2$ sites just left of the
wall, we have four possible combinations for the occupancies (00),
(01), (10), and (11) corresponding to states $i=0,1,2, 3$,
respectively. If $P_{i}$ is the probability that the finite segment
configuration is in state $i$, the Master equation is
$\partial_{t}P_{i} = M_{ij}P_{j}$ where $M_{ij}$ is the transition
matrix. In the $m=2$ case,
\begin{widetext}
\begin{equation}
{\bf M} = \left(\begin{array}{cccc}
-(1+w_{-})s^*-2k_{+} & k_{-} & k_{-}+w_{+} & 0 \\[13pt]
k_{+} & -(w_{+}+k_{-}+k_{+}) - s^* & 1 + w_{-}(1-s^*) & k_{-}\\[13pt]
(1+w_{-})s^{*}+k_{+} & w_{+} & -(w_{+}+w_{-}+1+k_{-}+k_{+}) & k_{-}+w_{+} \\[13pt]
0 & k_{+}+s^{*} & k_{+}+w_{-}s^{*} & -2k_{-}-w_{+}\end{array} \right)
\end{equation}
\end{widetext}
where $s^*\equiv \langle \sigma_{N-m-1}\rangle$ is the mean occupancy
in the lattice site just to the left of the segment. The mean
occupancies in the finite segment can be calculated from ${\bf M}$ in
the following way. First, the eigenvector, ${\bf P}^{(0)}$,
corresponding to the eigenvalue zero is computed. The vector ${\bf
P}^{(0)}$, normalized such that $\sum_{i=0}^{2^{m}-1} P_{i}^{(0)} = 1$,
corresponds to the stationary probability distribution, {\it i.e.},
$\partial_t {\bf P}^{(0)} = 0$. Let $\textbf{v}$ be an $m \times
2^{m}$ matrix whose columns are the ordered state vectors. The
mean densities are then given by
$(s_{N-m},\ldots,s_{N-2},s_{N-1})^{T} = {\bf v}{\bf P}^{(0)}$.
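For $m=2$ this computation fits in a few lines. The sketch below (our own, in pure Python) replaces one equation of ${\bf M}{\bf P}=0$ with the normalization constraint and solves the resulting linear system for the stationary distribution; the density at the site nearest the wall is then $s_{N-1}=P_{(01)}+P_{(11)}$.

```python
def transition_matrix(s, kp, km, wp, wm):
    """Transition matrix M for the m = 2 segment (transcribed from the text);
    states 0..3 = (00), (01), (10), (11); s is the mean occupancy s* just
    left of the segment."""
    return [
        [-(1 + wm) * s - 2 * kp, km,                  km + wp,                  0.0],
        [kp,                     -(wp + km + kp) - s, 1 + wm * (1 - s),         km],
        [(1 + wm) * s + kp,      wp,                  -(wp + wm + 1 + km + kp), km + wp],
        [0.0,                    kp + s,              kp + wm * s,              -2 * km - wp],
    ]

def stationary(M):
    """Solve M P = 0 with sum(P) = 1: replace the last equation by the
    normalization row, then Gauss-Jordan elimination on the 4 x 4 system."""
    A = [row[:] + [0.0] for row in M]
    A[3] = [1.0, 1.0, 1.0, 1.0, 1.0]
    for c in range(4):
        piv = max(range(c, 4), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(4):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][4] / A[i][i] for i in range(4)]

def F(s, kp, km, wp, wm):
    """Relation (FSM-relation) for m = 2: density at the site nearest the
    wall, s_{N-1} = P(01) + P(11)."""
    P = stationary(transition_matrix(s, kp, km, wp, wm))
    return P[1] + P[3]

print(F(0.1, 0.0, 0.01, 0.005, 0.01))  # increases monotonically with s*
```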
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.9in]{Fig3a.eps}\\
\vspace{4mm}
\includegraphics[height=1.9in]{Fig3b.eps}
\end{center}
\vspace{-4mm}
\caption{(a) $F(s^*)$ is plotted with parameter values $k_+ = 0$, $k_-
= 0.01$, $w_+ = 0.005$, $w_- = 0.01$. (b) The finite segment method
predicts the boundary layer profile significantly better than mean
field theory. Here, the final ten sites of a fixed-wall profile
($N=300$) are plotted. The parameters $k_-=0.01$, $k_+ = 0$, and
$\alpha=1$ were used.}
\label{SEGMENT}
\end{figure}
For every value of $s^*$, FSMFT can be used to compute the
mean densities $s_{N-m}, ..., s_{N-2}, s_{N-1}$. In particular, it
establishes a one-to-one correspondence between $s^*$ and $s_{N-1}$:
\begin{equation}
s_{N-1} = F(s^*;w_{\pm},k_{\pm}). \label{FSM-relation}
\end{equation}
\noindent Our calculations indicate that $F(s^*)$ is always a monotonically
increasing function of $s^*$, as shown in Figure \ref{SEGMENT}a.
Comparing the density profiles near a {\it fixed} wall
($w_{+}=w_{-}=0$), Figure \ref{SEGMENT}b shows that using FSMFT (with
$m=5$) significantly improves our prediction of the particle density
near the wall over that obtained using simple ($m=1$) MFT. To
calculate $\langle N \rangle$ for a fluctuating wall ($w_{-} > w_{+}
>0$), we first solve for the profile of a segment of sites adjacent to
the wall. From (\ref{BeforeWall}), when the wall attains a
steady-state position, $s_{N-1}=1-w_{+}/w_{-}$. Using
(\ref{FSM-relation}), we find the value of $s^*$ satisfying
$1-w_{+}/w_{-} = F(s^*)$. Defining this particular value of $s^*$ as
$s^*_{eq}$, we then use the recursion relation given by
(\ref{MFTEQNa}) to solve the density profile to the left of the finite
segment. Using the values of $s_{N-m}$ and $s^*_{eq} \equiv
s_{N-m-1}$ from the FSMFT as starting conditions for the recursion
equations, we iterate to the left until the left boundary condition
(\ref{MFTEQNb}) is satisfied. In summary, the finite segment mean
field theory (FSMFT) is implemented by the following steps:
\vspace{4mm}
$\bullet$ For a given $s^{*}$, solve for the normalized eigenvector
corresponding to the zero eigenvalue of the $2^{m}\times 2^{m}$
transition matrix $M_{ij}(s^{*})$.
\vspace{3mm}
$\bullet$ From the zero eigenvector, express the mean density $s_{N-1}$
at the site nearest the wall as a function of $s^{*}$, giving relation
(\ref{FSM-relation}).
\vspace{3mm}
$\bullet$ For a static wall NESS, set $s_{N-1}=1-w_{+}/w_{-}$ and find
$s_{eq}^{*}$ that yields zero net wall drift by using $1-w_{+}/w_{-} =
F(s_{eq}^*;w_{\pm},k_{\pm})$.
\vspace{3mm}
$\bullet$ Starting with $\tilde{s}_{m+1} = s_{eq}^{*}$ (and
$\tilde{s}_{m} = s_{N-m}$) iterate using the simple mean field
equation (\ref{MFTEQNa}) until equation (\ref{MFTEQNb}) is satisfied.
\vspace{3mm}
$\bullet$ The number of iterations required determines the mean wall
position ($\langle N\rangle \approx$ number of iterations $+m+2$) as a
function of the rate parameters through the starting value
$s_{eq}^{*}$.
\vspace{4mm}
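The third step can exploit the monotonicity of $F$ (Figure \ref{SEGMENT}a) and solve $1-w_{+}/w_{-}=F(s^{*}_{eq})$ by bisection. A sketch (the toy $F$ below is only a monotone stand-in for the actual finite-segment map):

```python
def solve_s_eq(F, w_plus, w_minus, lo=1e-9, hi=1.0, iters=100):
    """Bisect for s*_eq satisfying F(s*_eq) = 1 - w_+/w_-, relying on F
    being monotonically increasing. F may be any callable."""
    target = 1.0 - w_plus / w_minus
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Monotone stand-in for illustration only (a real F comes from the
# 2^m x 2^m zero-eigenvector computation):
F_toy = lambda s: s ** 0.5
s_eq = solve_s_eq(F_toy, 0.001, 0.01)
print(s_eq)  # about 0.81, since sqrt(0.81) = 0.9 = 1 - w_+/w_-
```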
We expect the predicted results from a moving-frame FSMFT to be in
good agreement with those from MC simulations. This is because in
regions where $s$ is slowly varying, the mean field equations
describing the density profile in the wall frame and in the lab frame
yield nearly identical profiles. This can be seen from the continuum
equations, as will be discussed in the Appendix. In these regions,
accurate estimates of the mean wall position can also be obtained
using the continuum approximations to (\ref{MFTEQNa}), provided
$\langle N \rangle$ is large. When state enumeration of a larger
segment is used, more of the correlations within the density boundary
layer are taken into account and more accurate results are
expected. Provided most of the region with large density gradients is
captured by the finite segment, the results will be very accurate.
The incremental accuracy achieved as larger segments are used has been
discussed in a different, but related system \cite{Chou-04}. In the
subsequent analyses, we use a five-site ($m=5$) FSMFT -- generating a
$2^{5}\times 2^{5}$ eigenvalue problem in the process -- and
self-consistently solve for the densities away from the boundary
layer. This choice of segment size is sufficient to yield accurate
results for all parameters explored.
\section{Results and Discussion} \label{RESULTS}
\subsection{Time-Independent Mean Wall Positions} \label{NO-KON}
We first consider regimes in which the wall acquires a
static mean position in NESS. Using Monte Carlo simulations
and FSMFT, we study the dependence of the mean wall position on the
injection rate $\alpha$, particle adsorption and desorption rates
$k_{\pm}$, and the wall hopping rates $w_{\pm}$.
We can use analytic solutions of the bulk continuum equations in order
to understand parameter dependencies of our model. Although mean
field theory poorly describes our system in the boundary layers where
the profile varies rapidly, away from boundary layers, simple MFT is
accurate. In these regions, to guide our analysis, we will use
the continuum limit of the mean field equations. We define
$\varepsilon\equiv 1/N_0$ where $N_0$ is a characteristic number of
lattice sites (to be derived below) and $x \equiv (i-1)/N_0 $ as a
relative position along the lattice. As shown in the Appendix, the
NESS density profile obeys
\begin{equation}
\varepsilon(2s-1)s'(x)+k_+-(k_+ + k_-)s + O(\varepsilon^2) = 0
\label{eq:bulk1}
\end{equation}
in both the lab and wall frames of reference. Upon integrating, we
obtain the implicit equation
\begin{equation}
\begin{array}{l}
\displaystyle \frac{(k_+ -k_-)\ln\vert k_{+}-(k_{+}+k_{-})s\vert
-2k_{+} + 2(k_{+} + k_{-})s}{(k_{+}+k_{-})^2} \\[13pt]
\hspace{5cm} \displaystyle ={x\over \varepsilon}+C,
\end{array}
\label{bulk}
\end{equation}
where $C$ is a constant of integration. In the continuum description,
the entrance site is at position $x = 0$, the wall's position is $L$,
and the mean wall position is $\langle L \rangle \equiv \varepsilon
\langle N \rangle$. We can use (\ref{bulk}) to understand the
behavior of the mean wall position $\langle N \rangle$. First, note
that the left hand side of (\ref{bulk}) scales as
$(k_++k_-)^{-1}$. Since $\varepsilon^{-1} \equiv N_0$ scales as
$(k_-+k_+)^{-1}$, we define $N_0 \equiv (k_+ + k_-)^{-1}$. For a
continuum description to be useful, $N_0$ must be large, so $k_{\pm}$
must be small.
\begin{figure}[t]
\begin{center}
\includegraphics[height=1.8in]{Fig4a.eps}\\
\vspace{4mm}
\includegraphics[height=1.8in]{Fig4b.eps}
\end{center}
\vspace{-4mm}
\caption{(Color online) Simulations were performed with $w_+=0.001$, $w_-=0.01$,
$k_-=0.01$, and different values of $\alpha$. In (a), profiles for
three simulations are plotted with their mean wall positions aligned.
When $\alpha \lesssim 0.5$, $s(0)\approx \alpha$. When $\alpha \gtrsim 0.5$, $s_B$
becomes multivalued and there is a boundary layer on the left. Within
the boundary layer, a small change in the position results in a large
change in the particle density. Thus, in (b), where $\langle N
\rangle$ is plotted as a function of $\alpha$, large changes in
$\alpha$ result in small changes in $\langle N \rangle$ when
$\alpha\gtrsim 0.5$. Also shown is the prediction from (\ref{meanN}) with
$s_{eq}^{*} = 0.026$, determined by FSMFT.} \label{fig:Alpha}
\end{figure}
Equation (\ref{bulk}) gives an implicit formula for the bulk density,
which we denote by $s_B(x)$, in terms of the adsorption and desorption
rates, $k_{\pm}$ and the integration constant $C$. The injection rate
$\alpha$ determines $C$, and along with the wall hopping rates
$w_{\pm}$, determines the mean wall position. As shown in the Appendix,
the solution near the left boundary varies slowly when $\alpha \lesssim
0.5$. Furthermore, if $k_- \ll \alpha$, we can approximate $s_1
\approx s_2$ in (\ref{MFTb}) to conclude that $s(0) \approx \alpha$.
This simplified condition can be used to determine $C$ in
(\ref{bulk}).
When $\alpha \gtrsim 0.5$, a boundary layer arises on the left (this can be
seen in Fig. \ref{fig:Alpha}a). In this regime, $s(0)$ can no longer
be approximated as $\alpha$, and $s_B(x)$ becomes invalid near the
injection site. While $s_B(x)$ is still a good approximation to the
density profile outside the boundary layer (where $s\lesssim 0.5$), there is
no straightforward, analytic way to calculate $C$ when $\alpha\gtrsim 0.5$.
In Fig. \ref{fig:Alpha}a, when $\alpha = 1$, $C$ is used as a single
fitting parameter and is determined empirically such that in the bulk,
$s_B(x)$ approximates the density profile obtained using MC
simulations. The mean wall position, $\langle L \rangle$, is found
through the relation $s_B(\langle L \rangle-\varepsilon(m+1)) =
s^*_{eq}$, where $s^*_{eq}$ is the value of $s^*$, found using an
$m$-site FSMFT, that yields zero net wall drift.
Figure \ref{fig:Alpha}a shows results from MC simulations, shifted so
that the mean wall positions are aligned at $\langle N \rangle = 225$,
which is the mean wall position when $\alpha = 1$. While the density
profile has a sharp boundary layer at the wall in the wall frame, in
the lab frame, the boundary layer is smeared out due to wall
fluctuations. This results in the broad peaks centered on the mean
wall position seen in Figure \ref{fig:Alpha}a. The outer solution
$s_B(i/N_0)$ with $N_0 = (k_+ + k_-)^{-1}$ is shown by the dotted
curve. The close agreement between the MC data and $s_B(x)$ suggests
that dropping the $O(\varepsilon^2)$ term in (\ref{eq:bulk1}) to obtain
$s_B(x)$ produces an excellent approximation to the mean particle
density, provided $\alpha \lesssim 0.5$. Note that $\alpha$, through $C$,
simply shifts $s_B(x)$ to the left or right; thus, when we vary only
$\alpha$ and plot the resulting density with the mean wall positions
aligned (as they are in \ref{fig:Alpha}a), the profiles collapse onto
the same curve.
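The Monte Carlo dynamics referenced throughout can be reproduced in outline with a simple random-sequential-update scheme. The sketch below is a minimal, assumption-laden version of the rules described in the text -- particles hop right at unit rate, adsorb at rate $k_+$, desorb at rate $k_-$, are injected at the left boundary at rate $\alpha$, and the wall hops right at rate $w_+$ and left at rate $w_-$ when the site adjacent to it is empty. Treating rates as acceptance probabilities in a random-sequential update is a crude simplification, not the authors' actual simulation algorithm.

```python
import random

def simulate_tasep_wall(alpha, k_plus, k_minus, w_plus, w_minus,
                        n_init=100, steps=50_000, seed=0):
    """Crude random-sequential-update sketch of the TASEP with
    Langmuir kinetics and a movable wall. Site 0 is the injection
    site; the wall sits just to the right of the last lattice site."""
    rng = random.Random(seed)
    occ = [0] * n_init              # occ[i] = 1 if site i holds a particle
    for _ in range(steps):
        n = len(occ)
        i = rng.randrange(n + 1)    # pick a site; i == n selects the wall
        if i == n:                  # attempted wall move
            if rng.random() < w_plus:
                occ.append(0)       # wall hops right, exposing an empty site
            elif occ and occ[-1] == 0 and rng.random() < w_minus:
                occ.pop()           # wall hops left into an empty site
        elif i == 0 and occ[0] == 0:
            if rng.random() < alpha:
                occ[0] = 1          # injection at the left boundary
        elif occ[i]:
            if i + 1 < n and occ[i + 1] == 0:
                occ[i], occ[i + 1] = 0, 1   # hop right at unit rate
            elif rng.random() < k_minus:
                occ[i] = 0          # desorption
        else:
            if rng.random() < k_plus:
                occ[i] = 1          # adsorption
    return occ
```

Measuring the wall position `len(occ)` and the time-averaged occupancies of this sketch gives qualitative, not quantitative, agreement with the profiles discussed above.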
We can also use (\ref{bulk}) to predict the mean wall position as a
function of the injection rate $\alpha$ when $\alpha$ is not too large. For
simplicity, consider $k_+ = 0$ -- the analysis for $k_+ \neq 0$ is
analogous. Using the simplified condition $s(0) = \alpha$ in
(\ref{bulk}), we have
\begin{equation}
C = {2\alpha -\ln (\alpha k_{-}) \over k_{-}}.
\label{eq:C}
\end{equation}
Now, using the relation $s^*_{eq}=s_B(\langle
L\rangle-\varepsilon(m+1))$ and (\ref{eq:C}), (\ref{bulk}) becomes
\begin{equation}
\langle N\rangle = \frac{1}{k_-} \ln \left( \frac{\alpha}{e^{2\alpha}}
\frac{e^{2 s^*_{eq}}}{s^*_{eq}}\right)+m+1.
\label{meanN}
\end{equation}
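Equation (\ref{meanN}) is straightforward to evaluate numerically. The sketch below uses the value $s^*_{eq} = 0.026$ quoted in the caption of Fig. \ref{fig:Alpha} and $m = 5$; note that $\mbox{d}\langle N\rangle/\mbox{d}\alpha = k_-^{-1}(1/\alpha - 2)$ vanishes at $\alpha = 0.5$, so the predicted $\langle N\rangle$ is stationary exactly where the approximation $s(0) \approx \alpha$ begins to fail, consistent with the observed insensitivity of the wall position for larger $\alpha$.

```python
import math

def mean_wall_position(alpha, k_minus, s_eq=0.026, m=5):
    """Mean wall position <N> from Eq. (meanN), valid for k_+ = 0
    and alpha not too large (the s(0) ~ alpha approximation)."""
    return (1.0 / k_minus) * math.log(
        (alpha / math.exp(2 * alpha)) * (math.exp(2 * s_eq) / s_eq)
    ) + m + 1

# The predicted <N> increases with alpha up to alpha = 0.5 and is
# stationary there, where d<N>/dalpha = (1/k_minus)(1/alpha - 2) = 0.
```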
The dependence of $\langle N \rangle$ on $\alpha$ is shown in Figure
\ref{fig:Alpha}b, predicted using four different methods. Simple MFT
($m=1$, dotted curve) performs poorly relative to MC simulations (open
diamonds). The results from FSMFT with $m=5$ (solid curve) agree very
well with the MC data for all values of $\alpha$. The solution of
(\ref{meanN}) (dashed curve) performs reasonably well provided $\alpha$ is
not too large. When $\alpha \gtrsim 0.5$, $s(0) = \alpha$ is a poor approximation
to (\ref{MFTb}) and the resulting prediction of $\langle N \rangle$
suffers. In fact, the slope $s_B'$ diverges when $s_B=0.5$, which can
be seen from (\ref{eq:bulk1}). When $\alpha \gtrsim 0.5$, there is a
boundary layer on the left with width $O(\sqrt{\varepsilon})$
(cf. Appendix). As a result, increases in $\alpha$ above $0.5$ will
increase the height of the boundary layer, but will not significantly
change the mean wall position, and $\langle N \rangle$ becomes
insensitive to changes in $\alpha$ \footnote[2]{In \cite{Evans-03,Frey-04},
an asymmetric exclusion process in a fixed
domain with open boundaries and Langmuir kinetics was studied.
Regimes arise in which the position of a shock in the particle density
becomes insensitive to the ejection rate $\beta$ on the right, once
$\beta >0.5$. Because these observations were made
under the assumption $k_+>k_-$, and we consider $k_->k_+$, by particle
hole symmetry, the ejection rate in these works corresponds to
the injection rate, $\alpha$ in our problem. While the authors of
\cite{Evans-03, Frey-04} find a shock position insensitive to boundary
conditions, we find an insensitive mean wall position.}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[height=1.5in]{Fig5a.eps}
\hspace{1mm}
\includegraphics[height=1.5in]{Fig5b.eps}
\hspace{1mm}
\includegraphics[height=1.5in]{Fig5c.eps}
\end{center}
\vspace{-4mm}
\caption{Boundary effects near the wall determine wall position. In
(a), $s^*_{eq}$ is determined empirically from MC simulations captured
in the wall frame and numerically using the finite segment method. In
(b), MC simulations are plotted in the lab frame. In (c), the
occupancy of the last site in the wall frame $s_{N-1} = 1-w_{+}/w_{-}$
is plotted as a function of the mean wall position $\langle N
\rangle$. The parameters $\alpha =1$, $k_-=0.01$, $k_+ = 0$, and $w_+
= 0.001$ were used.}
\label{fig:wp}
\end{figure*}
We now discuss how changes in the wall hopping rates can affect the
wall position. In Fig. \ref{fig:wp}a, for a fixed value of $w_{+}$,
one sees that an increase in $w_{-}$ increases the value of
$s^*_{eq}$. Our FSMFT predicts that given values of $w_+$ and $w_-$,
$s^*_{eq}$ must satisfy $1 - w_{+}/w_{-} = F(s^*_{eq};w_{\pm})$.
For small values of $w_{\pm}$, $F(s_{eq}^{*};w_{\pm})\approx
F(s_{eq}^{*};0) \approx 1-w_{+}/w_{-}$, suggesting that
$s^*_{eq}$ depends primarily on the ratio $w_+/w_-$, with only a weak
dependence on the individual wall hopping rates. Since
$F$ is a monotonically increasing function,
$s^*_{eq}$ increases with $w_{-}/w_{+}$.
A change in $s_{eq}^{*}$ induces a change in the mean wall position,
shown in Fig. \ref{fig:wp}b. Again, this is consistent with our theory
since $\langle L \rangle$ must satisfy $s_B(\langle L\rangle
-\varepsilon(m+1)) = s_{eq}^{*}$. In the special case $\alpha \lesssim 0.5$,
one can use (\ref{meanN}) to predict $\langle N\rangle$ directly,
given $s^*_{eq}$. When $\alpha \gtrsim 0.5$, one either has to solve the
full set of discrete MFT equations (\ref{MFTEQNa}) and (\ref{MFTEQNb})
-- or the equivalent continuum equations (\ref{CONTINUUM}) and
(\ref{BCLEFT}) -- coupled to a finite segment, to obtain $\langle
N\rangle$. Our results from solving the discrete equations are shown
in Fig. \ref{fig:wp}(c). Using simple MFT ($m=1$) without a larger
finite segment generally results in poor predictions for $\langle
N\rangle$.
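Because $F$ is monotonically increasing, the zero-drift condition $F(s^*_{eq}) = 1 - w_+/w_-$ can be solved for $s^*_{eq}$ by simple bisection once $F$ has been tabulated from the finite segment calculation. The sketch below illustrates the root-finding step only; the function `F_toy` is a purely hypothetical placeholder for the FSMFT relation, which in practice comes from solving the $m$-site segment numerically.

```python
def solve_s_eq(F, w_plus, w_minus, lo=0.0, hi=1.0, tol=1e-10):
    """Find s*_eq such that F(s*_eq) = 1 - w_+/w_- by bisection.
    F must be monotonically increasing on [lo, hi]."""
    target = 1.0 - w_plus / w_minus
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder for the FSMFT occupancy relation (illustrative only;
# the real F is computed numerically from the m-site segment).
F_toy = lambda s: s ** 0.25

s_star = solve_s_eq(F_toy, w_plus=0.001, w_minus=0.01)
```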
A more complete understanding of the wall dynamics can be garnered by
analyzing the wall fluctuations. For simplicity, we consider the
continuum description in which the wall's motion can be approximately
described by a diffusion constant $D = \varepsilon^2 w_+$ and a
position-dependent drift, $V(L)$. If one assumes that the wall
fluctuates within a harmonic ``potential,'' this drift takes the form
$V(L) = -a(L-\langle L \rangle)$, where $a \equiv -(\mbox{d} V/\mbox{d}
L)|_{\langle L \rangle}$. This approximation effectively closes
(\ref{EFFECTIVEQ}) by expressing the effects of conditional
probability, $Q_{N}'$, in terms of a drift. For $L \approx \langle
L\rangle$, the probability density of the wall's position, $Q(L)$ (the
continuum analog of $Q_N$), can be approximately found from the
solution of
\begin{equation}
\frac{\partial Q(L,t)}{\partial t} = a{\partial \over \partial L}\left[
(L-\langle L\rangle)Q(L)\right]+ D\frac{\partial^2 Q}{\partial
L^2}.
\label{eq:Diffusion}
\end{equation}
Upon imposing the normalization $\int_{-\infty}^{\infty}Q(L)dL = 1$, we find the
steady-state solution to (\ref{eq:Diffusion}):
\begin{equation}
Q(L)=\sqrt{\frac{a}{2D\pi}}e^{-\frac{a(L-\langle L \rangle)^2}{2D}},
\label{eq:Gauss}
\end{equation}
where $a$ is given by
\begin{equation}
a \equiv -\frac{d V}{d L}\bigg|_{\langle L \rangle} = -\frac{\partial
V}{\partial F}\frac{\partial F}{\partial s^*}s_B'(\langle L \rangle -
\varepsilon(m+1)).
\label{A}
\end{equation}
The drift velocity $V(L)$ can be inferred from $\varepsilon(w_{+}-w_{-}(1-s_{N-1}))$
and (\ref{FSM-relation}), which relates the mean occupancies at positions $L$ and
$L-\varepsilon(m+1)$,
\begin{equation}
V(L) = \varepsilon\left[w_{+} - w_{-} + w_{-} F(s^*(L-\varepsilon(m+1)))\right].
\label{wall-vel}
\end{equation}
\noindent By defining the drift $V(L)$ using the steady-state relation
$F$, we have implicitly made an adiabatic approximation where the
particles have reached a NESS for any wall position $L$.
We can also estimate the variance of the wall position by using
$\Sigma^{2} \approx D/a$, and (\ref{A}) for $a$. Upon differentiating
$V(L) = \varepsilon(w_+-w_-[1-F(s_B(L- \varepsilon(m+1)))])$, we find
$(\partial V/\partial F) = \varepsilon w_-$. We can estimate
$(\partial F/\partial s^*)\vert_{s^{*}_{eq}}$ using the finite segment
method, and we know $s_B'(\langle L \rangle - \varepsilon(m+1))$
exactly from (\ref{eq:bulk1}). Assuming that $\langle L^2 \rangle
\approx \int_{-\infty}^{\infty} dL L^2 Q(L) $, we expect the variance
of the wall position to be approximately
\begin{equation}
\Sigma^2=\langle L^2 \rangle - \langle L \rangle^2 =
-\frac{D}{\varepsilon w_- F'(s^*_{eq}) s_B'(\langle L
\rangle - \varepsilon(m+1))}.
\label{eq:Variance}
\end{equation}
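The chain of estimates above can be checked numerically. In the sketch below, the values of $F'(s^*_{eq})$ and $s_B'$ are illustrative placeholders (chosen so that $\Sigma$ lands near the value quoted later in the text); the real numbers come from the FSMFT and the bulk solution. The restoring coefficient is taken from (\ref{A}), $a = -\varepsilon w_- F'(s^*_{eq})\, s_B'$, which is positive for a stable wall since $s_B' < 0$, and the Gaussian of (\ref{eq:Gauss}) is then verified by quadrature to carry variance $D/a$.

```python
import math

# Placeholder inputs: F'(s*_eq) and s_B' are problem-specific values
# from the FSMFT and the bulk solution; the numbers below are assumed.
eps, w_plus, w_minus = 0.01, 0.005, 0.01
F_prime = 2.0        # dF/ds* at s*_eq (assumed)
sB_prime = -0.17     # s_B'(<L> - eps(m+1)) < 0 for a stable wall (assumed)

D = eps ** 2 * w_plus                      # wall diffusion constant
a = -eps * w_minus * F_prime * sB_prime    # restoring coefficient, Eq. (A); a > 0
sigma2 = D / a                             # predicted variance of the wall position

def Q(dL):
    """Gaussian of Eq. (eq:Gauss), centered on <L> (dL = L - <L>)."""
    return math.sqrt(a / (2 * math.pi * D)) * math.exp(-a * dL ** 2 / (2 * D))

# Quadrature check: Q is normalized and has variance D/a.
sigma = math.sqrt(sigma2)
dx = 12 * sigma / 40000
grid = [-6 * sigma + k * dx for k in range(40001)]
norm = sum(Q(x) for x in grid) * dx
var = sum(x * x * Q(x) for x in grid) * dx
```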
In Figure \ref{fig:WallDist}, we plot an example distribution $Q(L)$
found using both Monte Carlo simulations and from (\ref{eq:Gauss}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.9in]{Fig6.eps}
\end{center}
\vspace{-4mm}
\caption{Probability density of the wall position, $Q(L)$, is plotted
as a function of the deviation from the mean wall position $\langle
L\rangle$. The parameters $\alpha = 1$, $w_{+} = 0.005$, $w_{-} =
0.01$, $k_- = 0.01$, and $k_+ = 0$ yield $\langle L \rangle \approx
350$. The distribution predicted from (\ref{eq:Gauss}) is a close
approximation to that derived from MC simulations.}
\label{fig:WallDist}
\end{figure}
We have aligned the distributions such that their maxima coincide.
Using (\ref{eq:Variance}), the standard deviation $\Sigma \approx
0.1215$, in good agreement with the standard deviation found from MC
simulations $\Sigma\approx 0.143$. Since $\Sigma/\langle L\rangle
\sim 0.14/3.5 \ll 1$, the wall is fairly stable and not likely to fall
off the injection end of the lattice except on exponentially long time
scales.
\subsection{Time-dependent mean wall positions}
\label{unstable}
In the previous section, we explored the dependence of the
statistically stationary mean wall position on the model
parameters. However, a stable mean wall position may not always
exist. In this section, we use a FSMFT to determine the stability of
the wall and the conditions under which a permanent net wall drift
might arise.
The motion of the wall can be understood completely in terms of the
outer solution $s_B(x)$ -- given by inverting (\ref{bulk}) -- and the
particle density inside finite segment. First, we consider some
important properties of $s_B(x)$. Equation (\ref{bulk}) admits two
branches to the bulk solution, $s_B(x)$, because the argument of
$\ln[|\quad|]$ can either be positive or negative. The argument
approaches zero as $s$ approaches $s_{\Gamma} \equiv
k_{+}/(k_{+}+k_{-})$, the density arising from Langmuir kinetics
alone. When we invert $x(s_B)$ to find $s_B(x)$, we see that for
increasing $x$, one branch of the density profile $s_B(x)$ approaches
$s_{\Gamma}$ asymptotically from below, and a second branch approaches
$s_{\Gamma}$ asymptotically from above. A representative $s_B(x)$ is
plotted in Figure \ref{fig:dynamics}. Notice that $s'(x)>0$ in the
lower branch, and $s'(x)<0$ in the upper branch
\footnote[3]{When $s_B(x)$ passes through $s_B(x) = 0.5$, the sign of
$s'_B(x)$ changes. The profile $s(x)$ departs from $s_B(x)$ near
$s_B(x)=0.5$ because one of the assumptions used to derive $s_B(x)$ --
that $s'_B(x) = O(1)$ -- becomes invalid and the sign of $s'(x)$
fails to change upon passing through $s(x)=0.5$. See the Appendix for
further discussion of when the continuum equations may cease to be
valid.}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2.2in]{Fig7.eps}
\end{center}
\vspace{-4mm}
\caption{Four possibilities for the wall motion. Arrows indicate the
mean wall motion if it is at position $i$. (a) the wall could have no
net drift and a stable fixed mean position, with small perturbations
to the wall position decaying over time, (b) the wall could have no
net drift, but an unstable fixed mean position, with small
perturbations to its position causing it to drift indefinitely to the
left or right, (c) the wall could drift indefinitely to the left and
(d) the wall could drift indefinitely to the right. The outer solution
$s_{B}(x)$ is shown by the dotted curves.}
\label{fig:dynamics}
\end{figure}
If $\alpha > s_{\Gamma}$, the steady state density profile will lie on the
upper branch and the bulk density will have values satisfying
$s_{\Gamma}<s_B(x)<\alpha$. If the injection rate $\alpha < s_{\Gamma}$, the steady
state density profile will lie on the lower branch and the bulk
density will attain values $\alpha<s_B(x)<s_{\Gamma}$.
We are now ready to derive conditions for the existence of a fixed
mean wall position and stability criteria. In the adiabatic
approximation (\ref{wall-vel}), $m$ is the number of sites in the
finite segment and $\varepsilon = 1/N_0$. This equation expresses the
velocity of the wall at position $\varepsilon N = L$ in terms of a particle
density at position $(N-m-1)\varepsilon$. Since $s^*_{eq}$ is the
value of $s^*$ that puts no net drift on the wall, {\it i.e.}, $w_+ -
w_-(1-F(s^*_{eq})) = 0$, we can expand $V(L)$ from (\ref{wall-vel}) in
a Taylor series about $s^*_{eq}$ to find
\begin{equation}
V(L) \approx \varepsilon w_- F'(s^*_{eq})(s^*(L-\varepsilon(m+1)) - s^*_{eq}).
\label{wall-vel2}
\end{equation}
Because $F$ is a monotonically increasing function and $w_{-}> 0$, the
wall drifts to the right if $s^*(L-\varepsilon(m+1)) > s^*_{eq}$, to
the left if $s^*(L-\varepsilon(m+1)) < s^*_{eq}$ and has a fixed mean
position if $s^*(L-\varepsilon(m+1)) = s^*_{eq}$. If we now assume
that $m$ is sufficiently large so that the point $L -
\varepsilon(m+1)$ lies outside of the boundary layer, then
$s^*(L-\varepsilon(m+1))$ can be well approximated by the outer
solution given by $s_B(x)$, i.e. $s_B(L-\varepsilon(m+1)) \approx
s^*(L-\varepsilon(m+1))$. Furthermore, we know that $s_B(x)$
satisfies $\alpha \leq s_B \leq s_{\Gamma}$ on the lower branch and $s_{\Gamma} \leq
s_B \leq \alpha$ on the upper one. Therefore, we conclude that if
$s^*_{eq} \notin [\alpha, s_{\Gamma}]$, the wall can never have a fixed mean
position. In particular, for all $t$
\begin{equation}
\begin{array}{rl}
V(L(t)) & > 0 \quad \mbox{if} \quad s^*_{eq} < \alpha, s_{\Gamma}, \nonumber \\
\: & < 0 \quad \mbox{if} \quad s^*_{eq} > \alpha, s_{\Gamma},
\end{array}
\end{equation}
corresponding to an indefinite rightward and leftward drift,
respectively (cf. Fig. \ref{fig:dynamics}c and d).
If a fixed mean position does exist, we can understand
its stability by considering the sign of $dV/dL$. If this quantity is
negative (positive), the position is stable (unstable). These
possibilities are summarized in Figure \ref{fig:dynamics}. By
differentiating (\ref{wall-vel2}), we have
\begin{equation}
{\mbox{d} V\over \mbox{d} L} \approx \varepsilon w_- F'(s^*_{eq}) s_B'(L-\varepsilon(m+1)).
\end{equation}
Hence, if a mean wall position exists at $L$, a necessary and
sufficient condition for its stability is
\begin{equation}
s_B'(L-\varepsilon(m+1)) < 0.
\label{stab}
\end{equation}
In particular, when there is no adsorption ($k_+ = 0$), the bulk
solution $s_B(x)$ decreases monotonically from the injection site and
any mean wall position $\langle L\rangle$ induced by the kinetics will
be deterministically stable.
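The four possibilities summarized in Fig. \ref{fig:dynamics} reduce to a short decision rule: no fixed mean position can exist when $s^*_{eq}$ lies outside the interval bounded by $\alpha$ and $s_{\Gamma}$, and an existing fixed point is stable exactly when $s_B' < 0$ there. A minimal sketch of this classification:

```python
def classify_wall(s_eq, alpha, s_gamma, sB_prime=None):
    """Classify the wall dynamics (cf. Fig. fig:dynamics).
    s_eq: zero-drift density s*_eq; alpha: injection rate;
    s_gamma: Langmuir density k_+/(k_+ + k_-);
    sB_prime: s_B' at the fixed point, if one exists."""
    lo, hi = min(alpha, s_gamma), max(alpha, s_gamma)
    if s_eq < lo:
        return "indefinite rightward drift"
    if s_eq > hi:
        return "indefinite leftward drift"
    # A fixed mean position exists; stability follows the sign of s_B'.
    if sB_prime is None:
        return "fixed point (stability undetermined)"
    return "stable fixed point" if sB_prime < 0 else "unstable fixed point"
```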
\begin{figure}
\begin{center}
\includegraphics[height=1.8in]{Fig8a.eps}\\
\vspace{4mm}
\includegraphics[height=1.8in]{Fig8b.eps}
\end{center}
\vspace{-4mm}
\caption{In (a), we plot the density profile near the wall from
FSMFT and from MC simulations. We find that $s_{N-1}$ is less than
$0.9=1-w_{+}/w_{-}$, the value required for the wall to have zero
drift. In (b), the position of the wall $N(t)$ found from MC
simulations is plotted with the expected $N(t)$ calculated using
FSMFT. Parameter values were $\alpha = 0.01$, $w_+=0.001$, $w_-=0.01$,
$k_+=0.0001$, $k_-=0.01$.}
\label{WALLOFF}
\end{figure}
Figures \ref{WALLOFF} and \ref{ESCAPE} compare the result
(\ref{wall-vel}) with simulation data. Figure \ref{WALLOFF} shows the
results of a MC simulation in which the wall particle has a mean
leftward drift. In Fig. \ref{WALLOFF}a, the density profile in the
wall frame found from MC simulation and that predicted using FSMFT are
shown. Far from the injection site $s_{B}$ asymptotes to
$s_{\Gamma}$. Thus, when the wall starts at a position $L_{0} \gg 1$,
we assume $s^*(L-\varepsilon(m+1))=s_{\Gamma}$ and the wall's velocity
$V$ is independent of its position $L$.
Since $s_{N-1}<1-\frac{w_+}{w_-}$, we expect, from (\ref{wall-vel}),
that the net drift on the wall will be negative. In
Fig. \ref{WALLOFF}b, we compare $N(t)=L(t)/\varepsilon$ found from MC
simulations with that calculated assuming $L(t) = L_0+V t$ where
$V$ is calculated using (\ref{wall-vel}) in the large $L$ limit. Similarly,
Fig. \ref{ESCAPE}a shows a density profile from MC simulation in the
case where the wall acquires a mean rightward drift. In
Fig. \ref{ESCAPE}b both MC simulations and FSMFT show that in the wall
frame, the occupancy of the site adjacent to the wall is greater than
$1-\frac{w_+}{w_-}$, and we expect a mean rightward drift. In
Fig. \ref{ESCAPE}c, we see that this is the case, and the predicted
time course $N(t)=L(t)/\varepsilon$ is compared with $N(t)$ found using MC
simulations.
In contrast to the case of a static mean wall position, when the wall
has a position-independent velocity, $V$, the diffusion constant $D$
of the wall is given by $D = \varepsilon^2(w_++w_--w_-F(s_{\Gamma}))/2$. The
probability density $Q(L,t)$ describing wall position then follows
\begin{equation}
\frac{\partial Q}{\partial t}=D\frac{\partial^2 Q}{\partial L^2}-V \frac{\partial Q}{\partial L},
\end{equation}
the solution of which is
\begin{equation}
Q(L,t)=\frac{1}{2\sqrt{\pi D t}}e^{-\frac{(L-Vt-L_0)^2}{4Dt}}.
\end{equation}
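As a sanity check on this drifting-Gaussian solution, its first two moments can be confirmed by quadrature: the mean advances as $L_0 + Vt$ and the variance grows as $2Dt$. The parameter values in the sketch below are arbitrary illustrative choices.

```python
import math

def Q(L, t, D, V, L0):
    """Drifting-Gaussian solution for the free wall position."""
    return math.exp(-(L - V * t - L0) ** 2 / (4 * D * t)) \
        / (2 * math.sqrt(math.pi * D * t))

# Arbitrary illustrative parameters.
D, V, L0, t = 1e-3, -0.05, 10.0, 50.0

sigma = math.sqrt(2 * D * t)
center = L0 + V * t
dx = 12 * sigma / 40000
grid = [center - 6 * sigma + k * dx for k in range(40001)]
norm = sum(Q(x, t, D, V, L0) for x in grid) * dx
mean = sum(x * Q(x, t, D, V, L0) for x in grid) * dx
var = sum((x - mean) ** 2 * Q(x, t, D, V, L0) for x in grid) * dx
```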
We now discuss our results in the context of the phase transitions
\cite{Evans-03,Frey-04,SCHUTZ2} of the interior density. When Langmuir
kinetics is coupled to a fixed domain TASEP with open boundaries,
qualitative properties of $s(x)$ can change abruptly when
adsorption/desorption and injection/ejection rates vary. For example,
an interior boundary layer separating regions of low and high density
can suddenly disappear, replaced with a single region of high density
as the injection rate $\alpha$ is increased.
Our moving wall TASEP system coupled with Langmuir kinetics does
\emph{not} support the phase structure seen in
\cite{Evans-03,Frey-04,SCHUTZ2}. Because we limit ourselves to
$k_+<k_-$, we can see from (\ref{eq:bulk1}) that when $s_B(x)>0.5$,
(corresponding with a high density region), $s_B'(x)>0$. From
(\ref{stab}), the wall cannot have a stable equilibrium position
within the high density region, and we do not find time-independent
density profiles with low to high density interior shocks (a low-high
shock), as is observed in \cite{Evans-03,Frey-04}. In fact,
references \cite{Frey-03,SCHUTZ2} show that high-low shocks are never
stationary in an exclusion process with Langmuir kinetics. Therefore,
interior shocks are never stable in our model system. In our problem,
the presence of a wall that responds to particle dynamics relaxes any
shocks in density that may otherwise occur in the interior, forcing
them to the left or right boundaries.
\begin{figure*}
\begin{center}
\includegraphics[height=1.52in]{Fig9a.eps}\hspace{1mm}
\includegraphics[height=1.52in]{Fig9b.eps}\hspace{1mm}
\includegraphics[height=1.52in]{Fig9c.eps}
\end{center}
\vspace{-2mm}
\caption{When $s_{\Gamma}>s^*_{eq}$ and $\alpha>s^*_{eq}$, the wall
escapes. In (a), far from the injection site, the particle density
approaches $s_{\Gamma}= 0.029$ as predicted by analytic theory. In
(b), we use FSMFT and MC to find $s_{N-1}>0.9=1-w_{+}/w_{-}$, the
value for which the wall's drift would be zero. In (c), we show
$N(t)$ to compare the escape velocity calculated from finite segment
analysis to the escape velocity found in simulations. Although the
value of $s_{N-1}$ found by FSMFT differs from that found in MC
simulations by only $0.4\%$, the calculated velocities differ by
$17\%$. Parameter values were $\alpha = 0.3$, $w_+ = 0.001$, $w_- =
0.01$, $k_+=0.0003$, and $k_-=0.01$.}
\label{ESCAPE}
\end{figure*}
\section{Summary and Conclusions}
Our model of an asymmetric exclusion process with Langmuir kinetics
and a movable right boundary, and the corresponding results provide a
guide to understanding biophysical processes in which many processing
molecular motors push against a load. The detachment and attachment
rate of the motors, as well as the injection rate at the entry site,
determine the load the motors can support. If a static load particle
position is reached, we see that the mean wall distance from the
injection site saturates upon increasing injection rate $\alpha$ past
about $0.5$. Our analysis can be used to predict whether biological
processes such as ribosome movement and filopodia/filament extension
continue or reach a static configuration.
Within our model, we found four parameter regimes. In the first
regime, $(s_{\Gamma} < s_{eq}^{*} < \alpha)$, the wall attains a
stable equilibrium position. In the second regime,
$(\alpha < s_{eq}^{*} < s_{\Gamma})$, there is an equilibrium, but
unstable mean wall position. In the third and fourth regimes,
$(s_{eq}^{*}\notin [\alpha, s_{\Gamma}])$, the wall will always feel a
net drift to the right and left, respectively. In the latter
case, the wall will fall off the lattice in a time scaling linearly
with the starting position. When there is a stable equilibrium wall
position, we can find the mean wall position $\langle N \rangle$ as a
function of the particle injection rate $\alpha$, the adsorption and
desorption rates $k_{\pm}$, and the intrinsic hopping rates of the
wall $w_{\pm}$. Determination of $\langle N \rangle$ requires
accurate evaluation of the particle density near the wall. Using a
hybrid finite segment/mean field approach in the reference frame of
the fluctuating wall, we accurately determine the particle density
near the wall, and use this to determine the wall's steady-state
position.
When there is no steady state wall position, the finite segment mean
field approach allows us to estimate the steady state velocity of the
wall far from the injection site. In our analysis, we assumed that
the particle density has reached steady state, thus ignoring the
initial particle density profile and wall position. Even in regimes
where we expect an equilibrium wall position at steady state, if the
wall is initially near the injection site, and the particle density is
initially very low, we would expect the wall to fall off the lattice
before reaching its equilibrium position. The time to fall off the
lattice may be treated with extensions of large deviation theory,
as suggested by Fig. \ref{fig:WallDist} \cite{LDT}. A number of
interesting extensions of the free boundary problem arise. For
example, we expect for certain parameter regimes that slow bottleneck
sites \cite{Chou-04} can attract the fluctuating wall. These features
and other novel applications to biophysical systems deserve
investigation.
\vspace{3mm}
This material is based upon work supported under a National Science
Foundation Graduate Research Fellowship. The authors also acknowledge
support from the NSF through grant DMS-0349195, and the NIH through
grant K25 AI41935.
\section{Appendix}\label{Ap:Continuum}
We can take the continuum limit of (\ref{MFTa}) by defining $x=(i-1) \varepsilon$
where $\varepsilon$ is the lattice spacing. We find,
\begin{equation}
\displaystyle {\partial s(x,t)\over \partial t} = \varepsilon s'(2s-1)+{\varepsilon^{2}\over
2}s'' -k_{-}s + k_{+}(1-s) = 0.
\label{CONTINUUM}
\end{equation}
\begin{widetext}
The left hand boundary condition, (\ref{MFTb}) becomes
\begin{equation}
\begin{array}{l}
\displaystyle {\partial s(0,t)\over \partial t} = \alpha(1-s(0)) - k_{-}s(0)
+k_{+}(1-s(0)) - s(0)(1-s(\varepsilon)) = 0.
\end{array}
\label{BCLEFT}
\end{equation}
When significant changes in the solution near $x=0$ vary over a length
scale that is $> O(\varepsilon)$, this equation is well approximated
by
\begin{equation}
\begin{array}{l}
\displaystyle {\partial s(0,t)\over \partial t} = \alpha(1-s(0)) - k_{-}s(0)
+k_{+}(1-s(0)) - s(0)(1-s(0)-\varepsilon s'(0)) = 0.
\end{array}
\end{equation}
The right hand boundary condition, analogous to (\ref{MFTEQNc}), becomes
\begin{equation}
\begin{array}{l}
\displaystyle {\partial s(L,t) \over \partial t} = -k_{-}s(L)
+k_{+}(1-s(L)) +s(L-\varepsilon)(1-s(L))
+w_{-}s(L-\varepsilon)(1-s(L))-w_{+}s(L) = 0.
\end{array}
\label{BCRIGHT}
\end{equation}
Again, when significant changes in the solution near $x=L$ vary over a
length scale that is $>O(\varepsilon)$, this equation is well approximated by
\begin{equation}
\displaystyle {\partial s(L,t) \over \partial t} = -k_{-}s(L)
+k_{+}(1-s(L))+(s(L)-\varepsilon s'(L))(1-s(L))
+ w_{-}(s(L)-\varepsilon s'(L))(1-s(L))-w_{+}s(L) = 0,
\label{BCrighteps}
\end{equation}
at the free boundary $L(t)$, where $s(L)$ defines the particle density
at the position just to the left of the wall.
In the wall frame, the continuum limit of equation (\ref{MFTEQNa}) is
\begin{equation}
\begin{array}{l}
\displaystyle {\partial s(x,t)\over \partial t} = \varepsilon [w_+-(1+w_-(1-s_{N-1}))]s'(1-2s)+{\varepsilon^{2}\over
2}[1+w_++w_-(1-s_{N-1})]s'' -k_{-}s + k_{+}(1-s).
\end{array}
\label{CONTINUUM_WALL}
\end{equation}
\end{widetext}
In the wall frame, an interior particle shifts to the right when it either hops to the
right, which it does with a (normalized) rate of unity, or when the
wall hops to the left, which it does with rate $w_{-}(1-s_{N-1}) = w_{+}$ in steady
state.
Similarly, a particle shifts to the left when it hops to the left or
when the wall hops to the right, which it does with rate $w_+$. In
the bulk, where $s'(x)=O(1)$, the diffusive term is small and can be
neglected. The only term we retain that depends on hopping rates is
$(w_+ - 1 - w_+)(1-2s)s'$, which is equal to the value of the
corresponding term in the lab frame, $-(1-2s)s'$. The bulk density is
described in both frames by
\begin{equation}
\varepsilon s'(1-2s)+k_-s-k_+(1-s)=0.
\end{equation}
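This bulk equation can be integrated numerically to confirm the asymptotic relaxation toward $s_{\Gamma} = k_+/(k_+ + k_-)$ discussed earlier. The sketch below uses a standard RK4 step, takes $\varepsilon = k_+ + k_-$ (consistent with $N_0 = (k_+ + k_-)^{-1}$ above), and imposes $s(0) = \alpha$, which is only valid when $\alpha$ is small enough that no left boundary layer forms.

```python
def integrate_bulk(alpha, k_plus, k_minus, x_max=20.0, dx=1e-3):
    """RK4 integration of the steady-state bulk equation
    eps*s'(1-2s) = k_+(1-s) - k_-*s, with eps = k_+ + k_-.
    Starting from s(0) = alpha (valid for small alpha), the profile
    relaxes to the Langmuir density s_Gamma = k_+/(k_+ + k_-)."""
    eps = k_plus + k_minus

    def f(s):
        return (k_plus * (1 - s) - k_minus * s) / (eps * (1 - 2 * s))

    s = alpha
    for _ in range(int(x_max / dx)):
        k1 = f(s)
        k2 = f(s + 0.5 * dx * k1)
        k3 = f(s + 0.5 * dx * k2)
        k4 = f(s + dx * k3)
        s += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return s
```

Starting below $s_{\Gamma}$ the profile approaches it from below (the lower branch); starting above, from above (the upper branch).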
In our problem, the second order term in (\ref{CONTINUUM}) becomes
important in the right hand boundary layer, and in the left hand
boundary layer when there is one. On the left, when $s \approx 0.5$,
one cannot assume that $s$ varies slowly. Making the change of
variables $x = \xi X$ ($\xi \ll 1$), equation (\ref{CONTINUUM})
becomes
\begin{equation}
\frac{\varepsilon}{\xi}s'(2s-1)+
\frac{\varepsilon^2}{2 \xi^2}s''-k_-s+k_+(1-s)=0;
\label{XIEQN}
\end{equation}
furthermore, we know that $\varepsilon$ is necessarily of order
$(k_-+k_+)$. Since the first order term becomes very small as
$s\rightarrow 0.5$, the second order term must match either the
adsorption or desorption term. In this case, the second order term
will be balanced when $\xi \sim \sqrt{\varepsilon}$. Therefore, we expect a
boundary layer of width $O(\sqrt{\varepsilon})$ to arise near the injection
site if $\alpha>0.5$ and $s_{\Gamma}<0.5$, or if $\alpha<0.5$ and
$s_{\Gamma}>0.5$.
While the boundary layer on the left can be captured using a second
order continuum equation, the boundary layer on the right cannot. In
equation (\ref{CONTINUUM}), we kept terms only up to order $\varepsilon^2$ in
our expansion $s(x+\varepsilon)$ = $s(x)+\varepsilon
s'(x)+\frac{\varepsilon^2}{2}s''(x)+\frac{\varepsilon^3}{6} s^{(3)}(x)+...$. The
boundary layer on the right hand side arises to join the outer
solution with the boundary condition $s(L) = 1-w_{+}/w_{-}$. Making
the substitution $X = x/\xi$, and matching first and second order
terms in (\ref{XIEQN}), we find that $\xi = O(\varepsilon)$. In
the boundary layer on the left, $s \approx 0.5$, and we assume that
the term $\varepsilon(2s-1)s'$ is relatively small. When we match the
second order term, $\varepsilon^2 s''/2$, with the adsorption and desorption
terms, $k_{+}(1-s)$ and $k_{-}s$, we find that $\xi \sim
\sqrt{\varepsilon}$. However, on the right, we cannot assume that $s
\approx 0.5$. We therefore assume that the leading terms are
$\varepsilon(2s-1)s'$ and $\varepsilon^2 s''/2$, which leads us to
conclude that the wall-hugging boundary layer has width of
$O(\varepsilon)$. In this case, all terms $\varepsilon^n s^{(n)}(X)/n!$ in the
Taylor expansion of (\ref{BCRIGHT}) are $O(1)$, and continuum theory
breaks down.
Posted on September 30, 2010 July 25, 2018 by Steve Murray Entertainment
Casting The Mighty Ducks with NHL Stars
An idea that's floated around Lowdown HQ for the last couple of years is which NHL players are most like players from the Mighty Ducks movie franchise. In my opinion, the original movie, The Mighty Ducks, is the greatest hockey movie ever made, so this comparison has to do the franchise justice. That's why this post has taken an obscenely long time: nearly two years of debate to get the perfect roster of NHL players who are most like players on The Mighty Ducks.
Adam Banks = Sidney Crosby
Without a doubt, Banks is the star of the Mighty Ducks. He may not be the best defensive player on the squad but no one can match his skills or offensive firepower. Similarly, Sidney Crosby, occasionally referred to as "The Next One," has nearly unmatchable offensive firepower. Only Malkin and Ovechkin can match him point for point. Where Sidney sets himself apart from his Russian comrades is that he, like Banks, has as much charisma as a paper bag. The most charisma Sid has ever shown is saying "No" in an NHL on Versus commercial. Banks… I'll get back to you on that. Maybe falling onto the stage at Eden Hall. Also, they can be whiners at times.
Charlie Conway = Jonathan Toews
Jackie helped me with this one so if you don't like it you can blame him. I can see the comparison, though. Conway is the leader of his team. He's not the flashiest player on the team but he provides steady scoring and is clutch with penalty shots. He can even backcheck which is a seldom used skill in the movies. Toews is the captain of the Chicago Blackhawks and regularly outshined by his offensively gifted teammate Patrick "20 Cent" Kane. Toews is also guaranteed to make the highlight reels regularly with some of the moves he busts out to get around defenders and goalies.
Fulton Reed = Zdeno Chara
These two guys are easily the biggest and scariest guys on their respective teams. Both have ridiculously fast slap shots that nobody wants to get in front of. Unlike Fulton, Chara can hit the net more than one time out of five. Though I could be wrong; I don't see enough Bruins games to really know. Why do they each have the fastest shots on their teams? Because they're each the biggest players on their teams. While we're not sure how big Fulton is, we do know that Chara is about 6'9″ and towers over every other player in the league. Because of their size, they're not the fastest or most graceful skaters you'll ever see. Like a lot of poorer skaters, they make up for it with physical play. They'll hit you hard with their shoulders and then they'll hit you hard with their fists. Despite their big hits and big shots, they're not big talkers. However, both guys are leaders of their teams.
Greg Goldberg = Vesa Toskala
I'm not sure who I feel worse for in this. It's not a flattering comparison whichever way you look at it. But, really, how often do we ever see these guys make saves? In the whole series, we might have seen Goldberg make five saves. If you watch a Leafs game, you would know that's about as many as Toskala will make over 60 minutes. If you recall some of the situations that Goldberg has ended up in and that 197-foot shorthanded goal that Toskala gave up, you know that both are out there mostly for comic relief. Both are starters that have been shown up by the new guy and found themselves riding the pine. In D3, we saw Goldberg out of the pads and playing defense, which turned out to be a good career move for him. I can't help but think it would be a good one for Toskala too.
Les (Dave) Averman = Max Talbot
Somehow his name got changed after the first movie. Actually, in the credits Averman's name is Les but Bombay calls him Dave. Slightly similarly, some folks call Talbot "Max" and others call him "Maxime." Okay, it's not the same thing but both are better known for their comic relief rather than their on ice play. Averman and Talbot do score the occasional goal and play centre. However, Averman is better known for his comedy shtick than his stick work. Talbot might be better known for his unintentionally funny commercials for A&L Motors (superstar treatment) than playing on a Stanley Cup champion.
Connie Moreau = Martin St. Louis
It would have been easy to compare Connie to someone like Hayley Wickenheiser as a woman who is good enough to play with the men. But this is a comparison to NHL players. The best comparison for Connie would be another small player. Both she and Marty St. Louis are on the small side compared to other players on the ice, but they're a hell of a lot tougher than they look. That's where the comparison ends, really. I'd like to say Connie is as skilled offensively as St. Louis, but she sure hasn't shown it.
Guy Germaine = Sean Avery
Neither Germaine nor Avery will ever be mistaken for all-stars. That being said, they are occasionally cogs in their team's offence. I believe the term that most talking heads would use to describe them is "secondary scoring." However, their scoring prowess on the ice is overshadowed by their scoring prowess off the ice. Germaine hooked up (or at least tried to hook up) with fellow Duck Connie Moreau. I can't say that Avery keeps it in the locker room but he does have a list of "sloppy seconds" a mile long.
Jesse Hall = Mikhail Grabovski
Another Jackie choice. I can see this one too. If there's a fight to be started among teammates, Hall and Grabovski are usually going to be involved at the heart of the matter. Whether it's fighting teammates or other players or coaches, they're your go-to guys. Hall was part of the revolt against Coach Bombay in the first movie and was anti-Banks and anti-Tyler in the first two movies. Grabovski has managed to fight two of his own teammates (Jason Blake and Francois Beauchemin) over the last two seasons and start a feud with Montreal's Sergei Kostitsyn.
Dean Portman = Derek Boogaard
There isn't much to either guy's game. Portman and Boogaard are enforcers, plain and simple. Neither has a discernible skill set outside of hitting someone hard and then punching them harder. Actually, have either of them ever scored a goal?
Julie "The Cat" Gaffney = Marc-André Fleury
The most obvious comparison is that both are starters for championship-calibre teams. But when you dig down, you realise that they're really the new kids on the block. Both are young goalies looking for some respect. Gaffney had to outwork Goldberg to get any consideration from Coaches Bombay and O'Ryan for the starting job and even then needed Goldberg to struggle for her to get the job. Fleury was handed the starting job in Pittsburgh but is getting no love from the Canadian media for the Olympic starting job. Only thanks to Cam Ward's struggles has he gotten any consideration for the third goalie spot on Team Canada. Despite being so talented, both are overshadowed by the offensive stars on their respective teams. However, when push comes to shove, they pick up their games and carry their teams to victory.
Dwayne Robertson = Rick Nash
Robertson is the best puck handler on the Ducks. Anytime that he gets his stick on the puck, there's a very good chance that there will be a highlight reel play upcoming. Rick Nash isn't just the best puck handler on his team but the best in the whole league. It's almost a given that he will make a play that features prominently on any highlight reel. Unlike Robertson, however, Nash isn't just secondary scoring. He's a former Rocket Richard Trophy winner as the NHL's top goal scorer and he's the captain of his team to boot.
Russ Tyler = Rob Davison
The knuckle puck isn't actually physically possible, but players have had some tricky shots go in the net. Take Davison, whose claim to fame is a 197-foot shorthanded goal that bounced down the ice and past Vesa Toskala. That's about as close as we'll get to a real knuckle puck in NHL play. Also, both guys aren't even close to first-line players. Davison spends his time either in the AHL or in an NHL press box while Tyler is riding the pine when the game is on the line (unless the knuckle puck is required).
Luis Mendoza = Andrew Cogliano
Mendoza is the fastest player every time he touches the ice. He has occasional stopping problems though they are increasingly intermittent. Cogliano is considered among the fastest, if not the fastest player in the NHL. He doesn't have stopping issues but like Mendoza, he suffers from bouts of inconsistency. If they find some consistency, they could be stars. For now, they're generally after-thoughts on their teams.
Ken Wu = Paul Kariya
Another Jackie selection. Paul Kariya is his favourite player which might say what he thinks about Wu. He may have hinted that it had something to do with the "Asianness" (as Jackie would put it). Kariya and Wu are the only Asian players in the leagues they're in. They're also finesse players that rely more on skill than brute force to be effective. They're also fairly brittle players. Wu can barely move after any hit and Kariya is very injury prone though that is largely thanks to that Scott Stevens related injury.
Peter Mark = Ray Emery
Mark was only in the first Mighty Ducks movie. He might be best known for leading the locker room revolt and being flipped upside-down by Lewis the limo driver. It's easiest to describe him as a little shit disturber. Ray Emery has a reputation for being a bit of a locker room cancer and a shit disturber. If a character in any movie had a real-life counterpart, it's Mark and Emery.
Dave Karp = Todd Fedoruk
Karp was only in the first movie so I don't have a lot to base this comparison on. Even in that movie, we didn't see a lot of Karp. He's probably best remembered for getting hit in the head with a slap shot and getting the first ever Disney movie concussion. I don't remember how good Fedoruk is at getting hit by pucks, but he does have a history of getting hit in the head. He had his orbital bone shattered in a fight by Derek Boogaard and was knocked out by Colton Orr. Also, both Karp and Fedoruk can best be described as warm bodies just there to fill out a lineup.
Terry Hall = Rob Niedermayer
Hall was one of the original D-5 players but I'm pretty sure he didn't even have a line in the first movie. The star Hall brother was Jesse. Terry was sort of a warm body on the last line that was well overshadowed by his brother. Rob Niedermayer is a decent player. He's more of a defensive forward than offensive star. His brother Scott is a near Hall of Fame defenseman. Rob might play better defense but Scott is the faster and more offensively gifted player and will always outshine Rob. Speaking of Rob, did he retire? Anyone know where he is?
Tammy Duncan = Claude Lemieux
The Duncan siblings were bit players in the first movie. Hell, if I didn't tell you that they were the figure skaters, you might not remember them. Fortunately, the CBC started a new reality show last season called Battle of the Blades (though I prefer to call it Skating with the Stars). The surprise sensation of the series was noted thug and the dirtiest player in the game Claude Lemieux, who finished second in the competition. Like Lemieux, Tammy also had a bit of a temper and came up with a big play in the big game. That and, as a Red Wings fan, I like having the opportunity to compare Claude to a figure skater.
Tommy Duncan = Tie Domi
Tommy was the smaller of the two Duncan siblings. Tie is the smaller of the two recast Duncan siblings. They're also both more successful as figure skaters than hockey players. Like Lemieux, Domi was also a participant on CBC's Battle of the Blades but went out long before the finish. So Tie's time in figure skating was like his hockey seasons – shorter than most everyone else's. That and, as someone who has never cared for Tie Domi, I like having the opportunity to compare him to a figure skater.
Can Fages is a building in Porqueres (Pla de l'Estany) included in the Inventory of the Architectural Heritage of Catalonia.
Description
It is a farmhouse (mas) with a basilica-style structure, in which three joined volumes can be distinguished in elevation: a taller central body with a gable roof, to which two lateral bodies of lower height with single-pitch roofs are attached. The building consists of a ground floor, an upper floor and an attic. The entrance door, in the form of a segmental arch, is flanked by two simple windows with single-piece arched lintels. Three windows of the same type open on the upper floor, aligned with the openings of the ground floor. The façade is arranged according to criteria of symmetry. The building was built of small, irregular stone, except for the door and window frames, which are composed of large, well-squared stones.
History
We do not have any date referring to the building, but given its structure an approximate date in the or centuries should be proposed. The house underwent a restoration in the .
References
Monumental heritage of Porqueres
Masies (farmhouses) of Porqueres
\section*{Introduction}
Human mobility is a vital social activity in our society that is relevant to various applications in commerce, urban design, marketing, and economics while also being involved in the spreading of diseases such as COVID-19.
Today, the popularisation of mobile phones has enabled the collection of real-time, massive data, in addition to ordinary survey-based collection.
The collected data are usually aggregated as an \textit{origin-destination (OD) matrix} (Fig. \ref{fig:odtable}),
which describes how many people are moving from one place (origin) to another (destination).
Thus, the mobility data characterises the relationships between places based on human behaviour,
and is expected to reveal the places that attract human flow and their basins.
Such information tells us the centres and limits of cities based on the human behaviour of real-time movements and unfolds the actual shapes of cities,
which is dynamically changeable according to years, transportation methods, and movement restrictions.
They, in turn, aid location decision making for commercial or public buildings, the optimisation of transportation systems, urban planning by policy makers, and the measures for movement restrictions in COVID-19 spreading.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{ODtable.pdf}
\caption{\textbf{Origin-Destination (OD) matrix and the concept of the potential of human flow.}}
\label{fig:odtable}
\end{figure}
To reveal the spatial structure of cities,
we consider the {\it scalar potential} of human flow.
Potential is a popular mathematical concept used in various scientific fields, from physics to economics.
In the context of our study, it is defined as a function of location, and its gradient yields a net movement of people among locations.
Such a potential landscape provides an intuitive perspective of human flow by analogously representing water flowing from a higher place to a lower place.
Furthermore, it reduces the relational flow data to location-level statistics that are ready to be shown on a map.
From the map, we can easily identify the sinks and sources of human flow, as illustrated in Fig. \ref{fig:odtable}.
A sink of human flow indicates attractive places.
Potential landscape can visualise the urban structure behind massive data of human mobility
and utilise it for relevant applications, if successfully introduced.
However, it is not obvious how to introduce the potential to human flow.
Unlike an electromagnetic field,
human flow is not described by a two-dimensional vector field, but as an OD matrix.
Furthermore, it is an open question whether human flow can be effectively described by a potential in the first place, according to Helmholtz's theorem.
In the literature,
the OD matrix is converted to a 2D vector field by averaging all trips from each location \cite{Mazzoli2019b}.
This is analogous to considering only the motion of the centre of mass instead of the motions of the individuals.
The resultant vector field was found to be almost irrotational, and a scalar potential was introduced.
However, this aggregation discards the place-to-place information of the original data.
As demonstrated in benchmark tests using synthetic data (Supplementary Note 1),
it is difficult using the previous method to identify the number of centres and their areas expressed in the given data.
Another approach is to define a potential \cite{Stewart1947} or an attractiveness \cite{HarrisWilson1978} based on the so-called gravity model \cite{gravity_zipf1946,Ullman1956,WilsonBook1974}, which is a well-known model of human flow.
These measures have been evaluated by using several residential and economic datasets \cite{Geurs2004,Ellam2018}.
However, these measures are specific to the assumed model and are not calculated from the OD matrix data.
Here, we provide a straightforward introduction of a potential to the OD matrix by applying the Hodge-Kodaira decomposition of graph flow \cite{DeRham1984,hodge1989theory,Jiang2011,Kodaira1949,Warner1983}.
As described in the Method section, the human flow is uniquely decomposed into a potential-driven (gradient) flow and another circular flow.
The potential at each place is directly and easily calculated from a given OD matrix
without any model assumptions and calibration parameters.
The potential is interpretable: it refers to the difference between night-time and daytime populations for commuting trips.
Furthermore, the decomposition allows us to determine how well the potential describes human flow by evaluating the percentage of the gradient component.
We find that the circular component in human flow is not always negligible.
This is in contrast to previous studies where flow is, by assumption, described as the gradient component \cite{Stewart1947} or the circular flow is treated as noise \cite{Mazzoli2019b}.
In the following, we depict the potentials of the commuting flow in London for several different transport methods,
and show the evolution of the potential landscape over 30 years in Tokyo.
We then study the percentage of the gradient component in metropolitan areas in the USA.
Finally, we discuss the practical meaning of the potential and limitations of the method.
\section*{Overview of Hodge-Kodaira decomposition to an OD matrix}
Here, we overview Hodge-Kodaira decomposition, which is applied to an OD matrix (see Methods for the details).
First, we consider the net flow of movement from a given OD matrix $M$: for example,
when 150 persons move from location $i$ to another location $j$ and 50 people move in the opposite direction, we consider a net movement of 100 persons from $i$ to $j$.
The net flow is given by
\begin{equation}
A = M - M^{\intercal}, \label{eq:netflow}
\end{equation}
where $M^{\intercal}$ denotes the transpose of $M$.
Matrix $A$ is skew-symmetric, that is, $A_{ij} = - A_{ji}$,
and is possibly described by combinatorial gradient of a potential $s$, given by
\begin{equation}
(\text{grad}\, s)(i, j) = s_j-s_i.
\end{equation}
Then, we define an optimisation problem for potential $s$:
\begin{equation}
\min_s \lVert \text{grad}\ s - A \rVert_2^2 = \min_s \left[ \sum_{i,j} \left[ (s_j - s_i) - A_{ij} \right]^2 \right]. \label{eq:optimization_problem}
\end{equation}
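As a numerical sanity check (not part of the original analysis), the optimisation problem (\ref{eq:optimization_problem}) can be solved with an off-the-shelf least-squares routine and compared against the closed-form solution (\ref{eq:potential}) derived below; the net-flow data here are randomly generated. A minimal sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

# Random skew-symmetric matrix standing in for the net flow A = M - M^T.
B = rng.normal(size=(N, N))
A = B - B.T

# One least-squares row per ordered pair (i, j): the unknowns are the
# potentials s_0..s_{N-1}, the design row is e_j - e_i, the target is A_ij.
rows, targets = [], []
for i in range(N):
    for j in range(N):
        e = np.zeros(N)
        e[j] += 1.0
        e[i] -= 1.0
        rows.append(e)
        targets.append(A[i, j])

# lstsq returns the minimal-norm least-squares solution.
s_lsq = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0]

# Closed-form minimal-norm solution: s_i = -(1/N) * sum_j A_ij.
s_closed = -A.sum(axis=1) / N

print(np.allclose(s_lsq, s_closed))  # True
```

Because the design matrix has the constant vector in its null space, the least-squares solution is only defined up to an additive constant; `lstsq` picks the minimum-norm representative, which is exactly the zero-mean solution of (\ref{eq:potential}).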
According to the combinatorial Hodge theory \cite{Jiang2011}, the space of net flow $\mathcal{A}$ is orthogonally decomposed into two subspaces:
\begin{equation}
\mathcal{A} = \text{im}(\text{grad}) \oplus \text{im}(\text{curl}^*),
\end{equation}
where $\text{curl}$ is the combinatorial curl operator and $\text{curl}^*$ is its adjoint operator.
Thus, the optimisation problem is equivalent to an $l_2$-projection of $A$ onto im(grad),
and the minimal norm solution is simply given by
\begin{equation}
s_i = -\frac{1}{N} \text{div} A = - \frac{1}{N} \sum_{j=1}^N A_{ij}, \label{eq:potential}
\end{equation}
where $s_i$ is the potential at the $i$th location and $N$ is the number of locations.
It is noted that $s_i$ is negative potential ($s_i=-V_i$).
This means that we observe more trips from a place with lower potential to another place with higher potential.
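In code, equations (\ref{eq:netflow}) and (\ref{eq:potential}) reduce to a transpose and a row sum. A minimal sketch in Python/NumPy, using a hypothetical 3-zone OD matrix in which zone 0 acts as a commuting centre:

```python
import numpy as np

# Hypothetical OD matrix: M[i, j] = trips from zone i to zone j.
M = np.array([
    [0, 50, 30],
    [150, 0, 20],
    [80, 40, 0],
], dtype=float)
N = M.shape[0]

# Eq. (1): net flow, skew-symmetric by construction.
A = M - M.T

# Eq. (5): minimal-norm potential, s_i = -(1/N) * sum_j A_ij.
s = -A.sum(axis=1) / N

print(s)  # zone 0 receives a net inflow, so its potential s_0 is the largest
```

Zones with positive $s$ are net attractors (sinks) of the flow; zones with negative $s$ are net sources, and the potentials always sum to zero because $A$ is skew-symmetric.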
\section*{Potential landscapes in cities}
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{london_type2.pdf}
\caption{
\textbf{Negative potential $-V$ of the home-work trips in London.}
(\textbf{A})
The potential at a place is indicated by its colour and its height.
(\textbf{B,C})
The potentials are depicted for the selected trips by specific transport methods.
}
\label{fig:PotentialLondon}
\end{figure}
Figure \ref{fig:PotentialLondon} depicts the negative potential $-V_i (=s_i)$ of the decomposed gradient flow in Greater London, using a person-trip dataset in 2011.
The OD matrix shows the number of commuters aggregated by the middle layer super output area (MSOA) in the census 2011.
The potential has the largest peak at ``City of London 001'', literally the centre of London.
Its neighbouring areas, such as ``Westminster 018'' and ``Westminster 013'', also have a large potential.
Another peak, that is, a local maximum in the potential landscape, is seen at ``Tower Hamlets 033'',
and there are small peaks outside the central area of London.
Most other areas are characterised by a relatively low potential $-V$, serving as the sources of commuters to the centres.
The flows selected by specific transport methods provide another picture of potential landscapes in London.
The potential for public transportation (Fig. \ref{fig:PotentialLondon}B) is rather similar to that by all methods.
In contrast, the potential for private cars (Fig. \ref{fig:PotentialLondon}C) depicts a \textit{single-centre} city rather than a \textit{multi-centre} one:
a few locations still have higher potential, and the other locations have very low potentials without small peaks.
In addition, the potential amplitude is smaller than in the other cases, reflecting the volume of commuters (public transport = 1.6 million trips, private car = 45.2 thousand trips).
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{tokyo.pdf}
\caption{
\textbf{Time-evolution of potential landscape in Tokyo metropolitan area.}
The potentials are obtained from the home-work trips of successive surveys over 30 years in the Tokyo metropolis and surrounding provinces.
Each zone is basically equal to a municipal district.
}
\label{fig:PotentialTokyo}
\end{figure}
Next, we examined the time evolution of the potential landscape (Fig. \ref{fig:PotentialTokyo}),
using the commuter datasets of successive person-trip surveys from 1988 to 2018 in the Tokyo metropolitan area.
Over 30 years, \textit{Chiyoda} city --- the Imperial Palace and its surrounding areas --- has been at the top of the potential.
The city is known as the economic and political centre in Japan:
it houses the headquarters of major enterprises, government institutions, and the Tokyo Central Railway Station.
Its neighbouring cities, such as \textit{Minato}, \textit{Chuo}, \textit{Shinjuku}, and \textit{Shibuya}, have occupied the top five ranks by potential over the years (Table \ref{tbl:top20_tokyo}), and formed the largest stable peak in the Tokyo metropolitan area.
Several small, steady peaks were observed outside the central area (e.g. \textit{Yokohama}, \textit{Chiba}, \textit{Kawasaki}, and \textit{Atsugi} cities).
In contrast to these steady peaks, new peaks appeared in \textit{Tachikawa} and \textit{Akishima} cities after 1998 and at the \textit{Omiya} ward in \textit{Saitama} city after 2008.
These small peaks correspond to the business cores envisioned by the fourth National Capital Regional Development Plan in 1986 \cite{Itsuki2006}, which aimed at multi-nucleated urban structures to avoid over-concentration in the Tokyo central area.
The potential over 30 years reveals how the urban structure in Tokyo has changed or remained unchanged.
\section*{What percentage of human flow is represented by the potential?}
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{percentage_map.pdf}
\caption{
\textbf{The percentage of gradient component $R^2$ in human flow}.
(\textbf{A}) Percentage $R^2$ in home-work trips for each Core-based statistical area (CBSA) in 2018. The inset shows the CBSAs in Hawaii.
(\textbf{B}) Histogram of percentage $R^2$. CBSAs are classified into metropolitan statistical areas (MSAs) and micropolitan statistical areas ($\mu$SAs).
(\textbf{C}) Percentage $R^2$ is plotted against the population of the CBSAs.
}
\label{fig:Percentage}
\end{figure}
To answer this question, we first define the percentage of the gradient component as
\begin{equation}
R^2 =\frac{\sum_{i,j} \left[ (\text{grad}\, s)(i, j) \right]^2}{\sum_{i,j} A_{ij}^2} = 1 - \frac{ \sum_{i} \sum_{j \neq i} \left( A_{ij} - (s_j - s_i) \right)^2 }{\sum_{i} \sum_{j \neq i} \left( A_{ij} \right)^2 }.
\end{equation}
This quantity is the so-called `coefficient of determination' in statistics, and
is a reasonable choice for evaluating the explanatory power of the potential,
which is determined via orthogonal projection as ordinary least squares.
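The percentage $R^2$ is straightforward to compute from the net flow alone. A self-contained sketch in Python/NumPy with randomly generated data; by orthogonality of the decomposition, the two forms of $R^2$ in the equation above agree:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Random skew-symmetric net-flow matrix standing in for A = M - M^T.
B = rng.integers(0, 100, size=(N, N)).astype(float)
A = B - B.T

# Minimal-norm potential and its gradient flow (grad s)(i, j) = s_j - s_i.
s = -A.sum(axis=1) / N
grad_s = s[None, :] - s[:, None]

# Percentage of the gradient component (coefficient of determination).
R2 = 1.0 - ((A - grad_s) ** 2).sum() / (A ** 2).sum()

# Equivalent form via orthogonality: ||grad s||^2 / ||A||^2.
R2_alt = (grad_s ** 2).sum() / (A ** 2).sum()

print(R2)
```

Because `grad_s` is the orthogonal projection of `A` onto the gradient subspace, the residual `A - grad_s` is the curl component, and `R2` necessarily lies between 0 and 1.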
We evaluated the percentage $R^2$ for the metropolitan areas in the USA (Fig. \ref{fig:Percentage}A),
using the home-work trip datasets for each Core-based statistical area (CBSA) in 2018.
The percentage $R^2$ widely varies among the areas:
the minimal percentage $R^2$ = 17.72\% was in \textit{New York-Newark-Jersey City, NY-NJ-PA}, while the maximum was 99.98\% in \textit{Zapata, TX}.
The distribution has a mean of $\mu$ = 66.2\% and standard deviation $\sigma$ = 15.3\% (Fig. \ref{fig:Percentage}B).
Among CBSAs, metropolitan statistical areas (MSAs) tend to have a lower percentage than micropolitan statistical areas ($\mu$SAs).
The percentage $R^2$ is therefore plotted against the population (Fig. \ref{fig:Percentage}C), showing that it tends to decline for larger populations.
In addition, the percentage $R^2$ was changed by the transport methods in the London case (Table \ref{tbl:LondonPercentage}) and by years in the Tokyo case (Table \ref{tbl:TokyoPercentage}).
\section*{Discussion}
\label{sec:discussion}
In this paper, we have introduced a potential for the OD matrix by Hodge-Kodaira decomposition,
and depicted the potential landscape in cities.
In London, the largest peak of the potential landscape, that is, the most attractive centre of the flow, is located at ``City of London 001''.
The landscape could give a different view of urban structure by the transportation method.
In the Tokyo metropolitan area, the time evolution of the potential over 30 years revealed how Tokyo has changed or remained unchanged in the view of human flow.
We found that the largest peak was stably located in \textit{Chiyoda} city, which is the central area of Tokyo.
Other peaks were seen at suburban business cores,
confirming the development of the multi-nucleated urban structure as envisioned in the national development plan in 1986.
These business cores are also known as ``edge cities'' of Tokyo, which are dynamically organised \cite{Garreau1991, Li2018, fujita1982multiple}.
In fact, it is clearly shown that some cores have emerged during the years as new peaks in the potential landscape.
We first discuss the practical meaning of the potential and how it is related to the spatial concentration of employment or daytime population, which have been used to depict polycentric intra-urban structures \cite{van2016pacifying, barthelemy2016structure}.
Using equations (\ref{eq:netflow}) and (\ref{eq:potential}), the potential is clearly interpreted as the balance between incoming and outgoing flows of people, mathematically given by,
\begin{equation}
s_i = \frac{1}{N} \left( \sum_j^N M_{ji} - \sum_j^N M_{ij} \right).
\end{equation}
The terms represent the incoming and outgoing flow, respectively.
Thus, the potential is the difference between the daytime and night-time populations when the OD matrix describes the home-work trips of all residents.
The daytime population, that is, the population at the destinations of home-work trips, is only the first term of the potential; the night-time population, the population at the origins, is subtracted from it.
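This identity is easy to check numerically: the potential of an OD matrix equals the per-zone balance of incoming and outgoing trips divided by $N$. A sketch with a hypothetical 3-zone home-work OD matrix:

```python
import numpy as np

# Hypothetical home-work OD matrix (rows = origins, columns = destinations).
M = np.array([
    [0, 10, 5],
    [40, 0, 5],
    [30, 10, 0],
], dtype=float)
N = M.shape[0]

# Potential via the decomposition: s = -(1/N) * row sum of (M - M^T).
s_hodge = -(M - M.T).sum(axis=1) / N

# Interpretation: (incoming - outgoing) / N, i.e. proportional to the
# difference between daytime and night-time populations for home-work trips.
incoming = M.sum(axis=0)  # trips arriving at each zone (column sums)
outgoing = M.sum(axis=1)  # trips leaving each zone (row sums)
s_balance = (incoming - outgoing) / N

print(np.allclose(s_hodge, s_balance))  # True
```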
The potential of human flow also incorporates the importance of circular flow in cities.
We evaluated how well the potential describes human flow in metropolitan areas in the US by using the percentage of the gradient component.
We found that the percentage is not always 100\% and not a universal value
but highly variable among the areas.
For several areas, the gradient component is more dominant, with a high percentage $R^2$,
while in other areas, the other component (curl component) is dominant.
This variation reflects the differences in human flow across the areas
and raises new questions of when and why human flow is well described by the potential.
Furthermore, the curl component tends to be dominant for large cities,
indicating the importance of circular flow in net movements of people.
The curl is defined for triplets of locations as described in Methods.
In contrast, human flow has been discussed in terms of paired locations: origin and destination.
The circulation along triangle places addresses a new aspect of human flow with another question: What drives the circular flows in populated areas?
The decomposition method opens up a new research avenue on human mobility and urban structures.
The limitation of the proposed method should be noted.
The potential is based on the rigorous mathematical definition of the OD matrix and
does not require any model assumptions or additional datasets.
Conversely, the analysis in this study does not consider several factors assumed in spatial interaction models, such as the gravity model \cite{gravity_zipf1946,Ullman1956,WilsonBook1974} or radiation model \cite{Simini2012}.
In particular, the distance deterrence on human mobility is not considered.
This could impose limitations on a naive application to a dataset at the country level, where distance critically matters.
Thus, it is appropriate to apply decomposition to the human flow dataset within cities or narrow regions.
Otherwise, a distance-weighted function can be integrated into the decomposition, as described in Supplementary Note 4.
In summary,
the potential landscape by Hodge-Kodaira decomposition provides an intuitive perspective of human flow by its gradient flow from a higher place to a lower place.
The landscape allows us to understand the spatial structure of cities based on human movements rather than administrative circumstances
and to study the dynamic changes in the spatial structure under different conditions.
For example, we can study whether the global increase in remote workers due to the COVID-19 pandemic is alleviating over-concentration of population in city centres by checking the emergence of new potential peaks in suburbs or the decline of preexisting ones.
The method provides an easy-to-use visualisation tool to show the attractive places of human flow and will aid relevant applications in commerce, urban design, and epidemic spreading.
\bibliographystyle{plain}
Elin Rodum Agdestein (born 10 August 1957 in Steinkjer) is a Norwegian politician of the conservative party Høyre. From 2013 to 2021 she was a member of the Storting.
Life
Agdestein studied occupational therapy from 1979 to 1983 at the Norwegian University of Science and Technology (NTNU). She then worked in that profession in various positions and as a self-employed consultant. From 2003 to 2013 she ran a private kindergarten and a primary school that she had founded. From 2003 to 2013 she was a member of the municipal council of Steinkjer, and between 2006 and 2013 she also sat in the Fylkesting of the then province of Nord-Trøndelag.
Agdestein ran for a seat in the Norwegian national parliament, the Storting, in the 2009 parliamentary election, but missed out. In the following election in September 2013 she finally won a mandate in the Nord-Trøndelag constituency. She became a member of the Standing Committee on Foreign Affairs and Defence. After the 2017 election she moved to the Finance Committee. In the 2021 Storting election she missed out on re-entering parliament.
External links
Elin Rodum Agdestein at the Storting (Norwegian, English)
Elin Rodum Agdestein in the Store norske leksikon (Norwegian)
Member of the Storting (Nord-Trøndelag)
Member of the Fylkesting (Nord-Trøndelag)
Høyre member
Norwegian
Born 1957
Woman
ITServe Alliance invites all IT Services & Solutions business owners to the Inaugural Meeting of its 10th chapter in Seattle, WA.
Noah Purcell, Solicitor General, Washington, USA.
ITServe is a non-profit voluntary association of more than 675 IT staffing and consulting firms in the USA that provide services to Fortune 500 companies by hiring workers in the US and keeping jobs here in the US.
Our mission is to serve as the voice of the industry, educate our members on best practices, and protect the U.S. economy by providing cost-effective alternatives with top class local employment compared to outsourcing/offshoring.
Through our membership, we reach out to elected officials, the media, and government agencies to find ways to better address the immigration needs of our member companies and their employees. Over the last seven years, ITServe has worked to bring to light some of the issues consulting firms and their employees face in navigating the current business immigration system. ITServe Alliance is committed to work for the betterment and welfare of IT consulting and services companies.
The current U.S. immigration system is broken according to most experts. ITServe Alliance and its members realize that we can achieve greater success by working together and fighting for better laws and regulations, alongside our employees. If employers and employees both understand the other's perspectives, and work together, there will be greater harmony and success for all concerned in the long term.
ITServe was founded on principles to promote best business practices and is supporting immigration laws that encourage keeping jobs in the USA instead of being outsourced through policy advocacy in support of STEM OPT, H1Bs and Intended Immigrants.
ITServe strives for bringing trust in the consulting business by enabling members to adhere to the highest business ethics and principles.
Limit One per Customer, Three
Mar1 by The Esoteric Sybarite
The problem with ten-dollar words is that they can price themselves right out of the business. Once a fancy showboat like purlieu or miasma or effluvium has come on the scene saying, "Get a load of me," it can be hard for the reader, upon subsequent encounters with those same ringers, not to think, "Didn't I get a load of this already?" No matter how long the book you're reading is, when you hit that second circumambient, you probably still have a fairly distinct recollection of the first. Making a splashy debut is one thing, but holding down steady work is another.
I've been to the well on this subject twice before* and have in those previous bucket-dunkings brought up such shiny attention-getters as unsentient, the sort of word that hardly needs to be used excessively for its use to feel excessive: Absalom, Absalom!'s "unsentient earth," "unsentient barrow," and—no, you are not reading this next one wrong—"unsentient plow handles" together constitute, I would argue, 200% more unsentience than any one novel should rightly contain. (And as to my inconsistent practicing-versus-preaching policy in the area of not repeating oneself, I can only say in my defense that the well in question is pretty darn deep.)
Apotheosis, for example, seems like the kind of vocabulary seasoning that one would want to apply with some economy, but Absalom! is never less than a robustly flavored dish. In one episode, kept women of multiracial heritage are described as "the supreme apotheosis of chatterly" and then, two pages later, as "the apotheosis of two doomed races," while, in another moment, a man's unobtainable self-ideal is characterized as "his own lonely apotheosis," which, four pages after that, gets artily inverted into "the apotheosis lonely." (Elsewhere, for a final pinch of zest, there is "the dream which, conjunctive with the dreamer, becomes immolated and apotheosized." Oh, that dream again.)
Transmogrify is, no bones about it, an awesomely cool word. External to the universe of Calvin and Hobbes, though, its frequency of use within a single work of fiction would likely best be capped at one (and even that might be pushing it). If your book contains an unhappy woman whose married life has left her "transmogrified into a mask looking back with passive and hopeless grief upon the irrevocable world" and a couple of hardy, temperature-be-damned types who face the cold "in deliberate flagellant exaltation of physical misery transmogrified into the spirits' travail," you've got yourself an overegged pudding.
One click adjacent on the transmogrification knob is the setting for metamorphosis—and, fear not, the Absalom! stew will not go undersalted with metamorphoses, as unlikely as it is to see that word so well-represented outside of a lepidoptera textbook. Fitting, then, that it should figure repeatedly in delicately winged metaphors of personal development—"Ellen went through a complete metamorphosis, emerging into her next lustrum with the complete finality of actual rebirth"; "[people] grow from one metamorphosis—dissolution or adultery—to the next…as the butterfly changes once the cocoon is cleared." But it also comes into play in the rather tangled erotic imaginings of one character who not only fantasizes about assuming the form of his sister's fiancé so he can sleep with her ("that complete abnegate transference, metamorphosis into the body which was to become his sister's lover"; "in the person of the brother-in-law, the man whom he would be if he could become, metamorphose into, the lover, the husband"), he also—and, hey, points for empathy—envisions what it would be like to be the female half of that coupling, to receive "the lover, the husband…by whom he would be despoiled, choose for despoiler, if he could become, metamorphose into the sister, the mistress, the bride." So, once your therapist is done interpreting those immolated and apotheosized dreams of yours, see what she makes of that one.
The multi-stage arc of experiencing a word like importunate can be charted with a series of points: Upon one's first brush ("to heat and make importunate the blood of a young man"), one may reach for the dictionary and offer appreciations, Ah, "urgent or persistent"—what lively word selection!; upon the second ("the surprised importunate traitorous flesh"), one may feel a flickering of concern, Ah, but Author, Good Sir, did we not see this rather noteworthy word in just the previous chapter, also in very similar context?; and upon the third ("any hushed wild importunate blood"), one may gently opine, Ah, man…another one? And with "blood" again?!
I don't know if I necessarily noticed the second or even the third occasion of recapitulation as it appeared in Absalom, Absalom!, but by the end, once it had fully transmogrified through the many phases of its recapitulative metamorphosis—"harsh recapitulation," "outraged recapitulation," "patient amazed recapitulation," "vain and empty recapitulation"—its bright colors had definitely caught my eye. At one point, a lawyer, apparently charging by the recapitulation, crafts a letter of introduction between two men—"an introduction (clumsy though it be) to one young gentleman whose position needs neither detailing nor recapitulation in the place where this letter is read, of another young gentleman whose position requires neither detailing nor recapitulation in the place where it was written." (Whatever gave him the idea that this was clumsy?)
Other nominations I would make for the One of Those Should Be More Than Enough, Thank You designation include repercussive ("the fierce repercussive flush of vindicated loyalty," "the tedious repercussive climax"), volte face ("a volte face of character," "one of mankind's natural and violent and inexplicable volte faces"), and lugubrious ("some lugubrious and painless purgatory," "lugubrious and vindictive anticipation," "lugubrious and even formal occasions"—these last two separated only by the space of as many pages). You get all of these together in the same recipe, and, mamma mia, that's a spicy meatball.
*Physician, heal thyself! (Those visits are here and here.)
This entry was posted in Perverse and tagged Absalom Absalom!, apotheosis, apotheosized, Calvin and Hobbes, circumambient, effluvium, importunate, lugubrious, metamorphose, metamorphosis, miasma, purlieu, recapitulation, repercussive, transmogrify, unsentience, unsentient, volte face, William Faulkner.
← Hunting Wabbits
Unabridged Too Far →
Festive Land: Carnaval in Bahia is a 2001 documentary film about the Bahian carnival. The film is part of the Latin American Studies and Cultural Anthropology curricula of several universities in the United States. It features interviews with Gilberto Gil, Daniela Mercury, Vovô do Ilê, Armandinho, Prof. Albergaria and Milton Moura. Carolina Moraes-Liu was responsible for production, direction and editing, and the associate producers were Chung Liu, Delicia Hegwood and Lisa Earl Castillo.
Awards
Silver Award at the WorldFest-Houston Intl. Film Festival
African Studies Assn. honoree
African Literature Assn. honoree
See also
Carnival of Salvador
External links
Festive Land at the Berkeley Media website
Documentary films of Brazil
2001 films of Brazil
Bahia Carnival
Comparing tractor implements was never this easy. KhetiGaadi provides the easiest way to compare tractor implements or attachments online. Compare implements by their Indian price, size, features, specifications, dimensions and many more attributes.
Choose two or more implements to compare them head-to-head. You can compare new implements as well as discontinued implements in India.
# Configure project variables
export GOOGLE_CLOUD_PROJECT=cdpe-functions-billing-test
export GCF_REGION=us-central1
export NODE_ENV=development
# Configure Slack variables
export BOT_ACCESS_TOKEN=$(cat ${KOKORO_GFILE_DIR}/secrets-slack-bot-access-token.txt)
export CHANNEL=$(cat ${KOKORO_GFILE_DIR}/secrets-slack-channel-id.txt)
export BILLING_ACCOUNT=$(cat ${KOKORO_GFILE_DIR}/secrets-billing-account-id.txt)
cd github/nodejs-docs-samples/${PROJECT}
# Install dependencies
npm install
# Configure gcloud
export GOOGLE_APPLICATION_CREDENTIALS=${KOKORO_GFILE_DIR}/secrets-key.json
gcloud auth activate-service-account --key-file "$GOOGLE_APPLICATION_CREDENTIALS"
gcloud config set project $GOOGLE_CLOUD_PROJECT
npm run ${TEST_CMD}
exit $?
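If it helps, a small guard (a hypothetical helper, not part of the original script) can fail fast with a clear message when any of the secrets above failed to load, instead of surfacing a confusing error later in `npm run`:

```shell
# require_env: return non-zero with a message if any named environment
# variable is unset or empty. Call it before the npm/gcloud steps.
require_env() {
  for name in "$@"; do
    if [ -z "$(printenv "$name")" ]; then
      echo "Missing required environment variable: $name" >&2
      return 1
    fi
  done
  return 0
}

# Example:
# require_env GOOGLE_CLOUD_PROJECT BOT_ACCESS_TOKEN CHANNEL BILLING_ACCOUNT || exit 1
```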
Q: JUnit: How to know the line number in a parameter file I have a JUnit method that is responsible for checking errors in a very large file (more than 200k lines).
I would like to know whether there is any variable in which JUnit puts the lines of the file and the line on which it is currently running the test, so that I can use them.
I know that in testCase() there is a private variable that contains the line on which the test is running, but I cannot access it. Any advice?
The code used is like this:
@Test
@FileParameters("fileparameter")
public void testFechaAlteracionExpedienteFS(String line) {
    String TEST = "test";
    assertThat(TEST).overridingErrorMessage("Expected: <%s> - but it was: <%s>", line, TEST, ConstantesSql.getConsulta()).isEqualTo(line);
}
I'm using Maven with JUnit 4+.
A: Why not use the plain Java API?
Documentation: http://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html#lines-java.nio.file.Path-
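For example, a minimal sketch (the `NumberedLines` helper is hypothetical, not part of JUnit or JUnitParams): read the file yourself with `Files.readAllLines` (or stream it with `Files.lines`) and keep the 1-based line number next to each line, so a failing assertion can report exactly where in the file it occurred:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: pair each line with its 1-based line number.
// In a real test, obtain `lines` from java.nio.file.Files.readAllLines(path)
// and use the numbered strings in your assertion messages.
public class NumberedLines {
    public static List<String> withNumbers(List<String> lines) {
        List<String> numbered = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            numbered.add("line " + (i + 1) + ": " + lines.get(i));
        }
        return numbered;
    }
}
```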
A: Use a parameterized test:
import java.io.File;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;
@RunWith(Parameterized.class)
public class ParameterizedTest {

    @Parameter(0)
    public File file;

    @Parameter(1)
    public String line;

    @Parameters(name = "{index}: {0}")
    public static Collection<Object[]> data() {
        return Arrays.asList(
                new Object[][] { { new File("/path/to/file1"), "line1" },
                        { new File("/path/to/file2"), "line2" },
                        { new File("/path/to/file3"), "line3" } });
    }

    @Test
    public void test() {
        // Your test code here (read the file and line fields)
    }
}
\section{\textbf{Introduction}}
\label{Introduction}
Optical coherence tomography (OCT) is a powerful imaging modality used to image biological tissues to obtain structural and molecular information \cite{huang1991optical}. By using low coherence interferometry, OCT can provide high-resolution cross-sectional images from backscattering profiles of biological samples. Over the past two decades, OCT has become a well-established imaging modality and is widely used by ophthalmologists for the diagnosis of retinal and optic nerve diseases. One of the OCT imaging biomarkers for retinal and optic nerve disease diagnosis is the thickness of the retinal layers. Automated OCT image segmentation is therefore necessary to delineate the retinal boundaries.
Since the intensity patterns in OCT images are the result of light absorption and scattering in retinal tissues, OCT images usually contain a significant amount of speckle noise and inhomogeneity, which reduces the image quality and poses challenges to automated segmentation to identify retinal layer boundaries and other specific retinal features. Retinal layer discontinuities due to shadows cast by the retinal blood vessels, irregular retinal structures caused by pathologies, motion artefacts and sub-optimal imaging conditions also complicate the OCT images and cause inaccuracy or failure of automated segmentation algorithms.
Over the past two decades a number of automatic and semi-automatic OCT segmentation approaches have been proposed. These approaches can be roughly categorised into three families: A-scan based methods, B-scan based methods and volume based methods, as illustrated in Figure~\ref{fig:ABV}. A-scan based methods \cite{hee1995optical,koozekanani2001retinal,ishikawa2002detecting,ishikawa2005macular,shahidi2005quantitative,fernandez2005automated,mayer2008automatic} detect intensity peak or valley points on the boundaries in each A-scan profile and then form a smooth and continuous boundary by connecting the detected points using model fitting techniques. These methods can be inefficient and lack accuracy. Common approaches for segmenting two-dimensional (2D) B-scans include active contour methods \cite{fernandez2005delineating,mujat2005retinal,mishra2009intra,yazdanpanah2009intra,ghorbel2011automated,rossant2015parallel}, shortest-path based graph search \cite{chiu2010automatic,yang2010automated} and statistical shape models \cite{kajic2010robust,kajic2012automated,pilch2012automated} (i.e. active shape and appearance models \cite{cootes1995active, cootes2001active}). B-scan methods outperform A-scan methods in general. However, they are prone to the intrinsic speckle noise in OCT images and more likely to fail in detecting pathological retinal structures. Three-dimensional (3D) scanning of the retina is now widely used in commercial OCT devices. Existing volume based segmentation methods mainly use 3D graph based methods \cite{haeker2006segmentation,garvin2008intraretinal,garvin2009automated,quellec2010three,antony2012incorporation,dufour2013graph,kafieh2013intra,tian2015real,tian2016performance} and pattern recognition \cite{vermeer2010automated,vermeer2011automated,fuller2007segmentation,szkulmowski2007analysis,lang2013retinal}.
Benefiting from contextual information represented in the analysis graph, graph based methods provide optimal solutions and are ideal for volumetric data processing. However, the computation can be very complex and slow. Pattern recognition methods normally require training data manually segmented by experts in order to learn a feasible model for classification. These approaches also suffer in accuracy and efficiency. Segmentation of retinal layers in OCT images therefore remains a challenging problem.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.65\textwidth]{Fig/VBA}}
\vspace{-10pt}
\caption{An en-face fundus image (left) with lines overlaid representing the locations of each B-scan within volumetric OCT data. The red line corresponds to the B-scan in the image (top right). One vertical A-scan of the B-scan is shown in the plot (bottom right). The fovea region is characterised by a depression in the centre of the retinal surface.}
\label{fig:ABV}
\vspace{-10pt}
\end{figure}
In this paper, we propose an algorithm for retinal layer segmentation based on a novel geodesic distance weighted by an exponential function. As opposed to using a single horizontal gradient as in other works \cite{chiu2010automatic,tian2015real,tian2016performance}, the exponential function employed in our method integrates both horizontal and vertical gradient information and can thus account for variations in the both directions. The function plays the role of enhancing the foveal depression regions and highlighting weak and low contrast boundaries. As a result, the proposed geodesic distance method (GDM) is able to segment complex retinal structures with large curvatures and other irregularities caused by pathologies. We compute the weighted geodesic distance via an Eikonal equation using the fast sweeping method \cite{zhao2005fast,tsai2003fast,duan2015surface}. A retinal layer boundary can then be detected based on the calculated geodesic distance by solving an ordinary differential equation via a time-dependent gradient descent equation. A local search region is identified based on the detected boundary to delineate all the nine retinal layer boundaries and overcome the local minima problem of the GDM. We evaluate the proposed GDM through extensive numerical experiments and compare it with state-of-the-art OCT segmentation approaches on both healthy and pathological images.
In the following sections, we shall first review the state-of-the-art methods that are to be compared with the proposed GDM, such as parallel double snakes \cite{rossant2015parallel}, Chiu's graph search \cite{chiu2010automatic}, Dufour's method \cite{dufour2013graph}, and OCTRIMA3D \cite{tian2015real,tian2016performance}. This will be followed by the details of the proposed GDM, ground-truth validation, numerical experimental results, and comparison of the GDM with the state-of-the-art methods.
\section{\textbf{Literature Review}}
\label{reviewedMethod}
In this section, we will provide an overview of the state-of-the-art methods (i.e. parallel double snakes \cite{rossant2015parallel}, Chiu's method \cite{chiu2010automatic}, OCTRIMA3D \cite{tian2015real,tian2016performance}, Dufour's method \cite{dufour2013graph}) that will be compared with our proposed GDM in Section \ref{GDM}. For a complete review on the subject, we refer the reader to \cite{debuc2011review}. Among the four methods reviewed, the first two can only segment B-scans, while the latter two are able to extract retinal surfaces from volumetric OCT data. We note that the term `surface' refers to a set of voxels that fall on the interface between two adjacent retinal layer structures. The retinal layer boundaries to be delineated are shown in Figure~\ref{fig:OCTBoundary}.
\textbf{Parallel double snakes (PDS)}: Rossant et al. \cite{rossant2015parallel} detected the pathological (retinitis pigmentosa) cellular boundaries in B-scan images by minimising an energy functional that includes two parallel active parametric contours. Their proposed PDS model consists of a centreline $C(s)=(x(s),y(s))$ parametrised by $s$ and two parallel curves $C_1(s)=C(s)+b(s)n(s)$ and $C_2(s)=C(s)-b(s)n(s)$ with $b(s)$ being a spatially varying half-thickness and $n(s)=(n_x(s),n_y(s))$ the normal vector to the centreline $C(s)$. Specifically, their PDS model is defined as
\begin{equation}
E(C,{C_1},{C_2},b) = {E_{Image}}({C_1}) + {E_{Image}}({C_2}) + {E_{Int}}(C) + R\left( {{C_1},{C_2},b} \right)
\label{eq:PDS}
\end{equation}
where the image energy ${E_{Image}}({C_1}) =- \int_0^1 {{{\left| {\nabla I({C_1})} \right|}^2}ds}$ ($\nabla$ is the image gradient operator) attracts the parametric curve $C_1$ towards one of retinal borders of the input B-scan $I$, whilst ${E_{Image}}({C_2})$ handles curve $C_2$ which is parallel to $C_1$. The internal energy ${E_{Int}}(C)=\frac{\alpha }{2}\int_0^1 {{{\left| {{C_s}\left( s \right)} \right|}^2}ds} + \frac{\beta }{2}\int_0^1 {{{\left| {{C_{ss}}\left( s \right)} \right|}^2}ds}$ imposes both first and second order smooth regularities on the central curve $C$, with $\alpha$ and $\beta$ respectively controlling the tension and rigidity of this curve. $R\left( {{C_1},{C_2},b} \right)=\frac{\varphi }{2}\int_0^1 {{{\left| {b'\left( C \right)} \right|}^2}ds}$ is a parallelism constraint imposed on $C_1$ and $C_2$. Nine retinal borders have been delineated by the method, i.e., ILM, RNFL$_o$, IPL-INL, INL-OPL, OPL-ONL, ONL-IS, IS-OS, OS-RPE and RPE-CH.
\begin{figure}[h!]
\centering
{\includegraphics[height=0.23\textwidth]{Fig/line}}
\vspace{-5pt}
\caption{An example cross-sectional B-Scan OCT image centred at the macula, showing nine target intra-retinal layer boundaries detected by the proposed method. The names of these boundaries labelled as notations $B_1$,$B_2$...$B_9$ are summarised in Table~\ref{tb:OCTBoundary}. Knowledge of these layer boundaries allows us to calculate the retinal layer thickness, which is imperative for detecting and monitoring ocular diseases.}
\label{fig:OCTBoundary}
\end{figure}
\newcolumntype{C}{>{\centering\arraybackslash}p{7em}}
\begin{table}[ht]
\centering
\caption{Notations for nine retinal boundaries/surfaces, their corresponding names and abbreviations}
\vspace{-10pt}
\begin{tabular}{lcc}
\toprule
Notation & Name of retinal boundary/surface & Abbreviation\\
\midrule
\rowcolor{Gray}
$B_1$& internal limiting membrane & ILM \\
\rowcolor[RGB]{195, 195, 195}
$B_2$& outer boundary of the retinal nerve fibre layer & RNFL$_o$\\
\rowcolor{Gray}
$B_3$& inner plexiform layer-inner nuclear layer & IPL-INL \\
\rowcolor[RGB]{195, 195, 195}
$B_4$& inner nuclear layer-outer plexiform layer & INL-OPL \\
\rowcolor{Gray}
$B_5$& outer plexiform layer-outer nuclear layer & OPL-ONL \\
\rowcolor[RGB]{195, 195, 195}
$B_6$& outer nuclear layer-inner segments of photoreceptors & ONL-IS\\
\rowcolor{Gray}
$B_7$& inner segments of photoreceptors-outer segments of photoreceptors &IS-OS \\
\rowcolor[RGB]{195, 195, 195}
$B_8$& outer segments of of photoreceptors-retinal pigment epithelium & OS-RPE\\
\rowcolor{Gray}
$B_9$& retinal pigment epithelium-choroid & RPE-CH\\
\bottomrule
\end{tabular}
\label{tb:OCTBoundary}
\end{table}
\textbf{Chiu's method}: Chiu et al. \cite{chiu2010automatic} modelled the boundary detection problem in an OCT retinal B-scan as determining the shortest path that connects two endpoints in a graph $G=(V,E)$, where $V$ is a set of nodes and $E$ is a set of undirected weights assigned to each pair of two nodes in the graph. Each node in $V$ corresponds to a pixel in the B-scan image, whilst each weight in $E$ is calculated from the intensity gradient of the image in the vertical direction. Each node is connected with its eight nearest neighbours and all other node pairs are disconnected, resulting in a sparse adjacency matrix of graph weights of vertical intensity variation. For example, an $M \times N$ sized image has an $MN \times MN$ sized adjacency matrix with $8MN$ non-zero filled entries. Mathematically, the weights between two nodes used in their method are calculated based on the pure vertical gradient value, defined as
\begin{equation}
w\left( {a,b} \right) = \left\{ \begin{array}{ll}
2 - \left( {g_a + g_b} \right) + {w_{\min }}&\text{if}\;\left| {a - b} \right| \le \sqrt 2 \\
0&{\text{otherwise}}
\end{array} \right.
\label{eq:chiu}
\end{equation}
where $g$ is the vertical gradient of a B-scan image; $a$ and $b$ denote respectively two separate nodes in $V$ and $w_{min}$ is a small positive value added to stabilise the system. The most prominent boundary is then detected as the minimum weighted path from the first to the last vertex in $V$ using Dijkstra's algorithm. A region refinement technique similar to that in Section \ref{Detect9border} was used to detect seven retinal boundaries, i.e., ILM, RNFL$_o$, IPL-INL, INL-OPL, OPL-ONL, IS-OS and RPE-CH.
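To make the construction concrete, the following sketch (our illustration, not Chiu et al.'s implementation) runs Dijkstra's algorithm over the 8-connected pixel graph with the weights of (\ref{eq:chiu}); the free start/end columns are emulated by letting the path start at any left-border pixel and terminate at the first right-border pixel reached:

```python
import heapq

def chiu_shortest_path(g, w_min=1e-5):
    """Dijkstra over the 8-connected pixel graph with the weights of Eq. (2).

    `g` is the normalised vertical dark-to-bright gradient (values in [0, 1]),
    one entry per pixel. Returns the minimum-weight path from the left image
    border to the right image border as a list of (row, col) pixels.
    """
    rows, cols = len(g), len(g[0])
    dist = {(r, 0): 0.0 for r in range(rows)}   # free start anywhere on the left
    prev = {}
    heap = [(0.0, (r, 0)) for r in range(rows)]
    heapq.heapify(heap)
    done = set()
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in done:
            continue
        done.add((r, c))
        if c == cols - 1:                        # first right-border pixel reached
            path = [(r, c)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                w = 2.0 - (g[r][c] + g[nr][nc]) + w_min   # Eq. (2)
                nd = d + w
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return []
```

On a gradient map with one bright row, the returned path follows that row, since edges along it have weight close to $w_{min}$ while all other edges cost at least one.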
\textbf{Dufour's method}: Dufour et al. \cite{dufour2013graph} proposed a modification of the optimal graph search approach \cite{song2010simultaneous} to segment retinal surfaces in OCT volume data. By using soft constraints and adding prior knowledge learned from a model, they improve the accuracy and robustness of the original framework. Specifically, their Markov random field based model is given by
\begin{equation} \nonumber
E\left( S \right) = \sum\limits_{i = 1}^n {\left( {{E_{boundary}}\left( {{S_i}} \right) + {E_{smooth}}\left( {{S_i}} \right)} \right)} + \sum\limits_{i = 1}^{n - 1} {\sum\limits_{j = i + 1}^n {{E_{inter}}\left( {{S_i},{S_j}} \right)} }
\end{equation}
where $S$ is a set of surfaces $S_1$ to $S_n$. The external boundary energy ${{E_{boundary}}\left( {{S_i}} \right)}$ is computed from the input 3D image data. The surface smoothness energy ${{E_{smooth}}\left( {{S_i}} \right)}$ guarantees the connectivity of a surface in 3D and regularises the surface. The interaction energy ${E_{inter}}\left( {{S_i},{S_j}} \right)$ integrates soft constraints that can regularise the distances between two simultaneously segmented surfaces. The model is built from training datasets consisting of fovea-centred OCT slice stacks. Their algorithm is capable of segmenting six retinal surfaces ($n=6$ in the above formulation) in both healthy and macular edema subjects, i.e., ILM, RNFL$_o$, IPL-INL, OPL-ONL, IS-OS and RPE-CH.
\textbf{OCTRIMA3D}: Tian et al. \cite{tian2015real,tian2016performance} proposed a real-time automatic segmentation of OCT volume data. The segmentation is done frame-by-frame on 2D B-scans while taking into account the spatial dependency between adjacent frames. Their work is based on Chiu's graph search framework \cite{chiu2010automatic} for B-scan OCT images. In addition to Chiu's work, they introduce inter-frame flattening to reduce the curvature in the fovea region, which improves the accuracy of their algorithm. Moreover, they apply inter-frame or intra-frame information to limit the search region in the current or adjacent frame, which increases the computational speed of their algorithm. Furthermore, biasing and masking techniques are developed to better attain retinal boundaries within the same search region. A total of eight retinal surfaces, i.e., ILM, RNFL$_o$, IPL-INL, INL-OPL, OPL-ONL, IS-OS, OS-RPE and RPE-CH, can be delineated by the method. To sum up, Table~\ref{tb:Checkmark} reports the retinal boundaries/surfaces segmented by the four methods as well as our GDM proposed in the next section.
\newcolumntype{C}{>{\centering\arraybackslash}p{7em}}
\newcolumntype{g}{>{\columncolor[RGB]{255, 255, 255}}C}
\newcolumntype{k}{>{\columncolor[RGB]{255, 255, 255}}C}
\begin{table*}[htbp]
\caption{Target boundaries/surfaces of the five methods compared in this paper (check mark means the boundary/surface can be segmented, while cross mark means the boundary/surface cannot be segmented).}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lgkCgkCgkC}
\toprule
Method &ILM ($B_1$) &RNFL$_o$ ($B_2$) &IPL-INL ($B_3$)
&INL-OPL ($B_4$) &OPL-ONL ($B_5$) &ONL-IS ($B_6$)
&IS-OS ($B_7$) &OS-RPE ($B_8$) &RPE-CH ($B_9$) \\
\midrule
\rowcolor{Gray}
PDS \cite{rossant2015parallel} &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark \\
\rowcolor[RGB]{195, 195, 195}
Chiu's method \cite{chiu2010automatic} &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &$\times$ &\checkmark &$\times$ &\checkmark \\
\rowcolor{Gray}
Dufour's method \cite{dufour2013graph} &\checkmark &\checkmark &\checkmark &$\times$ &\checkmark &$\times$ &\checkmark &$\times$ &\checkmark \\
\rowcolor[RGB]{195, 195, 195}
OCTRIMA3D \cite{tian2015real,tian2016performance} &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &$\times$ &\checkmark &\checkmark &\checkmark \\
\rowcolor{Gray}
GDM &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark \\
\bottomrule
\end{tabular}
}
\label{tb:Checkmark}
\end{table*}
\section{\textbf{The Proposed Geodesic Distance Method (GDM)}}
\label{GDM}
In this section, we propose a novel framework using the geodesic distance to detect from OCT images the nine retinal layer boundaries defined in Figure~\ref{fig:OCTBoundary} and Table~\ref{tb:OCTBoundary}. As the proposed methodology applies equally to 2D and 3D segmentation, we illustrate the approach in 2D; the steps carry over directly to 3D. The numerical implementation of the approach is given in the Appendix.
\subsection{Geodesic distance}
We use the geodesic distance to identify the pixels on the boundaries of retinal layers in OCT images. The geodesic distance $D$ is the smallest integral of $W^{-1}$, for a weight function $W$, over all possible paths between two endpoints (i.e. $s_1$ and $s_2$). The weight function determines how the path goes from $s_1$ to $s_2$: since the integrand is $W^{-1}$, the path preferentially passes through points with large weight. Specifically, the weighted geodesic distance between two pixels/endpoints $s_1$ and $s_2$ is given as
\begin{equation} \label{eq:GeodesicEq}
D\left( {{s_1},{s_2}} \right) = {\min _C}\int_0^1 {W^{-1}\left( {C\left( s \right)} \right)ds}
\end{equation}
where the minimisation is over all paths $C\left( s \right)$ that link $s_1$ to $s_2$; the path length is normalised so that the start and end locations are $C(0)=s_1$ and $C(1)=s_2$, respectively. The infinitesimal contour length $ds$ is weighted by $W^{-1}\left( {C\left( s \right)} \right)$, where $W$ is a non-negative function. This minimisation problem can be interpreted as finding a geodesic curve (i.e. a path with the smallest weighted length) in a Riemannian space. In geometrical optics, it has been proven that the solution of (\ref{eq:GeodesicEq}) satisfies the Eikonal equation (\ref{eq:eikonalEq}).
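As an illustration of how such a distance map can be computed, the sketch below implements a basic fast sweeping solver for the isotropic Eikonal equation $|\nabla u| = f$ with slowness $f = W^{-1}$; the grid spacing and the fixed sweep count are simplifying assumptions, and the paper's actual solver may differ in detail:

```python
import math

def fast_sweep_eikonal(f, sources, h=1.0, n_sweeps=8):
    """Basic fast sweeping solver for |grad u| = f on a 2D grid.

    `f` is a 2D list of positive slowness values (here f = 1/W), `sources`
    a list of (row, col) seeds where u = 0. Returns the distance map u.
    """
    rows, cols = len(f), len(f[0])
    INF = float("inf")
    u = [[INF] * cols for _ in range(rows)]
    for r, c in sources:
        u[r][c] = 0.0
    # Four alternating sweep orderings (Gauss-Seidel style updates).
    orders = [(range(rows), range(cols)),
              (range(rows), range(cols - 1, -1, -1)),
              (range(rows - 1, -1, -1), range(cols)),
              (range(rows - 1, -1, -1), range(cols - 1, -1, -1))]
    for _ in range(n_sweeps):
        for row_order, col_order in orders:
            for i in row_order:
                for j in col_order:
                    a = min(u[i - 1][j] if i > 0 else INF,
                            u[i + 1][j] if i < rows - 1 else INF)
                    b = min(u[i][j - 1] if j > 0 else INF,
                            u[i][j + 1] if j < cols - 1 else INF)
                    if math.isinf(a) and math.isinf(b):
                        continue
                    fh = f[i][j] * h
                    if abs(a - b) >= fh:    # one-sided (upwind) update
                        cand = min(a, b) + fh
                    else:                   # two-sided (quadratic) update
                        cand = 0.5 * (a + b + math.sqrt(2.0 * fh * fh - (a - b) ** 2))
                    if cand < u[i][j]:
                        u[i][j] = cand
    return u
```

With uniform slowness $f \equiv 1$ and a single seed, $u$ approximates the Euclidean distance to the seed, exactly along the grid axes and with a first-order error along diagonals.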
The retinal layers of OCT images are normally near horizontal. The gradient in the vertical direction can thus be considered a good candidate for computing the weight $W$ in (\ref{eq:GeodesicEq}). For instance, each of the two prominent boundaries, e.g. ILM ($B_1$) and IS-OS ($B_7$) in Figure~\ref{fig:gradinetMaps} (a) and (e), is at the border of a dark layer above a bright layer. As a result, pixels in the region around the two boundaries will have high gradient values, as shown in Figure~\ref{fig:gradinetMaps} (b) and (f). As the retinal layers at each side of a boundary are either transitioning from dark to bright or from bright to dark, the non-negative weight function $W$ in this paper is defined based on intensity variation as follows
\begin{equation} \label{eq:weights}
W\left( x \right) = \left\{ \begin{array}{ll}
1 - exp\left(-\lambda \left( {1 - n\left( {{\nabla _x}I} \right)} \right)n\left( {\left| {{\nabla _y}I} \right|} \right)\right)&\text{dark-to-bright}\\
exp\left(-\lambda\left( {1 - n\left( {{\nabla _x}I} \right)} \right)n\left( {\left| {{\nabla _y}I} \right|} \right)\right)& \text{bright-to-dark}
\end{array} \right.
\end{equation}
where $I$ is an input OCT image; $n\left( \cdot \right)$ is a linear stretch operator used to normalise values to between 0 and 1; $exp$ is the exponential function and $\lambda$ is a user-defined parameter; together they enhance the foveal depression regions and highlight the weak retinal boundaries \cite{duan2016edgeweighted}; and $\nabla_x$ and $\nabla_y$ are the first-order gradient operators along the x (vertical) and y (horizontal) directions, respectively. The two gradient operators are discretised using a central finite difference scheme under the Neumann boundary condition. (\ref{eq:weights}) also includes the positive horizontal gradient information $n\left( {\left| {{\nabla _y}I} \right|} \right)$, without which only the vertical direction is accounted for and the method would be applicable only to flat retinal boundaries. Consequently, the proposed method is robust against curved features (e.g. the central region of the fovea) as well as other irregularities (e.g. bumps or large variations of boundary locations) caused by pathologies. In other words, the proposed method with the weight $W$ defined in (\ref{eq:weights}) can deal with both normal and pathological images, as illustrated in Figure~\ref{fig:gradinetMaps} as well as in the experimental section.
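A minimal NumPy sketch of the weight in (\ref{eq:weights}); the function names are ours, and np.gradient is used as a stand-in for the central finite difference scheme described above:

```python
import numpy as np

def stretch(a):
    """Linear stretch to [0, 1]: the n(.) operator of Eq. (4)."""
    a = a.astype(float)
    rng = a.max() - a.min()
    return (a - a.min()) / rng if rng > 0 else np.zeros_like(a, dtype=float)

def weight_map(I, lam=1.0, dark_to_bright=True):
    """Weight W of Eq. (4); axis 0 (rows) is the vertical x direction."""
    I = I.astype(float)
    gx = np.gradient(I, axis=0)                 # vertical gradient
    gy = np.gradient(I, axis=1)                 # horizontal gradient
    core = np.exp(-lam * (1.0 - stretch(gx)) * stretch(np.abs(gy)))
    return 1.0 - core if dark_to_bright else core
```

By construction the two variants are complementary: for the same $\lambda$, the dark-to-bright and bright-to-dark maps sum to one at every pixel.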
\begin{figure}[h!]
\centering
{\includegraphics[width=1\textwidth]{Fig/NP1}}\\
\vspace{-5pt}
\caption{Illustrating the effectiveness of the weight $W$ defined in (\ref{eq:weights}). (a) and (e): a normal B-scan and a pathological B-scan from an eye with dry age-related macular degeneration (dry-AMD); (b) and (f): vertical dark-to-bright gradient maps of (a) and (e), respectively; (c) and (g): dark-to-bright gradient maps calculated using equation (\ref{eq:weights}) with $\lambda=1$. Note that the gradient values have been enhanced in the regions with strong curvature and large bumps; (d) and (h): boundary detection results via the method described in Section \ref{EikonalEq} using different gradient maps. Yellow lines are computed using (b) and (f), whilst red lines use (c) and (g). }
\label{fig:gradinetMaps}
\end{figure}
\subsection{Selection of endpoints $s_1$ and $s_2$}
For fully automated segmentation, it is essential to initialise the two endpoints $s_1$ and $s_2$ automatically. Since the retinal boundaries in the OCT images used in this paper run across the entire width of the image, we append an additional column on each side of the gradient map computed from (\ref{eq:weights}). As the minimal weighted path is sought, a weight $W_{max}$ larger than any of the non-negative weights calculated from (\ref{eq:weights}) is assigned to each of the newly added vertical columns (note that we use $W^{-1}$ in the geodesic distance (\ref{eq:GeodesicEq}), so the minimal weighted path prefers large weights). This allows the path to run along the newly added columns at negligible cost, so the start and end points can be assigned arbitrarily within the two columns. Once the retinal layer boundary is detected, the two additional columns can be removed. Figure~\ref{fig:endPoints} shows two examples of endpoint initialisation.
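The column-padding trick can be sketched as follows. This is an illustrative helper (not part of the authors' code); \texttt{w\_max} is simply any value strictly larger than the existing weights.

```python
import numpy as np

def pad_weight_map(W):
    """Append one column of maximal weight on each side of W.

    The minimal weighted path prefers large W (we minimise along W^{-1}),
    so any start/end point placed in the padded columns yields the same
    interior boundary; the columns are removed after detection.
    """
    w_max = W.max() + 1.0      # strictly larger than every existing weight
    return np.pad(W, ((0, 0), (1, 1)), mode="constant",
                  constant_values=w_max)
```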
\begin{figure}[h!]
\centering
{\includegraphics[width=0.8\textwidth]{Fig/endPoints}}\\
\vspace{-5pt}
\caption{Two segmentation examples using different automatic endpoint initialisations on a dark-to-bright gradient map. $s_1$ and $s_2$ are the start and end points, respectively.}
\label{fig:endPoints}
\end{figure}
\subsection{\textbf{Eikonal equation and minimal weighted path}}
\label{EikonalEq}
The solution of (\ref{eq:GeodesicEq}) can be obtained by solving the Eikonal equation after the endpoints are determined. Specifically, over a continuous domain, the distance map $D(x)$ to the seed start point $s_1$ is the unique solution of the following Eikonal equation in the viscosity sense
\begin{equation}\label{eq:eikonalEq}
\left| {\nabla D\left( x \right)} \right| = W^{-1}\left( x \right),\;\;\forall x \notin s_1
\end{equation}
with $D\left( s_1 \right) = 0$. The equation is a first-order partial differential equation and its solution can be found via the classical fast marching algorithm \cite{sethian1996fast,sethian1999level} using an upwind finite difference approximation with computational complexity $O(MN\log(MN))$, where $MN$ is the total number of grid points. More recently, the fast sweeping algorithm \cite{zhao2005fast,tsai2003fast} has been proposed. This technique is based on a pre-defined sweep strategy, replacing the heap priority queue used to find the next point to process, and thereby has linear complexity $O(MN)$. In this paper, we apply the fast sweeping algorithm to (\ref{eq:eikonalEq}); its detailed 3D implementation is given in the Appendix. Figure~\ref{fig:distMap} shows two distance maps calculated using the dark-to-bright weight defined in (\ref{eq:weights}) and the two different start points shown in the examples in Figure~\ref{fig:endPoints}.
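A minimal 2D sketch of the fast sweeping solver for (\ref{eq:eikonalEq}) might look as follows. Pure-Python loops are used for clarity, unit grid spacing is assumed, and the sweep count is fixed at four passes; a production implementation (such as the 3D version in the Appendix) would be vectorised or compiled.

```python
import numpy as np

def fast_sweep(W, seed, n_sweeps=4):
    """Solve |grad D| = 1/W with D(seed) = 0 by fast sweeping on a unit grid."""
    f = 1.0 / W                               # right-hand side W^{-1}
    M, N = W.shape
    D = np.full((M, N), np.inf)
    D[seed] = 0.0
    # Four alternating sweep orderings cover all characteristic directions.
    orders = [(range(M), range(N)),
              (range(M - 1, -1, -1), range(N)),
              (range(M), range(N - 1, -1, -1)),
              (range(M - 1, -1, -1), range(N - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == seed:
                        continue
                    a = min(D[i - 1, j] if i > 0 else np.inf,
                            D[i + 1, j] if i < M - 1 else np.inf)
                    b = min(D[i, j - 1] if j > 0 else np.inf,
                            D[i, j + 1] if j < N - 1 else np.inf)
                    if min(a, b) == np.inf:   # no finite upwind neighbour yet
                        continue
                    if abs(a - b) >= f[i, j]:     # one-sided upwind update
                        d = min(a, b) + f[i, j]
                    else:                          # two-sided quadratic update
                        d = 0.5 * (a + b + np.sqrt(2.0 * f[i, j] ** 2
                                                   - (a - b) ** 2))
                    D[i, j] = min(D[i, j], d)
    return D
```

With $W \equiv 1$ the solver reproduces the Euclidean-like viscosity solution of $|\nabla D| = 1$: axis-aligned neighbours of the seed get distance 1, the diagonal neighbour gets $1 + \sqrt{2}/2$.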
Once the geodesic distance map to the start point $s_1$ has been computed, the minimal weighted path (geodesic curve) between point $s_1$ and $s_2$ can be extracted from the following ordinary differential equation through the time-dependent gradient descent
\begin{equation}
\gamma '\left( t \right) = - {\eta _t}\nabla D\left( {\gamma \left( t \right)} \right),\;\; \gamma \left( 0 \right) = {s_2}
\end{equation}
where $\eta _t>0$ controls the parametrisation speed of the resulting curve. To obtain a unit speed parametrisation, we use ${\eta _t} = \left| {\nabla D\left( {\gamma \left( t \right)} \right)} \right|_\varepsilon ^{ - 1}$. Since the distance map $D$ is nonsmooth at the point $s_1$, a small positive constant $\varepsilon$ is added to avoid division by zero. Note that the point $s_1$ is guaranteed to be found by this ordinary differential equation because the distance field increases monotonically from $s_1$ to $s_2$, as can be observed in Figure~\ref{fig:distMap}. This technique can achieve sub-pixel accuracy for the geodesic path even though the grid is discrete.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.8\textwidth]{Fig/distMap2}}\\
\vspace{-5pt}
\caption{Two distance maps calculated using the dark-to-bright weight $W^{-1}$ and the two different start points in the two examples in Figure~\ref{fig:endPoints}, respectively. The distance values are rescaled to $[0,800]$ for better visualisation.}
\label{fig:distMap}
\end{figure}
The geodesic curve is then numerically computed using a discretised gradient descent, which defines a discrete curve $\gamma^{k}$ using
\begin{equation} \label{eq:GradientDescentFlow}
{\gamma ^{k + 1}} = {\gamma ^k} - \tau G\left( {{\gamma ^k}} \right)
\end{equation}
where $\gamma^{k}$ is a discrete approximation of $\gamma(t)$ at time $t=k\tau$, and the time step size $\tau>0$ should be sufficiently small. $G\left( x \right) = {{\nabla D\left( {\gamma \left( t \right)} \right)}}/{{\left| {\nabla D\left( {\gamma \left( t \right)} \right)} \right|_\varepsilon }}$ is the normalised gradient parametrised by the arc length. Once $\gamma^{k+1}$ reaches $s_1$, one of the retinal boundaries has been found. The following \hyperlink{alogrithm1}{\textbf{Algorithm 1}} summarises the proposed geodesic distance algorithm for extracting one retinal boundary in OCT images.
\hypertarget{alogrithm1}{}
\begin{table}[h!]
\centering
\begin{tabular}{p{11cm}}
\toprule
{\textbf{Algorithm 1}}: the proposed GDM for one retinal boundary detection\\
\midrule
\rowcolor{Gray}
1: Input OCT data $I$ (i.e. B-scan or volume)\\
\rowcolor[RGB]{195, 195, 195}
2: calculate dark-to-bright or bright-to-dark weight $W$ using (\ref{eq:weights})\\
\rowcolor{Gray}
3: pad two new columns onto the weight map and assign large values to them\\
\rowcolor[RGB]{195, 195, 195}
4: select two endpoints $s_1$ and $s_2$ on the two newly padded columns\\
\rowcolor{Gray}
5: calculate distance map $D$ in (\ref{eq:eikonalEq}) using fast sweeping algorithm\\
\rowcolor[RGB]{195, 195, 195}
6: find one retinal layer boundary $\gamma$ using the gradient descent flow (\ref{eq:GradientDescentFlow})\\
\rowcolor{Gray}
7: remove the additional columns in the edge detection result\\
\bottomrule
\end{tabular}
\end{table}
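Step 6 of Algorithm 1, the gradient descent backtracking of (\ref{eq:GradientDescentFlow}), can be sketched as follows. This is an illustrative fragment, not the authors' implementation: it uses nearest-grid-point gradients for brevity, whereas the version in the paper interpolates $\nabla D$ to achieve sub-pixel accuracy.

```python
import numpy as np

def trace_geodesic(D, s1, s2, tau=0.8, eps=1e-8, max_steps=100000):
    """Descend the distance map D from the end point s2 back to the seed s1."""
    gx, gy = np.gradient(D)                 # derivatives along rows / columns
    p = np.array(s2, dtype=float)
    path = [p.copy()]
    for _ in range(max_steps):
        # Nearest grid point (clipped to the image domain) for gradient lookup.
        i, j = np.clip(np.round(p).astype(int), 0, np.array(D.shape) - 1)
        g = np.array([gx[i, j], gy[i, j]])
        p = p - tau * g / (np.linalg.norm(g) + eps)   # unit-speed descent step
        path.append(p.copy())
        if np.linalg.norm(p - np.asarray(s1)) < 1.0:  # close enough to seed
            break
    return np.array(path)
```

Because $D$ increases monotonically away from $s_1$, the descent cannot stall before reaching a neighbourhood of the seed.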
\subsection{\textbf{Detection of nine retinal layer boundaries}}
\label{Detect9border}
We have described how the proposed geodesic distance algorithm (\ref{eq:GeodesicEq}) finds the minimal weighted path across the whole width of the OCT image for one retinal layer boundary. In this section, we describe the implementation details of the proposed approach for delineating the nine retinal layer boundaries shown in Figure~\ref{fig:OCTBoundary} and Table~\ref{tb:OCTBoundary}. Since the proposed model (\ref{eq:GeodesicEq}) is not convex, its solution can easily get stuck in local optima. For example, Figure~\ref{fig:gradinetMaps} (c) and (g) have high gradient values in the regions around both the ILM and IS-OS boundaries; however, in Figure~\ref{fig:gradinetMaps} (d) the algorithm detected the ILM boundary while in Figure~\ref{fig:gradinetMaps} (h) it detected the IS-OS. In order to eliminate such uncertainty, we dynamically define the search region based on previously detected boundaries. The following sections describe the proposed method in detail.
\subsubsection{\textbf{Detection of the IS-OS boundary}}
The intensity variation between the two layers divided by the IS-OS ($B_7$) border is normally the most prominent in OCT B-scans. However, because OCT images are corrupted by speckle noise arising from light absorption and scattering in the retinal tissue, this is not always the case. For example, the intensity variation around the ILM ($B_1$) border can sometimes be more obvious than that around the IS-OS, as shown in the gradient image in Figure~\ref{fig:gradinetMaps} (c). To ensure correct segmentation of the IS-OS boundary, we first enhance the IS-OS via a simple local adaptive thresholding approach\footnote{{http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm}}, which is given as follows
\begin{equation} \label{eq:threshold}
p = \left\{ \begin{array}{ll}
0 &ls\left( {I,ws} \right)-I > C\\
1 &\text{otherwise}
\end{array} \right.
\end{equation}
where $I$ is the input OCT image, and $ls\left( {I,ws} \right)$ denotes $I$ convolved with a suitable smoothing operator, i.e. a mean or median filter. $ws$ is the window size of the filter and $C$ is a user-defined threshold value. In this paper, we use the mean filter with the window size $ws=100$ and set $C=0.01$. The enhanced image can then be obtained by multiplying the original image $I$ with $p$. The first two images in Figure~\ref{fig:preSegImg} illustrate that the contrast of the IS-OS border has been enhanced and that the most obvious intensity variation now takes place around the IS-OS boundary. The IS-OS boundary is then detected on the dark-to-bright gradient image. Consequently, the delineated line is guaranteed to pass through the IS-OS in both cases, as shown in the last two images in Figure~\ref{fig:preSegImg}.
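The thresholding step (\ref{eq:threshold}) can be sketched with a mean filter as the local smoothing operator $ls$. This is an illustrative sketch (window size and threshold here are arbitrary); SciPy's \texttt{uniform\_filter} is assumed to stand in for the mean filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_isos(I, ws=100, C=0.01):
    """Local adaptive thresholding followed by masking.

    Pixels darker than their local mean by more than C are suppressed
    (p = 0), which strengthens the contrast of the bright IS-OS band.
    """
    local_mean = uniform_filter(I.astype(float), size=ws)   # ls(I, ws)
    p = np.where(local_mean - I > C, 0.0, 1.0)
    return I * p
```

Bright pixels are always kept (their local mean cannot exceed them by $C$), while dark pixels adjacent to a bright band, whose local mean is pulled up by that band, are zeroed.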
\begin{figure}[h!]
\centering
{\includegraphics[height=0.16\textwidth]{Fig/PreSeg1}}
{\includegraphics[height=0.16\textwidth]{Fig/PreSeg2}}
{\includegraphics[height=0.16\textwidth]{Fig/PreSeg3}}
{\includegraphics[height=0.16\textwidth]{Fig/PreSeg4}}
\vspace{-5pt}
\caption{Detecting the IS-OS boundaries in normal and pathological subjects after image enhancement via the local adaptive thresholding method (\ref{eq:threshold}).}
\label{fig:preSegImg}
\end{figure}
\subsubsection{\textbf{Detection of the RPE-CH, OS-RPE and ONL-IS boundaries}}
Once the IS-OS ($B_7$) is segmented, it can be used as a reference to limit the search region for segmenting the RPE-CH ($B_9$), OS-RPE ($B_8$) and ONL-IS ($B_6$) boundaries. The RPE-CH and OS-RPE lie below the IS-OS and are delineated as follows: the RPE-CH is extracted by applying the geodesic distance algorithm with the bright-to-dark gradient weights (\ref{eq:weights}) computed from the pixels below the detected IS-OS (i.e. the bright-to-dark gradient weights are set to zero above the IS-OS); the OS-RPE is then delineated on the bright-to-dark gradient map in the region between the detected IS-OS and RPE-CH (i.e. the bright-to-dark gradient weights are set to zero outside the region between the IS-OS and RPE-CH). The dark-to-bright ONL-IS lies above the IS-OS. Its search region is constructed between the IS-OS boundary and a parallel line 15 pixels above it, and the dark-to-bright gradient weights outside this region are set to zero. Hence, the only boundary in the search region of the dark-to-bright gradient image is the ONL-IS, which can then be extracted.
\subsubsection{\textbf{Detection of the ILM and INL-OPL boundaries}}
Both the ILM ($B_1$) and INL-OPL ($B_4$) lie at the border of a darker layer above a bright layer. The intensity variation around the ILM boundary is much more prominent, so the ILM is segmented first. The detected ONL-IS ($B_6$) edge is taken as a reference and the dark-to-bright gradient weights below the ONL-IS are set to zero; the ILM can then be obtained via the proposed method. The INL-OPL is subsequently detected on the dark-to-bright gradient map by simply limiting the search region to between the ILM and ONL-IS (i.e. the dark-to-bright gradient values are set to zero outside the region between the ILM and ONL-IS).
\subsubsection{\textbf{Detection of the OPL-ONL, IPL-INL and RNFL$_o$ boundaries}}
The OPL-ONL ($B_5$), IPL-INL ($B_3$) and RNFL$_o$ ($B_2$) each separate a bright layer above a darker layer and can thus be detected on the bright-to-dark gradient map defined in (\ref{eq:weights}). The segmented INL-OPL ($B_4$) and ONL-IS ($B_6$) are taken as two reference boundaries, and the OPL-ONL edge is found by limiting the search region to between the INL-OPL and ONL-IS. The search region for the IPL-INL is then constructed between the INL-OPL boundary and a parallel line 20 pixels above it; the IPL-INL is located on a bright-to-dark gradient map set to zero outside this search region. Finally, the RNFL$_o$ ($B_2$) is found in the search region between the two reference boundaries IPL-INL and ILM ($B_1$). However, because the IPL-INL and ILM boundaries are very close to each other in the central region of the fovea, the search region for the RNFL$_o$ is sometimes missing around the fovea, leading to segmentation errors of the RNFL$_o$, as shown in Figure~\ref{fig:SegAllLayers} (a). These errors can nevertheless be avoided by simply removing the spurious points detected on the RNFL$_o$ in the region above the ILM, as shown in Figure~\ref{fig:SegAllLayers} (b). The proposed methods for segmenting the nine retinal layer boundaries are summarised in the flow chart in Figure~\ref{fig:flowChart}.
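The search-region restriction used throughout this section amounts to zeroing the gradient weights outside a band bounded by two reference boundaries. A hypothetical helper (not the authors' code) makes the operation concrete:

```python
import numpy as np

def restrict_between(W, upper, lower):
    """Zero the weights outside the band between two detected boundaries.

    `upper` and `lower` give, per column, the row index of the reference
    boundary above and below the target layer (upper[j] < lower[j]).
    """
    Wm = np.zeros_like(W)
    rows = np.arange(W.shape[0])[:, None]
    mask = (rows > np.asarray(upper)[None, :]) & \
           (rows < np.asarray(lower)[None, :])
    Wm[mask] = W[mask]
    return Wm
```

The single-sided variants (e.g. "set to zero above the IS-OS") follow by replacing one bound with the image border.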
\begin{figure}[h!]
\centering
{\includegraphics[width=1\textwidth]{Fig/finSeg3}}\\
\vspace{-5pt}
\caption{The segmentation results of the nine retinal layer boundaries on both normal and dry-AMD pathological B-scans, as shown in (a) and (c). The detection of the RNFL$_o$ boundary in (a) shows errors due to the absence of a search region for this boundary. (b) shows that these errors have been corrected. }
\label{fig:SegAllLayers}
\end{figure}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.8\textwidth, height=0.7\textwidth]{Fig/flowChart4}}\\
\vspace{-5pt}
\caption{The overview of the proposed framework for dynamically delineating nine retinal layer boundaries defined in Figure~\ref{fig:OCTBoundary} and Table~\ref{tb:OCTBoundary}. Section \ref{Detect9border} describes this flow chart in detail.}
\label{fig:flowChart}
\end{figure}
\section{\textbf{Experiment Setup}}
To evaluate the performance of the proposed GDM qualitatively and quantitatively, numerical experiments are conducted to compare it with the state-of-the-art approaches reviewed in Section \ref{reviewedMethod} on both healthy and pathological OCT retinal images. As the GDM is able to segment both 2D and 3D OCT images, we perform numerical experiments on both B-scan and volumetric OCT data. An anisotropic total variation method \cite{goldstein2009split} is used to reduce noise prior to determining the layer boundaries/surfaces for all segmentation methods. In the following, we introduce the detailed procedure of OCT data acquisition, the evaluation metrics used to quantify the segmentation results, the final numerical results, and the computational complexity of the different methods.
\subsection{Clinical Data}
30 Spectralis SDOCT (ENVISU C class 2300, Bioptigen, axial resolution = 3.3$\mu m$, scan depth = 3.4mm, 32,000 A-scans per second) B-scans from 15 healthy adults (mean age = 39.8 years, SD = 8.6 years; 7 male, 8 female) were used for this research. All data were collected after informed consent was obtained; the study adhered to the tenets of the Declaration of Helsinki and Ethics Committee approval was granted.
\textbf{2D B-scan data}: The normal in vivo B-scan OCT data were imaged from the left and right eyes of 15 healthy adults using a spectral domain OCT device with a chin rest to stabilise the head. The B-scan located at the foveal centre was identified from the lowest point in the foveal pit where the cone outer segments were elongated (indicating cone specialisation). To reduce the speckle noise and enhance the image contrast, every B-scan was the average of aligned images scanned at the same position. In addition to the 30 OCT images from the healthy subjects, another 20 B-scans from subjects with pathologies were also used to compare the proposed GDM with other approaches in pathological cases. These B-scans are from an eye with dry age-related macular degeneration (dry-AMD) and are available from the website of Dufour's software package\footnote{{http://pascaldufour.net/Research/software$\_$data.html}}. The accuracy of the segmentation results obtained by the three automated 2D methods (i.e. the PDS, Chiu's method and the GDM) over these healthy and pathological B-scans is evaluated against ground truth datasets, which were manually delineated with extreme care by one observer.
\textbf{3D Volume data}: 10 Spectralis SD-OCT (Heidelberg Engineering GmbH, Heidelberg, Germany) volume data sets from 10 healthy adult subjects are used in this study. Each volume contains 10 B-scans, and the OCT A-scans outside the 6mm $\times$ 6mm (lateral $\times$ azimuth) area centred at the fovea were cropped to remove low signal regions. All volumetric data can be downloaded from \cite{tian2015real}, which also contains the results of \textbf{OCTRIMA3D} and the manual labellings from two graders. In this study we choose the manual labelling of grader 1 as the 3D ground truth.
\subsection{Evaluation Metrics}
Performance metrics are defined to demonstrate the effectiveness of the proposed method and to compare it with the existing methods. Three commonly used measures of success for OCT boundary detection are the signed error (SE), absolute error (AE) and Hausdorff distance (HD). Among them, the SE indicates the bias and variability of the detection results. The AE is the absolute difference between the automatic detection results and the ground truth, while the HD measures the distance between the farthest point of one set and the nearest point of the other, and vice versa. Specifically, these metrics are defined as
\[\begin{array}{c}
{\rm{SE}}\left( {{B_i},{{\tilde B}_i}} \right) = \frac{1}{n}\sum\limits_{j = 1}^n {\left( {{B_{ij}} - {{\tilde B}_{ij}}} \right)} \\
{\rm{AE}}\left( {{B_i},{{\tilde B}_i}} \right) = \frac{1}{n}\sum\limits_{j = 1}^n {\left( {\left| {{B_{ij}} - {{\tilde B}_{ij}}} \right|} \right)} \\
{\rm{HD}}\left( {{B_i},{{\tilde B}_i}} \right) = \max \left( {\mathop {\max }\limits_{x \in {B_i}} \left\{ {\mathop {\min }\limits_{y \in {{\tilde B}_i}} \left\| {x - y} \right\|} \right\},\mathop {\max }\limits_{x \in {{\tilde B}_i}} \left\{ {\mathop {\min }\limits_{y \in {B_i}} \left\| {x - y} \right\|} \right\}} \right)
\end{array}\]
where $B_i$ and ${\tilde B}_i$ are respectively the detected boundaries and the ground truth boundaries (i.e. manual labellings), and $n$ is the number of pixels/voxels that fall on the retinal boundary/surface. Statistically, when the SE value is close to zero, the difference between $B_i$ and ${\tilde B}_i$ is small and the detection result is less biased. The AE and HD (which theoretically range from 0 to $\infty$) signify the difference between two boundaries: 0 indicates that both retinal structures share exactly the same boundary, and larger AE and HD values mean larger distances between the measured boundaries. We also monitor the overall SE (OSE), AE (OAE) and HD (OHD) during all the experiments. They are defined as
\begin{align*}
\rm{OSE} &= \frac{1}{s}\sum\limits_{i = 1}^s {{\rm{SE}}\left( {{B_i},{{\tilde B}_i}} \right)} \\
\rm{OAE} &= \frac{1}{s}\sum\limits_{i = 1}^s {{\rm{AE}}\left( {{B_i},{{\tilde B}_i}} \right)} \\
\rm{OHD} &= \frac{1}{s}\sum\limits_{i = 1}^s {{\rm{HD}}\left( {{B_i},{{\tilde B}_i}} \right)}
\end{align*}
where $s$ is the total number of retinal boundaries a method can delineate.
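The three metrics follow directly from their definitions. An illustrative NumPy sketch (not the evaluation code used in the paper), with boundaries given as per-column row positions for SE/AE and as point sets for the HD:

```python
import numpy as np

def signed_error(b, g):
    """SE: mean signed difference between detected and ground-truth rows."""
    return float(np.mean(b - g))

def absolute_error(b, g):
    """AE: mean absolute difference between detected and ground-truth rows."""
    return float(np.mean(np.abs(b - g)))

def hausdorff(B, G):
    """HD: symmetric Hausdorff distance between two (n x d) point sets."""
    # Pairwise Euclidean distances via broadcasting.
    d = np.linalg.norm(B[:, None, :] - G[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Averaging each metric over the $s$ boundaries a method delineates then gives the overall quantities OSE, OAE and OHD.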
\subsection{Parameter Selection}
There are five parameters in the PDS model: three smoothness parameters $\alpha$, $\beta$, $\varphi$ and two time step sizes $\gamma_C$ and $\gamma_b$ used within the gradient descent equations to minimise the functional (\ref{eq:PDS}) with respect to $C$ and $b$. In this study we use $\alpha=10$, $\beta=0$, $\varphi=700$, $\gamma_C=10$ and $\gamma_b \ge 2$, as suggested in \cite{rossant2015parallel}. In addition, as the PDS is a nonconvex model whose segmentation results depend on initialisation, we initialise the parallel curves very close to the true retinal boundaries for a fair comparison with the other methods. A maximum of 500 iterations is used to ensure convergence of the PDS model. The graph theoretic based methods, i.e. Chiu's method, OCTRIMA3D and Dufour's method, require no parameter input. Finally, our GDM has two built-in parameters: $\lambda$ in (\ref{eq:weights}) and $\tau$ in (\ref{eq:GradientDescentFlow}). We set $\lambda=10$ and $\tau=0.8$ to detect the retinal layers in the OCT images.
\subsection{Numerical Results}
\label{NumericalRes}
We first visually compare the segmentation results of the proposed GDM, the PDS (\ref{eq:PDS}) and Chiu's graph search method on both healthy and pathological B-scans, shown in Figure~\ref{fig:SegComparison} (a)-(d). The PDS results in (e)-(h) contain errors on some of the detected boundaries. For instance, $B_1$ and $B_2$ cannot converge to the true retinal boundaries around the central fovea region, as shown in (f) and (h). This is because the PDS is a classical snake-driven model, which has difficulty handling boundary concavities. Moreover, because $B_7$ has a much stronger image gradient than $B_6$ and $B_8$, some parts of these two boundaries are mistakenly attracted to $B_7$. As Chiu's graph search method only considers intensity changes in the purely vertical direction (\ref{eq:chiu}), it also fails to segment the fovea region layers with strong curvature, as shown in (i)-(l). Moreover, the algorithm cannot handle the irregular bumps caused by pathologies very well, as can be observed from the bottom $B_9$ line delineated in (k) and (l). In general, Chiu's method works well when the retinal structures are flat or smooth, without large changes in boundary location. The results of the proposed GDM, shown in (m)-(p), are better than those of the PDS and Chiu's methods when compared with the ground truth in the last row. As analysed in Section \ref{GDM}, the gradient weights defined in (\ref{eq:weights}) account for both vertical and horizontal variations, making the method suitable for both flat and non-flat retinal structures. Hence, the GDM is a better clinical tool for detecting retinal boundaries in both normal and pathological subjects.
\begin{figure}[h!]
\centering
{\includegraphics[height=0.9\textwidth]{Fig/Exp1}}
\vspace{-5pt}
\caption{Comparison of different segmentation methods on healthy and pathological 2D OCT B-scans. 1st row: healthy (i.e. first two) and pathological (i.e. last two) B-scans; 2nd row: results by the PDS model (\ref{eq:PDS}); 3rd row: results by Chiu's method; 4th row: results by the proposed GDM; 5th row: ground truth.}
\label{fig:SegComparison}
\end{figure}
\clearpage
The accuracy of the segmentation results by different methods against ground truth over 30 healthy and 20 pathological B-scans is indicated in Table~\ref{tb:healthyResults} and Table~\ref{tb:pathologicalResults}, respectively. In order to make the comparison clearer, we plot the data in the two tables in Figure~\ref{fig:plotHealthyResults} and Figure~\ref{fig:plotPathologicalResults}, respectively.
In Table~\ref{tb:healthyResults} and Figure~\ref{fig:plotHealthyResults}, the SE shows that the PDS leads to a very large segmentation bias, with the largest error being 6.01$\mu m$, whilst the bias of the GDM is less than 1.22$\mu m$ for all the retinal layer boundaries. Moreover, the mean SE plot of the GDM is close to zero, which means the GDM is less biased than the other two methods. Large errors of the PDS normally occur at $B_1$, $B_2$, $B_6$ and $B_8$, which is consistent with visual inspection of the healthy scans in Figure~\ref{fig:SegComparison}. Furthermore, the mean AE quantities and plots show that the GDM performs the best for all the boundaries. Particularly at $B_1$ and $B_2$, where the curved fovea region is located, the HD values of the GDM (3.702$\pm$1.62$\mu m$, 7.340$\pm$2.16$\mu m$) are significantly lower than those of the PDS (36.56$\pm$15.9$\mu m$, 29.00$\pm$11.6$\mu m$) and Chiu's method (22.12$\pm$9.23$\mu m$, 21.25$\pm$5.98$\mu m$). However, the accuracy of the different methods is comparable at flat or smooth retinal boundaries such as $B_4$, $B_7$ and $B_9$. Finally, as the manual segmentation traces the small bumps of the true boundaries whereas the PDS results are very smooth, the overall accuracy of the PDS is the lowest among all the compared approaches.
\newcolumntype{g}{>{\columncolor[RGB]{195, 195, 195}}c}
\newcolumntype{k}{>{\columncolor{Gray}}c}
\begin{table*}[htbp]
\caption{Mean and standard deviation of SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) calculated using the results of different methods (the PDS, Chiu's method and GDM) and the ground truth manual segmentation, over 30 healthy OCT B-scans.}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lgkcgkcgkc}
\toprule
&\multicolumn{3}{c}{SE ($\mu m$)} &\multicolumn{3}{c}{AE ($\mu m$)} &\multicolumn{3}{c}{HD ($\mu m$)}\\
Boundary &PDS &Chiu et al. &GDM
&PDS &Chiu et al. &GDM
&PDS &Chiu et al. &GDM \\
\midrule
ILM ($B_1$) &-3.92$\pm$1.90 &-1.22$\pm$0.68 &0.273$\pm$0.33
&4.615$\pm$2.03 &2.605$\pm$1.12 &0.924$\pm$0.26
&36.56$\pm$15.9 &22.12$\pm$9.23 &3.702$\pm$1.62 \\
RNFL$_o$ ($B_2$) &-2.57$\pm$1.38 &-1.67$\pm$1.34 &-0.53$\pm$0.37
&3.864$\pm$1.49 &2.676$\pm$0.82 &1.262$\pm$0.34
&29.00$\pm$11.6 &21.25$\pm$5.98 &7.340$\pm$2.16 \\
IPL-INL ($B_3$) &-0.55$\pm$0.83 &-1.04$\pm$1.21 &-0.38$\pm$0.61
&1.876$\pm$0.60 &2.020$\pm$0.79 &1.314$\pm$0.32
&8.619$\pm$3.77 &10.53$\pm$5.25 &7.258$\pm$1.92 \\
INL-OPL ($B_4$) &0.012$\pm$0.58 &-0.90$\pm$0.61 &-0.71$\pm$0.71
&1.708$\pm$0.39 &1.699$\pm$0.40 &1.807$\pm$0.51
&6.772$\pm$2.53 &7.036$\pm$2.84 &7.505$\pm$2.96 \\
OPL-ONL ($B_5$) &-0.23$\pm$1.29 &-1.51$\pm$1.30 &-1.12$\pm$1.17
&2.127$\pm$1.00 &2.133$\pm$1.05 &1.949$\pm$0.94
&10.22$\pm$3.70 &9.044$\pm$3.48 &7.463$\pm$3.24 \\
ONL-IS ($B_6$) &6.010$\pm$0.83 &--- &-0.73$\pm$0.49
&6.055$\pm$0.86 &--- &1.376$\pm$0.36
&9.969$\pm$1.58 &--- &4.630$\pm$1.05 \\
IS-OS ($B_7$) &-0.09$\pm$0.61 &0.194$\pm$0.49 &0.291$\pm$0.63
&0.823$\pm$0.29 &0.720$\pm$0.25 &0.771$\pm$0.36
&3.676$\pm$1.63 &3.240$\pm$1.60 &2.611$\pm$0.74 \\
OS-RPE ($B_8$) &5.202$\pm$2.25 &--- &-0.78$\pm$0.47
&5.570$\pm$1.76 &--- &1.125$\pm$0.36
&8.913$\pm$2.28 &--- &3.601$\pm$0.96 \\
RPE-CH ($B_9$) &-0.31$\pm$0.79 &-0.84$\pm$0.58 &-0.74$\pm$0.69
&1.291$\pm$0.25 &1.228$\pm$0.47 &1.213$\pm$0.45
&4.237$\pm$1.47 &4.027$\pm$1.31 &3.831$\pm$1.08
\\
Overall &0.394$\pm$0.39 &-1.00$\pm$0.54 &-0.49$\pm$0.23
&3.103$\pm$0.74 &1.869$\pm$0.59 &1.305$\pm$0.32
&13.11$\pm$4.25 &11.04$\pm$3.75 &5.327$\pm$1.11 \\
\bottomrule
\end{tabular}}
\label{tb:healthyResults}
\end{table*}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.32\textwidth]{Fig/meanHealthySE}}
{\includegraphics[width=0.32\textwidth]{Fig/meanHealthyAE}}
{\includegraphics[width=0.32\textwidth]{Fig/meanHealthyHD}}\\
Mean value\\
\vspace{5pt}
{\includegraphics[width=0.32\textwidth]{Fig/standardHealthySE}}
{\includegraphics[width=0.32\textwidth]{Fig/standardHealthyAE}}
{\includegraphics[width=0.32\textwidth]{Fig/standardHealthyHD}}\\
Standard deviation\\
\vspace{-5pt}
\caption{Plots of the mean and standard deviation obtained by the different methods in Table~\ref{tb:healthyResults} for healthy B-scans. The 1st and 2nd rows respectively show the mean and standard deviation of the SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) for segmentation of boundaries $B_1$ to $B_9$ using the PDS, Chiu's method and GDM. The overall value is the average result over all boundaries. }
\label{fig:plotHealthyResults}
\end{figure}
In Table~\ref{tb:pathologicalResults} and Figure~\ref{fig:plotPathologicalResults}, the mean and standard deviation plots show that the GDM is more accurate and robust than the other two methods on pathological data. However, larger errors are found at the last four boundaries $B_6$, $B_7$, $B_8$ and $B_9$ for all the segmentation methods. This is because the dry age-related macular degeneration has introduced irregularities into these retinal boundaries, making the methods less accurate and robust. The overall accuracy measured by the three quantities has decreased compared with the corresponding measurements listed in Table~\ref{tb:healthyResults}. Chiu's graph search method using Dijkstra's algorithm can be deemed a discrete approximation of the proposed GDM, which makes its final results comparable to those of the GDM at some flat retinal boundaries and better than those of the PDS. However, the fast sweeping algorithm used to solve the Eikonal equation guarantees local resolution for the geodesic distance, which significantly reduces the grid bias and achieves sub-pixel accuracy for the geodesic path of the GDM. In addition to the novel weight function proposed in (\ref{eq:weights}), the GDM also resolves the metrication problem caused by discrete graph methods and thus obtains more accurate results than Chiu's method for delineating cellular layers in both normal and pathological subjects.
\newcolumntype{g}{>{\columncolor[RGB]{195, 195, 195}}c}
\newcolumntype{k}{>{\columncolor{Gray}}c}
\begin{table*}[htbp]
\caption{Mean and standard deviation of SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) calculated using the results of different methods (the PDS, Chiu's method and GDM) and the ground truth manual segmentation, over 20 pathological OCT B-scans.}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lgkcgkcgkc}
\toprule
&\multicolumn{3}{c}{SE ($\mu m$)} &\multicolumn{3}{c}{AE ($\mu m$)} &\multicolumn{3}{c}{HD ($\mu m$)}\\
Boundary &PDS &Chiu et al. &GDM
&PDS &Chiu et al. &GDM
&PDS &Chiu et al. &GDM \\
\midrule
ILM ($B_1$) &-0.41$\pm$0.59 &-0.34$\pm$0.25 &-0.36$\pm$0.29
&0.932$\pm$0.44 &0.796$\pm$0.17 &0.683$\pm$0.09
&6.461$\pm$4.86 &4.087$\pm$1.01 &3.337$\pm$1.10 \\
RNFL$_o$ ($B_2$) &-0.93$\pm$0.93 &-0.38$\pm$0.33 &-0.49$\pm$0.50
&1.792$\pm$0.63 &1.717$\pm$0.53 &1.257$\pm$0.32
&6.145$\pm$1.84 &8.464$\pm$4.55 &6.109$\pm$2.49 \\
IPL-INL ($B_3$) &-0.23$\pm$0.62 &-0.22$\pm$0.27 &-0.32$\pm$0.32
&1.228$\pm$0.21 &1.149$\pm$0.20 &0.926$\pm$0.16
&7.640$\pm$1.31 &5.857$\pm$0.98 &5.151$\pm$1.82 \\
INL-OPL ($B_4$) &0.578$\pm$0.64 &0.555$\pm$0.39 &0.392$\pm$0.26
&1.546$\pm$0.28 &1.563$\pm$0.30 &1.419$\pm$0.16
&7.165$\pm$1.07 &8.194$\pm$1.36 &5.942$\pm$1.32 \\
OPL-ONL ($B_5$) &-0.04$\pm$1.08 &0.286$\pm$0.55 &-0.07$\pm$0.64
&2.371$\pm$0.76 &2.255$\pm$0.60 &2.019$\pm$0.65
&11.28$\pm$1.95 &9.858$\pm$2.76 &9.281$\pm$2.25 \\
ONL-IS ($B_6$) &3.339$\pm$1.22 &--- &-0.57$\pm$0.72
&4.484$\pm$0.50 &--- &1.442$\pm$0.34
&15.23$\pm$4.03 &--- &6.205$\pm$1.01 \\
IS-OS ($B_7$) &-0.23$\pm$0.86 &1.030$\pm$1.06 &0.350$\pm$0.50
&2.415$\pm$1.25 &2.399$\pm$1.05 &1.055$\pm$0.22
&15.95$\pm$10.2 &17.66$\pm$11.3 &6.795$\pm$4.65 \\
OS-RPE ($B_8$) &2.371$\pm$4.17 &--- &0.028$\pm$0.41
&5.927$\pm$2.34 &--- &1.821$\pm$0.47
&22.63$\pm$12.9 &--- &9.673$\pm$1.30 \\
RPE-CH ($B_9$) &3.315$\pm$2.59 &3.011$\pm$2.98 &0.027$\pm$0.35
&4.797$\pm$2.59 &5.146$\pm$2.70 &2.252$\pm$0.46
&31.23$\pm$12.9 &32.63$\pm$13.2 &13.19$\pm$3.50
\\
Overall &0.863$\pm$0.59 &0.563$\pm$0.44 &-0.11$\pm$0.22
&2.832$\pm$0.83 &2.146$\pm$0.70 &1.430$\pm$0.20
&13.75$\pm$4.72 &12.39$\pm$4.06 &7.300$\pm$0.67 \\
\bottomrule
\end{tabular}
}
\label{tb:pathologicalResults}
\end{table*}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.32\textwidth]{Fig/meanPathologicalSE}}
{\includegraphics[width=0.32\textwidth]{Fig/meanPathologicalAE}}
{\includegraphics[width=0.32\textwidth]{Fig/meanPathologicalHD}}\\
Mean value\\
\vspace{5pt}
{\includegraphics[width=0.32\textwidth]{Fig/standardPathologicalSE}}
{\includegraphics[width=0.32\textwidth]{Fig/standardPathologicalAE}}
{\includegraphics[width=0.32\textwidth]{Fig/standardPathologicalHD}}\\
Standard deviation\\
\vspace{-5pt}
\caption{Plots of mean and standard deviation obtained by the different methods in Table~\ref{tb:pathologicalResults} for pathological B-scans. The 1st and 2nd rows respectively show the mean and standard deviation of the SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) for the segmentation of boundaries $B_1$ to $B_9$ using the PDS, Chiu's method and GDM. The overall value is the average result over all boundaries.}
\label{fig:plotPathologicalResults}
\end{figure}
In the next section, the proposed GDM is used to segment the OCT volume dataset, which includes samples from ten healthy adult subjects, named Volume 1 to Volume 10. Dufour's method and OCTRIMA3D are also used to segment the same dataset for comparison purposes. In Figure~\ref{fig:3D4Smaple}, we show four representative segmentation results of the GDM on Volumes 1, 2, 7 and 9.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.72\textwidth]{Fig/3D}}
\vspace{-5pt}
\caption{3D rendered images of in vivo human intra-retinal layer surfaces obtained by segmenting Spectralis SD-OCT volumes with the proposed GDM method. The samples shown are Volumes 1, 2, 7 and 9. The color used for each individual retinal surface is the same as in Figure~\ref{fig:OCTBoundary}.}
\label{fig:3D4Smaple}
\end{figure}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.65\textwidth]{Fig/twoB}}\\
\vspace{-5pt}
\caption{Two B-scans extracted from the Volume 4 sample. The left shows the en-face representation of the OCT scan, with two overlaid lines (green and red) marking the locations of the two B-scans within the volume, shown on the right.}
\label{fig:enfaceBscans}
\end{figure}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/DRes1}}
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/OResult1}}
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/OurRes1}}\\
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/DRes2}}
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/OResult2}}
{\includegraphics[width=0.32\textwidth, height=0.15\textwidth]{Fig/OurRes2}}\\
\vspace{-5pt}
\caption{The comparison between Dufour's method (left), OCTRIMA3D (middle) and GDM (right) on the two B-scans in Figure~\ref{fig:enfaceBscans}. The segmentation results of these methods are marked with red lines, while the ground truth from manual labelling is marked with green lines.}
\label{fig:2SlicesSeg}
\end{figure}
The segmentation results of the three approaches on an exemplary sample (Volume 4) are shown for two distinctive B-scans in Figures~\ref{fig:enfaceBscans} and \ref{fig:2SlicesSeg}: in one B-scan the retinal structures are quite flat, while the other contains the nonflat fovea region. Dufour's method has lower accuracy than OCTRIMA3D and GDM in both cases. OCTRIMA3D extends Chiu's method to 3D space and improves it by reducing the curvature in the fovea region with an inter-frame flattening technique, so the method performs very well for both flat and nonflat retinal structures. However, there are still some obvious errors on the 5th boundary $B_5$: while OCTRIMA3D is able to flatten $B_1$, it also increases the curvature of adjacent boundaries such as $B_5$, which might be the reason for these errors. Compared with the other two methods, the GDM results leave fewer green (ground truth) lines visible, verifying that they are closest to the ground truth; the GDM is thus the most accurate of the three. In addition to the 2D visualisation, a 3D rendering of the results segmented by the three approaches is given in Figure~\ref{fig:compare3Dresults}. This experiment again shows that Dufour's results deviate considerably from the ground truth, while OCTRIMA3D is better than Dufour's method and comparable to the GDM. The GDM results leave the least grey ground truth visible and are thereby the best.
\begin{figure}[h!]
\centering
{\includegraphics[width=0.135\textwidth, height=0.085\textwidth]{Fig/Dufour1}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM1}}
{\includegraphics[width=0.135\textwidth]{Fig/ORG1}}
{\includegraphics[width=0.135\textwidth]{Fig/Dufour1_1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM1_2}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO1_1}}
\\
{\includegraphics[width=0.135\textwidth, height=0.07\textwidth]{Fig/Dufour2}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO2}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM2}}
{\includegraphics[width=0.135\textwidth]{Fig/ORG2}}
{\includegraphics[width=0.135\textwidth]{Fig/Dufour2_1}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO2_1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM2_2}}
\\
{\includegraphics[width=0.135\textwidth]{Fig/Dufour3}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO3}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM3}}
{\includegraphics[width=0.135\textwidth]{Fig/ORG3}}
{\includegraphics[width=0.135\textwidth]{Fig/Dufour3_1}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO3_1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM3_2}}
\\
{\includegraphics[width=0.135\textwidth]{Fig/Dufour4}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO5}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM5}}
{\includegraphics[width=0.135\textwidth]{Fig/ORG5}}
{\includegraphics[width=0.135\textwidth]{Fig/Dufour4_1}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO5_1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM5_2}}
\\
{\includegraphics[width=0.135\textwidth]{Fig/Dufour5}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO6}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM7}}
{\includegraphics[width=0.135\textwidth]{Fig/ORG6}}
{\includegraphics[width=0.135\textwidth]{Fig/Dufour5_1}}
{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DO6_1}}
{\includegraphics[width=0.135\textwidth]{Fig/GDM7_2}}
\\
\vspace{10pt}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/Dufourall}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DOAll}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/GDMall}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/ORGAll}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/Dufourall_1}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/OCTIMA3DOAll_1}}
\subfigure[]{\includegraphics[width=0.135\textwidth]{Fig/GDMall_2}}
\\
\vspace{-5pt}
\caption{The 3D comparison between Dufour's method, OCTRIMA3D and GDM in segmenting the intra-retinal layer surfaces from the Volume 4 sample. Columns (a)-(d) are respectively Dufour's results, OCTRIMA3D results, GDM results and ground truth. Columns (e)-(g) are respectively the segmentation results of the three compared methods, overlaid with ground truth. Rows 1-6 are the results for the individual surfaces $B_1$, $B_2$, $B_3$, $B_5$, $B_7$ and the total retina surfaces, respectively.}
\label{fig:compare3Dresults}
\end{figure}
\newcolumntype{C}{>{\centering\arraybackslash}p{6.2em}}
\newcolumntype{g}{>{\columncolor[RGB]{195, 195, 195}}C}
\newcolumntype{k}{>{\columncolor{Gray}}C}
\begin{table*}[htbp]
\caption{The SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) calculated using the results of different methods (Dufour's method, OCTRIMA3D and GDM) and the ground truth manual segmentation, for the OPL-ONL ($B_5$) surface in each of the 10 OCT volumes.}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cgkCgkCgkC}
\toprule
&\multicolumn{3}{c}{SE ($\mu m$)} &\multicolumn{3}{c}{AE ($\mu m$)} &\multicolumn{3}{c}{HD ($\mu m$)}\\
Volume \# &Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM \\
\midrule
1 &-1.194 &0.4559 &0.3782
&2.3816 &1.3490 &1.0720
&25.688 &15.273 &10.449 \\
2 &-2.170 &-0.036 &-0.128
&4.5250 &0.9089 &0.7814
&56.667 &11.570 &7.0938 \\
3 &-2.576 &0.4182 &0.5983
&3.6129 &1.3237 &1.0989
&25.203 &16.719 &9.5326 \\
4 &-2.296 &1.0987 &0.6774
&3.8185 &1.5175 &1.0753
&51.522 &18.364 &9.6151 \\
5 &-1.680 &1.3288 &0.5909
&4.3327 &1.5012 &0.9005
&56.223 &11.889 &8.8419 \\
6 &-2.623 &1.0732 &0.2974
&4.0682 &1.4838 &0.9493
&43.070 &19.201 &9.5281 \\
7 &-2.326 &0.5294 &0.4529
&3.1506 &0.9378 &0.7433
&31.782 &8.6701 &6.4803 \\
8 &-0.636 &1.1355 &0.6833
&2.3955 &1.4455 &1.0069
&25.481 &17.930 &11.685 \\
9 &-4.206 &0.3077 &0.0859
&4.5813 &1.0780 &0.7678
&43.223 &8.9694 &5.7191 \\
10 &-2.648 &0.6701 &0.2606
&4.4903 &1.0627 &0.7877
&41.017 &11.666 &10.961 \\
\bottomrule
\end{tabular}}
\label{tb:volumeB5}
\end{table*}
\newcolumntype{C}{>{\centering\arraybackslash}p{6.2em}}
\newcolumntype{g}{>{\columncolor[RGB]{195, 195, 195}}C}
\newcolumntype{k}{>{\columncolor{Gray}}C}
\begin{table*}[htbp]
\caption{The SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) calculated using the results of different methods (Dufour's method, OCTRIMA3D and GDM) and the ground truth manual segmentation, for the IS-OS ($B_7$) surface in each of the 10 OCT volumes.}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cgkCgkCgkC}
\toprule
&\multicolumn{3}{c}{SE ($\mu m$)} &\multicolumn{3}{c}{AE ($\mu m$)} &\multicolumn{3}{c}{HD ($\mu m$)}\\
Volume \# &Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM \\
\midrule
1 &-0.432 &-0.148 &-0.019
&1.1013 &0.5391 &0.4437
&16.559 &4.7616 &4.5805 \\
2 &0.7476 &-0.276 &-0.079
&2.0329 &0.5539 &0.3971
&20.309 &5.2093 &3.7743 \\
3 &-0.311 &-0.291 &-0.106
&1.4347 &0.5406 &0.4629
&18.432 &2.9790 &4.0176 \\
4 &0.3652 &-0.116 &0.3363
&1.6954 &0.5271 &0.4601
&27.853 &5.3672 &2.7882 \\
5 &0.6057 &-0.098 &0.0994
&1.7567 &0.4756 &0.3500
&26.556 &3.7573 &3.4150 \\
6 &0.9825 &-0.592 &-0.139
&2.4970 &0.7247 &0.4066
&23.487 &5.9301 &3.9297 \\
7 &-1.247 &-0.536 &0.0237
&1.3895 &0.7501 &0.3716
&10.016 &3.1398 &3.6980 \\
8 &-0.311 &-0.069 &0.1740
&1.0438 &0.4053 &0.3466
&15.044 &4.2301 &4.3940 \\
9 &-0.755 &-0.111 &0.1407
&0.8068 &0.5422 &0.3939
&3.5210 &3.4263 &3.3868 \\
10 &-0.099 &-0.220 &0.1028
&1.2941 &0.5609 &0.4246
&13.313 &3.1210 &3.5361 \\
\bottomrule
\end{tabular}
}
\label{tb:volumeB7}
\end{table*}
\newcolumntype{C}{>{\centering\arraybackslash}p{6.2em}}
\newcolumntype{g}{>{\columncolor[RGB]{195, 195, 195}}C}
\newcolumntype{k}{>{\columncolor{Gray}}C}
\begin{table*}[htbp]
\caption{The OSE ($\mu m$), OAE ($\mu m$) and OHD ($\mu m$) calculated using the results of different methods (Dufour's method, OCTRIMA3D and GDM) and the ground truth manual segmentation, for the overall retina surfaces in each of the 10 OCT volumes.}
\vspace{-10pt}
\centering \setlength{\tabcolsep}{1pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cgkCgkCgkC}
\toprule
&\multicolumn{3}{c}{OSE ($\mu m$)} &\multicolumn{3}{c}{OAE ($\mu m$)} &\multicolumn{3}{c}{OHD ($\mu m$)}\\
Volume \# &Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM
&Dufour et al. &OCTRIMA3D &GDM \\
\midrule
1 &-1.271 &0.3607 &0.4338
&1.8358 &1.1204 &0.9538
&17.486 &9.3358 &7.9163 \\
2 &-1.161 &0.0246 &0.0640
&2.5380 &0.9652 &0.7238
&29.682 &7.7987 &6.1267 \\
3 &-1.513 &-0.052 &0.3456
&2.1470 &0.9343 &0.7838
&19.985 &8.3491 &6.9920 \\
4 &-1.431 &0.4272 &0.3560
&2.5278 &1.0374 &0.8667
&31.346 &9.4042 &7.3130 \\
5 &-1.020 &0.6369 &0.5021
&2.4119 &1.0794 &0.8289
&32.607 &8.6822 &7.1379 \\
6 &-1.434 &0.4216 &0.3969
&2.6754 &1.1371 &0.8606
&28.629 &9.5267 &7.2548 \\
7 &-2.010 &0.0059 &0.3283
&2.2458 &0.9682 &0.7407
&21.788 &7.0644 &6.8279 \\
8 &-1.031 &0.5815 &0.5785
&1.7462 &1.1063 &0.9067
&17.610 &10.100 &8.5112 \\
9 &-1.951 &0.0542 &0.2014
&2.1368 &0.8771 &0.6922
&21.344 &5.7482 &5.4794 \\
10 &-1.513 &0.1022 &0.2109
&2.3315 &0.8397 &0.6596
&24.841 &6.3250 &6.7132 \\
\bottomrule
\end{tabular}
}
\label{tb:volumeOveall}
\end{table*}
\begin{figure}[h!]
\centering
{\includegraphics[width=0.28\textwidth]{Fig/SEB1}}
{\includegraphics[width=0.28\textwidth]{Fig/AEB1}}
{\includegraphics[width=0.28\textwidth]{Fig/HDB1}}\\
{\includegraphics[width=0.28\textwidth]{Fig/SEB2}}
{\includegraphics[width=0.28\textwidth]{Fig/AEB2}}
{\includegraphics[width=0.28\textwidth]{Fig/HDB2}}\\
{\includegraphics[width=0.28\textwidth]{Fig/SEB3}}
{\includegraphics[width=0.28\textwidth]{Fig/AEB3}}
{\includegraphics[width=0.28\textwidth]{Fig/HDB3}}\\
\vspace{-5pt}
\caption{Boxplots for the SE ($\mu m$), AE ($\mu m$), HD ($\mu m$), OSE ($\mu m$), OAE ($\mu m$) and OHD ($\mu m$) obtained by the different methods in Tables~\ref{tb:volumeB5}--\ref{tb:volumeOveall} for the 10 OCT volumes. 1st row: boxplots of Table~\ref{tb:volumeB5}; 2nd row: boxplots of Table~\ref{tb:volumeB7}; 3rd row: boxplots of Table~\ref{tb:volumeOveall}.}
\label{fig:boxPlotsVolume}
\end{figure}
Tables~\ref{tb:volumeB5}--\ref{tb:volumeOveall} contain quantitative information for comparing the accuracy of the three methods on the 10 OCT volumes. Table~\ref{tb:volumeB5} lists the quantities for the surface $B_5$ around the fovea region, and Table~\ref{tb:volumeB7} presents the numerical results for the surface $B_7$, which is flatter and smoother. In Table~\ref{tb:volumeB5}, the SE quantity indicates that Dufour's method produces a larger segmentation bias than OCTRIMA3D and GDM. The SE values of the GDM lie in the range [-0.128$\mu m$, 0.6833$\mu m$], showing less variability than those of the other two methods. Moreover, the GDM yields the smallest AE and HD quantities in all 10 cases, indicating that it is the best of the compared methods. Compared with Table~\ref{tb:volumeB5}, the quantities in Table~\ref{tb:volumeB7} show a significant improvement for all the methods. For example, the range of the HD quantity for Dufour's method has dropped from [25.688$\mu m$, 56.667$\mu m$] to [3.521$\mu m$, 27.853$\mu m$]. In addition, the accuracy gap between OCTRIMA3D and GDM has been reduced: the HD values of Volumes 3, 7 and 10 by OCTRIMA3D have even become smaller than the corresponding values by the GDM. These improvements are due to the fact that the retinal surface $B_7$ is flat and the voxel values around it remain fairly constant. From the OAE and OHD in Table~\ref{tb:volumeOveall} we can observe that the accuracy of the GDM for the total retina surfaces is the highest among the compared approaches.
The corresponding boxplots of Tables~\ref{tb:volumeB5}--\ref{tb:volumeOveall} are shown in Figure~\ref{fig:boxPlotsVolume}. It is clear that the proposed GDM method performs consistently better, with higher accuracy and lower error rates for both flat and nonflat retina layers. The boxplots show that there is little variation in performance across the modelled structures, and that even in the worst case the proposed method yields a lower error rate than the average performance of the other methods. Furthermore, in Figure~\ref{fig:3Dplot} we present 3D plots of the SE, AE and HD quantities computed by the three methods on the 10 OCT volumes. The SE values of the GDM are closer to zero, and its AE and HD values remain smaller. The overall distribution of these discrete data points also indicates that the GDM results oscillate less. We can thus conclude from Figure~\ref{fig:3Dplot} that, in terms of both accuracy and robustness, the GDM is the best of the compared methods for extracting intra-retinal layer surfaces from 3D OCT volume data.
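The error measures reported in these tables can be stated concretely. The sketch below (in Python rather than the Matlab used for the experiments) computes a signed error (SE), absolute error (AE) and Hausdorff distance (HD) between a predicted boundary and the manual ground truth, both sampled as one depth value per A-scan column; the unit column spacing and the point-set formulation of HD are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def boundary_errors(pred, truth):
    """SE, AE and HD between two boundary curves.

    `pred` and `truth` are 1D arrays giving one boundary depth per
    A-scan column (column spacing assumed to be one pixel here).
    """
    diff = pred - truth
    se = diff.mean()              # signed error: mean bias (can cancel out)
    ae = np.abs(diff).mean()      # absolute error: mean unsigned deviation
    # Hausdorff distance between the two curves viewed as 2D point sets
    pts_p = np.column_stack([np.arange(pred.size), pred])
    pts_t = np.column_stack([np.arange(truth.size), truth])
    d = np.linalg.norm(pts_p[:, None, :] - pts_t[None, :, :], axis=2)
    hd = max(d.min(axis=1).max(), d.min(axis=0).max())
    return se, ae, hd
```

Converting to $\mu m$ then only requires scaling by the axial (and lateral) pixel pitch of the scanner.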
\begin{figure}[h!]
\centering
{\includegraphics[width=0.7\textwidth]{Fig/3DSE}}\\
\vspace{-2pt}
{\includegraphics[width=0.7\textwidth]{Fig/3DAE}}\\
\vspace{-2pt}
{\includegraphics[width=0.7\textwidth]{Fig/3DHD}}\\
\vspace{-5pt}
\caption{3D plots of the SE ($\mu m$), AE ($\mu m$) and HD ($\mu m$) obtained on the 10 OCT volumes using Dufour's method, OCTRIMA3D and GDM.}
\label{fig:3Dplot}
\end{figure}
\subsection{Computational Complexity Analysis}
The experimental results in Section \ref{NumericalRes} have shown that our algorithm is superior to the others in terms of accuracy. In this section the performance of the different approaches in terms of computational time is examined. We implemented the PDS, Chiu's method and GDM using Matlab 2014b on a Windows 7 platform with an Intel Xeon CPU E5-1620 at 3.70GHz and 32GB memory. For a $633 \times 496$ sized B-scan, with initialisation close to the true retinal boundaries, the PDS takes 3.625s (500 iterations) to delineate two parallel boundaries. Chiu's method needs 1.962s to detect one boundary, while the GDM only takes 0.415s. Note that the time complexity of Chiu's graph search method is $O(|E|\log|V|)$, where $|V|$ and $|E|$ are the numbers of nodes and edges. In the context of boundary detection, $|V|= MN$ and $|E|=8MN$, so the time complexity of the method is $O(MN\log(MN))$. In contrast, our GDM solved using fast sweeping has only linear complexity $O(MN)$, which is more efficient than Chiu's method. Instead of directly performing segmentation in 3D, OCTRIMA3D exploits the spatial dependency between adjacent B-scans and applies Chiu's method to each 2D frame independently, and is thus able to track retinal boundaries in a 3D volume efficiently. It has been reported in \cite{tian2015real} that the processing time of OCTRIMA3D for a whole OCT volume of $496\times644\times51$ voxels was 26.15s, which is faster than our GDM (40.25s to segment a $496\times633\times10$ sized volume). However, this procedure complicates the whole 3D segmentation process and might make the algorithm less general. Finally, Dufour's graph method needs 14.68s to detect six intra-retinal layer surfaces in a $496\times633\times10$ sized volume.
Dufour's method was implemented in a different programming language (C) and delineates a different number of retinal surfaces from the GDM, so a direct running-time comparison cannot be made between the two methods.
\section{Conclusion}
We have presented a new automated segmentation framework based on the geodesic distance for delineating retinal layer boundaries in 2D/3D OCT images. The framework integrates horizontal and vertical gradient information and can thus account for changes in both directions. Further, the exponential weight function employed within the framework enhances the foveal depression regions and highlights weak and low-contrast boundaries. As a result, the proposed method is able to segment complex retinal structures with large curvatures and other irregularities caused by pathologies. Extensive numerical results, validated against ground truth, demonstrate the effectiveness of the proposed framework for segmenting both normal and pathological OCT images. The proposed method has achieved higher segmentation accuracy than existing methods, such as the parametric active contour model and the graph theoretic based approaches. Ongoing research includes integrating the segmentation framework into a system for the detection and quantification of retinal fractures and other diseases of the retina.
\section{Appendix}
We present the 3D fast sweeping algorithm to solve the Eikonal equation (\ref{eq:eikonalEq}). Given a seed point $s_1$, its distance function $d(x)$ satisfies the following Eikonal equation
\begin{equation}
\left|\nabla d(x)\right|=f(x), x\notin s_1 \label{eq:Eiknoal}
\end{equation}
with $d(s_1)=0$ and $f(x)=W^{-1}(x)$, where $W$ is defined in (\ref{eq:weights}). Equation (\ref{eq:Eiknoal}) is a typical partial differential equation and can be solved efficiently using the fast sweeping algorithm proposed by Zhao \cite{zhao2005fast}. To this end, the Godunov upwind difference scheme is used to discretise (\ref{eq:Eiknoal}) as follows
\begin{equation}
\left[(d_{i,j,k}^n-d_{xmin}^n)^+\right]^2+\left[(d_{i,j,k}^n-d_{ymin}^n)^+\right]^2+\left[(d_{i,j,k}^n-d_{zmin}^n)^+\right]^2=f_{i,j,k}^2
\label{eq:dicreteEiknoal}
\end{equation}
In equation (\ref{eq:dicreteEiknoal}), $d_{xmin}^n=\min(d_{i,j+1,k}^n, d_{i,j-1,k}^n)$, $d_{ymin}^n=\min(d_{i+1,j,k}^n, d_{i-1,j,k}^n)$, $d_{zmin}^n=\min(d_{i,j,k+1}^n,d_{i,j,k-1}^n)$ and
$x^+=\left\{
\begin{array}{cc}
x & x>0 \\
0 & x\leq 0
\end{array}
\right.$.
Boundary conditions need to be handled in the computational grid space. One-sided upwind difference is used for each of the 6 boundary faces of the grid space. For example, at the left boundary face, a one-sided difference along the $x$ direction is computed as
\begin{equation} \nonumber
\left[(d_{i,1,k}^n-d_{i,2,k}^n)^+\right]^2+\left[(d_{i,1,k}^n-d_{ymin}^n)^+\right]^2+\left[(d_{i,1,k}^n-d_{zmin}^n)^+\right]^2=f_{i,1,k}^2
\end{equation}
$d_{xmin}^n$, $d_{ymin}^n$ and $d_{zmin}^n$ are then sorted in increasing order, and the sorted values are denoted $a_1$, $a_2$ and $a_3$. The unique solution to (\ref{eq:dicreteEiknoal}) is then given as follows:
\begin{equation}
d_{i,j,k}^{n+1}=min(d_{i,j,k}^n,\widetilde{d_{i,j,k}}) \label{eq:uniSolu}
\end{equation}
where $\widetilde{d_{i,j,k}}$ is a piecewise function containing three parts
\begin{equation} \nonumber
\widetilde{d_{i,j,k}} = \left\{ \begin{array}{l}
\frac{1}{3}\left( {{a_1} + {a_2} + {a_3} + \sqrt {3f_{i,j,k}^2 - {{({a_1} - {a_2})}^2} - {{({a_1} - {a_3})}^2} - {{({a_2} - {a_3})}^2}} } \right)\\
\frac{1}{2}\left( {{a_1} + {a_2} + \sqrt {2f_{i,j,k}^2 - {{({a_1} - {a_2})}^2}} } \right)\\
{a_1} + {f_{i,j,k}}
\end{array} \right.
\end{equation}
The three parts correspond to the following intervals, respectively
\begin{align*}
f_{i,j,k}^2 \ge ({a_1} &- {a_3})^2 + {({a_2} - {a_3})^2}\\
{({a_1} - {a_2})^2} \le f_{i,j,k}^2 &< {({a_1} - {a_3})^2} + {({a_2} - {a_3})^2}\\
f_{i,j,k}^2 &< {({a_1} - {a_2})^2}
\end{align*}
To solve (\ref{eq:uniSolu}), which does not admit a closed analytical solution over the whole grid, fast Gauss-Seidel iteration with alternating sweeping orderings is used. For initialisation, the value at the seed point $s_1$ is set to zero and kept fixed in later calculations. The remaining points are set to large values, which will be updated later. The whole 3D grid is traversed in the following orderings for the Gauss-Seidel iteration
\begin{align*}
(1)\; i=1:M, j=1:N, k=1:H;\; &(2)\; i=M:1, j=N:1, k=H:1\;\\
(3)\; i=M:1, j=1:N, k=1:H;\; &(4)\; i=1:M, j=N:1, k=H:1\;\\
(5)\; i=M:1, j=N:1, k=1:H;\; &(6)\; i=1:M, j=1:N, k=H:1\;\\
(7)\; i=1:M, j=N:1, k=1:H;\; &(8)\; i=M:1, j=1:N, k=H:1\;\\
\end{align*}
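For concreteness, a simplified 2D analogue of this sweeping scheme can be sketched as follows; the 3D version adds the third neighbour term (and hence the three-part piecewise solution above) and uses the eight orderings just listed instead of four. The grid size, seed location and unit right-hand side are illustrative assumptions, and Python stands in for the Matlab used in the experiments.

```python
import numpy as np

def fast_sweep_2d(f, seed, h=1.0, n_passes=4):
    """Solve |grad d| = f by Godunov upwind differencing plus
    Gauss-Seidel sweeps with alternating orderings (Zhao's scheme).

    f: 2D array of positive right-hand-side values; seed: (i, j) with d = 0.
    """
    M, N = f.shape
    d = np.full((M, N), 1e10)   # large initial values, lowered during sweeps
    d[seed] = 0.0               # the seed value stays fixed
    # the four alternating sweep orderings of the 2D grid
    orders = [(range(M), range(N)), (range(M - 1, -1, -1), range(N - 1, -1, -1)),
              (range(M - 1, -1, -1), range(N)), (range(M), range(N - 1, -1, -1))]
    for _ in range(n_passes):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == seed:
                        continue
                    # one-sided differences at the grid boundary faces
                    ax = min(d[i, j - 1] if j > 0 else 1e10,
                             d[i, j + 1] if j < N - 1 else 1e10)
                    ay = min(d[i - 1, j] if i > 0 else 1e10,
                             d[i + 1, j] if i < M - 1 else 1e10)
                    a1, a2 = sorted((ax, ay))
                    fh = f[i, j] * h
                    if a2 - a1 >= fh:      # update driven by one direction only
                        dnew = a1 + fh
                    else:                  # two-directional quadratic update
                        dnew = 0.5 * (a1 + a2 + np.sqrt(2 * fh**2 - (a1 - a2)**2))
                    d[i, j] = min(d[i, j], dnew)   # monotone decrease
    return d
```

With $f \equiv 1$ this computes an approximate distance map from the seed, exact along the grid axes and with the usual first-order overestimate along diagonals.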
\bibliographystyle{unsrt}
// Copyright 2014 Google Inc. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.devtools.build.lib.bazel.rules.genrule;
import com.google.common.collect.ImmutableMap;
import com.google.devtools.build.lib.actions.ActionExecutionContext;
import com.google.devtools.build.lib.actions.ActionOwner;
import com.google.devtools.build.lib.actions.Artifact;
import com.google.devtools.build.lib.actions.ExecException;
import com.google.devtools.build.lib.actions.ResourceSet;
import com.google.devtools.build.lib.analysis.actions.CommandLine;
import com.google.devtools.build.lib.analysis.actions.SpawnAction;
import com.google.devtools.build.lib.events.EventHandler;
import com.google.devtools.build.lib.vfs.PathFragment;
import java.util.List;
/**
* A spawn action for genrules. Genrules are handled specially in that inputs and outputs are
* checked for directories.
*/
public final class GenRuleAction extends SpawnAction {
private static final ResourceSet GENRULE_RESOURCES =
// Not chosen scientifically/carefully. 300MB memory, 100% CPU, 20% of total I/O.
ResourceSet.createWithRamCpuIo(300, 1.0, 0.0);
public GenRuleAction(ActionOwner owner,
Iterable<Artifact> inputs,
Iterable<Artifact> outputs,
List<String> argv,
ImmutableMap<String, String> environment,
ImmutableMap<String, String> executionInfo,
ImmutableMap<PathFragment, Artifact> runfilesManifests,
String progressMessage) {
super(owner, inputs, outputs, GENRULE_RESOURCES,
CommandLine.of(argv, false), environment, executionInfo, progressMessage,
runfilesManifests,
"Genrule", false, null);
}
@Override
protected void internalExecute(
ActionExecutionContext actionExecutionContext) throws ExecException, InterruptedException {
EventHandler reporter = actionExecutionContext.getExecutor().getEventHandler();
checkInputsForDirectories(reporter, actionExecutionContext.getMetadataHandler());
super.internalExecute(actionExecutionContext);
checkOutputsForDirectories(reporter);
}
}
package com.ibm.icu.text;
/**
* Thrown by ArabicShaping when there is a shaping error.
* @stable ICU 2.0
*/
public final class ArabicShapingException extends Exception {
// generated by serialver from JDK 1.4.1_01
static final long serialVersionUID = 5261531805497260490L;
/**
* Construct the exception with the given message
* @param message the error message for this exception
* @stable ICU 3.8
*/
public ArabicShapingException(String message) {
super(message);
}
}
{"url":"http:\/\/nbviewer.jupyter.org\/github\/jrjohansson\/scientific-python-lectures\/blob\/master\/Lecture-6B-HPC.ipynb","text":"Lecture 6B - Tools for high-performance computing applications\u00b6\n\nJ.R. Johansson (jrjohansson at gmail.com)\n\nThe latest version of this IPython notebook lecture is available at http:\/\/github.com\/jrjohansson\/scientific-python-lectures.\n\nThe other notebooks in this lecture series are indexed at http:\/\/jrjohansson.github.io.\n\nIn\u00a0[1]:\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n\nmultiprocessing\u00b6\n\nPython has a built-in process-based library for concurrent computing, called multiprocessing.\n\nIn\u00a0[2]:\nimport multiprocessing\nimport os\nimport time\nimport numpy\n\nIn\u00a0[3]:\ndef task(args):\nprint(\"PID =\", os.getpid(), \", args =\", args)\n\nreturn os.getpid(), args\n\nIn\u00a0[4]:\ntask(\"test\")\n\nPID = 28995 , args = test\n\nOut[4]:\n(28995, 'test')\nIn\u00a0[5]:\npool = multiprocessing.Pool(processes=4)\n\nIn\u00a0[6]:\nresult = pool.map(task, [1,2,3,4,5,6,7,8])\n\nPID = 29006 , args = 1\nPID = 29009 , args = 4\nPID = 29007 , args = 2\nPID = 29008 , args = 3\nPID = 29006 , args = 6\nPID = 29009 , args = 5\nPID = 29007 , args = 8\nPID = 29008 , args = 7\n\nIn\u00a0[7]:\nresult\n\nOut[7]:\n[(29006, 1),\n(29007, 2),\n(29008, 3),\n(29009, 4),\n(29009, 5),\n(29006, 6),\n(29008, 7),\n(29007, 8)]\n\nThe multiprocessing package is very useful for highly parallel tasks that do not need to communicate with each other, other than when sending the initial data to the pool of processes and when and collecting the results.\n\nIPython parallel\u00b6\n\nIPython includes a very interesting and versatile parallel computing environment, which is very easy to use. It builds on the concept of ipython engines and controllers, that one can connect to and submit tasks to. To get started using this framework for parallel computing, one first have to start up an IPython cluster of engines. 
The easiest way to do this is to use the ipcluster command,\n\n\\$ ipcluster start -n 4\n\n\n\nOr, alternatively, from the \"Clusters\" tab on the IPython notebook dashboard page. This will start 4 IPython engines on the current host, which is useful for multicore systems. It is also possible to setup IPython clusters that spans over many nodes in a computing cluster. For more information about possible use cases, see the official documentation Using IPython for parallel computing.\n\nTo use the IPython cluster in our Python programs or notebooks, we start by creating an instance of IPython.parallel.Client:\n\nIn\u00a0[8]:\nfrom IPython.parallel import Client\n\nIn\u00a0[9]:\ncli = Client()\n\n\nUsing the 'ids' attribute we can retreive a list of ids for the IPython engines in the cluster:\n\nIn\u00a0[10]:\ncli.ids\n\nOut[10]:\n[0, 1, 2, 3]\n\nEach of these engines are ready to execute tasks. We can selectively run code on individual engines:\n\nIn\u00a0[11]:\ndef getpid():\n\"\"\" return the unique ID of the current process \"\"\"\nimport os\nreturn os.getpid()\n\nIn\u00a0[12]:\n# first try it on the notebook process\ngetpid()\n\nOut[12]:\n28995\nIn\u00a0[13]:\n# run it on one of the engines\ncli[0].apply_sync(getpid)\n\nOut[13]:\n30181\nIn\u00a0[14]:\n# run it on ALL of the engines at the same time\ncli[:].apply_sync(getpid)\n\nOut[14]:\n[30181, 30182, 30183, 30185]\n\nWe can use this cluster of IPython engines to execute tasks in parallel. The easiest way to dispatch a function to different engines is to define the function with the decorator:\n\n@view.parallel(block=True)\n\n\n\nHere, view is supposed to be the engine pool which we want to dispatch the function (task). 
Once our function is defined this way we can dispatch it to the engine using the map method in the resulting class (in Python, a decorator is a language construct which automatically wraps the function into another function or a class).\n\nTo see how all this works, lets look at an example:\n\nIn\u00a0[15]:\ndview = cli[:]\n\nIn\u00a0[16]:\n@dview.parallel(block=True)\n\"\"\" a dummy task that takes 'delay' seconds to finish \"\"\"\nimport os, time\n\nt0 = time.time()\npid = os.getpid()\ntime.sleep(delay)\nt1 = time.time()\n\nreturn [pid, t0, t1]\n\nIn\u00a0[17]:\n# generate random delay times for dummy tasks\ndelay_times = numpy.random.rand(4)\n\n\nNow, to map the function dummy_task to the random delay time data, we use the map method in dummy_task:\n\nIn\u00a0[18]:\ndummy_task.map(delay_times)\n\nOut[18]:\n[[30181, 1395044753.2096598, 1395044753.9150908],\n[30182, 1395044753.2084103, 1395044753.4959202],\n[30183, 1395044753.2113762, 1395044753.6453338],\n[30185, 1395044753.2130392, 1395044754.1905618]]\n\nLet's do the same thing again with many more tasks and visualize how these tasks are executed on different IPython engines:\n\nIn\u00a0[19]:\ndef visualize_tasks(results):\nres = numpy.array(results)\nfig, ax = plt.subplots(figsize=(10, res.shape[1]))\n\nyticks = []\nyticklabels = []\ntmin = min(res[:,1])\nfor n, pid in enumerate(numpy.unique(res[:,0])):\nyticks.append(n)\nyticklabels.append(\"%d\" % pid)\nfor m in numpy.where(res[:,0] == pid)[0]:\nres[m,2] - res[m,1], 0.5, color=\"green\", alpha=0.5))\n\nax.set_ylim(-.5, n+.5)\nax.set_xlim(0, max(res[:,2]) - tmin + 0.)\nax.set_yticks(yticks)\nax.set_yticklabels(yticklabels)\nax.set_ylabel(\"PID\")\nax.set_xlabel(\"seconds\")\n\nIn\u00a0[20]:\ndelay_times = numpy.random.rand(64)\n\nIn\u00a0[21]:\nresult = dummy_task.map(delay_times)\n\n\nThat's a nice and easy parallelization! 
We can see that we utilize all four engines quite well.\n\nBut one short coming so far is that the tasks are not load balanced, so one engine might be idle while others still have more tasks to work on.\n\nHowever, the IPython parallel environment provides a number of alternative \"views\" of the engine cluster, and there is a view that provides load balancing as well (above we have used the \"direct view\", which is why we called it \"dview\").\n\nTo obtain a load balanced view we simply use the load_balanced_view method in the engine cluster client instance cli:\n\nIn\u00a0[22]:\nlbview = cli.load_balanced_view()\n\nIn\u00a0[23]:\n@lbview.parallel(block=True)\n\"\"\" a dummy task that takes 'delay' seconds to finish \"\"\"\nimport os, time\n\nt0 = time.time()\npid = os.getpid()\ntime.sleep(delay)\nt1 = time.time()\n\nreturn [pid, t0, t1]\n\nIn\u00a0[24]:\nresult = dummy_task_load_balanced.map(delay_times)\n\n\nIn the example above we can see that the engine cluster is a bit more efficiently used, and the time to completion is shorter than in the previous example.\n\nThere are many other ways to use the IPython parallel environment. The official documentation has a nice guide:\n\nMPI\u00b6\n\nWhen more communication between processes is required, sophisticated solutions such as MPI and OpenMP are often needed. 
MPI is a process-based parallel processing library/protocol, which can be used in Python programs through the mpi4py package:

http://mpi4py.scipy.org/

To use the mpi4py package we include MPI from mpi4py:

from mpi4py import MPI

An MPI Python program must be started using the mpirun -n N command, where N is the number of processes that should be included in the process group.

Note that the IPython parallel environment also has support for MPI, but to begin with we will use mpi4py and mpirun in the following examples.

Example 1

In [25]:
%%file mpitest.py

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = [1.0, 2.0, 3.0, 4.0]
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)

print "rank =", rank, ", data =", data

Overwriting mpitest.py

In [26]:
!mpirun -n 2 python mpitest.py

rank = 0 , data = [1.0, 2.0, 3.0, 4.0]
rank = 1 , data = [1.0, 2.0, 3.0, 4.0]

Example 2

Send a numpy array from one process to another:

In [27]:
%%file mpi-numpy-array.py

from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = numpy.random.rand(10)
    comm.Send(data, dest=1, tag=13)
elif rank == 1:
    data = numpy.empty(10, dtype=numpy.float64)
    comm.Recv(data, source=0, tag=13)

print "rank =", rank, ", data =", data

Overwriting mpi-numpy-array.py

In [28]:
!mpirun -n 2 python mpi-numpy-array.py

rank = 0 , data = [ 0.71397658  0.37182268  0.25863587  0.08007216  0.50832534  0.80038331
  0.90613024  0.99535428  0.11717776  0.48353805]
rank = 1 , data = [ 0.71397658  0.37182268  0.25863587  0.08007216  0.50832534  0.80038331
  0.90613024  0.99535428  0.11717776  0.48353805]

Example 3: Matrix-vector multiplication

In [29]:
# prepare some random data
N = 16
A = numpy.random.rand(N, N)
numpy.save("random-matrix.npy", A)
x = numpy.random.rand(N)
numpy.save("random-vector.npy", x)

In [30]:
%%file mpi-matrix-vector.py

from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
p = comm.Get_size()

def matvec(comm, A, x):
    m = A.shape[0] / p
    y_part = numpy.dot(A[rank * m:(rank+1)*m], x)
    y = numpy.zeros_like(x)
    comm.Allgather([y_part, MPI.DOUBLE], [y, MPI.DOUBLE])
    return y

A = numpy.load("random-matrix.npy")
x = numpy.load("random-vector.npy")
y_mpi = matvec(comm, A, x)

if rank == 0:
    y = numpy.dot(A, x)
    print(y_mpi)
    print "sum(y - y_mpi) =", (y - y_mpi).sum()

Overwriting mpi-matrix-vector.py

In [31]:
!mpirun -n 4 python mpi-matrix-vector.py

[ 6.40342716  3.62421625  3.42334637  3.99854639  4.95852419  6.13378754
  5.33319708  5.42803442  5.12403754  4.87891654  2.38660728  6.72030412
  4.05218475  3.37415974  3.90903001  5.82330226]
sum(y - y_mpi) = 0.0

Example 4: Sum of the elements in a vector

In [32]:
# prepare some random data
N = 128
a = numpy.random.rand(N)
numpy.save("random-vector.npy", a)

In [33]:
%%file mpi-psum.py

from mpi4py import MPI
import numpy as np

def psum(a):
    r = MPI.COMM_WORLD.Get_rank()
    size = MPI.COMM_WORLD.Get_size()
    m = len(a) / size
    locsum = np.sum(a[r*m:(r+1)*m])
    rcvBuf = np.array(0.0, 'd')
    MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE], [rcvBuf, MPI.DOUBLE], op=MPI.SUM)
    return rcvBuf

a = np.load("random-vector.npy")
s = psum(a)

if MPI.COMM_WORLD.Get_rank() == 0:
    print "sum =", s, ", numpy sum =", a.sum()

Overwriting mpi-psum.py

In [34]:
!mpirun -n 4 python mpi-psum.py

sum = 64.948311241 , numpy sum = 64.948311241

OpenMP

What about OpenMP? OpenMP is a standard and widely used thread-based parallel API that unfortunately is not directly useful in Python. The reason is that the CPython implementation uses a global interpreter lock, making it impossible to simultaneously run several Python threads.
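The row decomposition used in mpi-matrix-vector.py can be sanity-checked without MPI by looping over the would-be ranks in a single process: each "rank" computes its block of rows, and concatenating the blocks plays the role of Allgather. This is only a sketch of the data decomposition (like the script above, it assumes the number of processes divides N):

```python
import numpy

def matvec_blocked(A, x, p):
    # emulate p MPI ranks: rank r computes rows r*m to (r+1)*m - 1
    m = A.shape[0] // p                     # block size, assumes p divides N
    parts = [numpy.dot(A[r * m:(r + 1) * m], x) for r in range(p)]
    return numpy.concatenate(parts)         # stands in for MPI Allgather

N = 16
A = numpy.random.rand(N, N)
x = numpy.random.rand(N)

err = numpy.abs(matvec_blocked(A, x, 4) - numpy.dot(A, x)).max()
print("max deviation =", err)
```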
Threads are therefore not useful for parallel computing in Python, unless they are only used to wrap compiled code that does the OpenMP parallelization (numpy can do something like that).

This is clearly a limitation in the Python interpreter, and as a consequence all parallelization in Python must use processes (not threads).

However, there is a way around this that is not that painful. When calling out to compiled code the GIL is released, and it is possible to write Python-like code in Cython where we can selectively release the GIL and do OpenMP computations.

In [35]:
N_core = multiprocessing.cpu_count()

print("This system has %d cores" % N_core)

This system has 12 cores

Here is a simple example that shows how OpenMP can be used via Cython:

In [36]:
%load_ext cythonmagic

In [37]:
%%cython -f -c-fopenmp --link-args=-fopenmp -c-g

cimport cython
cimport numpy
from cython.parallel import prange, parallel
cimport openmp

def cy_openmp_test():

    cdef int n, N

    # release GIL so that we can use OpenMP
    with nogil, parallel():
        N = openmp.omp_get_num_threads()
        n = openmp.omp_get_thread_num()
        with gil:
            print("Number of threads %d: thread number %d" % (N, n))

In [38]:
cy_openmp_test()

Number of threads 12: thread number 0

Example: matrix vector multiplication

In [39]:
# prepare some random data
N = 4 * N_core

M = numpy.random.rand(N, N)
x = numpy.random.rand(N)
y = numpy.zeros_like(x)

Let's first look at a simple implementation of matrix-vector multiplication in Cython:

In [40]:
%%cython

cimport cython
cimport numpy
import numpy

@cython.boundscheck(False)
@cython.wraparound(False)
def cy_matvec(numpy.ndarray[numpy.float64_t, ndim=2] M,
              numpy.ndarray[numpy.float64_t, ndim=1] x,
              numpy.ndarray[numpy.float64_t, ndim=1] y):

    cdef int i, j, n = len(x)

    for i from 0 <= i < n:
        for j from 0 <= j < n:
            y[i] += M[i, j] * x[j]

    return y

In [41]:
# check that we get the same results
y = numpy.zeros_like(x)
cy_matvec(M, x, y)
numpy.dot(M, x) - y

Out[41]:
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])

In [42]:
%timeit numpy.dot(M, x)

100000 loops, best of 3: 2.93 µs per loop

In [43]:
%timeit cy_matvec(M, x, y)

100000 loops, best of 3: 5.4 µs per loop

The Cython implementation here is a bit slower than numpy.dot, but not by much, so if we can use multiple cores with OpenMP it should be possible to beat the performance of numpy.dot.

In [44]:
%%cython -f -c-fopenmp --link-args=-fopenmp -c-g

cimport cython
cimport numpy
from cython.parallel import parallel
cimport openmp

@cython.boundscheck(False)
@cython.wraparound(False)
def cy_matvec_omp(numpy.ndarray[numpy.float64_t, ndim=2] M,
                  numpy.ndarray[numpy.float64_t, ndim=1] x,
                  numpy.ndarray[numpy.float64_t, ndim=1] y):

    cdef int i, j, n = len(x), N, r, m

    # release GIL, so that we can use OpenMP
    with nogil, parallel():
        N = openmp.omp_get_num_threads()
        r = openmp.omp_get_thread_num()
        m = n / N

        for i from 0 <= i < m:
            for j from 0 <= j < n:
                y[r * m + i] += M[r * m + i, j] * x[j]

    return y

In [45]:
# check that we get the same results
y = numpy.zeros_like(x)
cy_matvec_omp(M, x, y)
numpy.dot(M, x) - y

Out[45]:
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])

In [46]:
%timeit numpy.dot(M, x)

100000 loops, best of 3: 2.95 µs per loop

In [47]:
%timeit cy_matvec_omp(M, x, y)

1000 loops, best of 3: 209 µs per loop

Now, this implementation is much slower than numpy.dot for this problem size, because of overhead associated with OpenMP and threading, etc.
But let's look at how the different implementations compare with larger matrix sizes:

In [48]:
N_vec = numpy.arange(25, 2000, 25) * N_core

In [49]:
duration_ref = numpy.zeros(len(N_vec))
duration_cy = numpy.zeros(len(N_vec))
duration_cy_omp = numpy.zeros(len(N_vec))

for idx, N in enumerate(N_vec):

    M = numpy.random.rand(N, N)
    x = numpy.random.rand(N)
    y = numpy.zeros_like(x)

    t0 = time.time()
    numpy.dot(M, x)
    duration_ref[idx] = time.time() - t0

    t0 = time.time()
    cy_matvec(M, x, y)
    duration_cy[idx] = time.time() - t0

    t0 = time.time()
    cy_matvec_omp(M, x, y)
    duration_cy_omp[idx] = time.time() - t0

In [50]:
fig, ax = plt.subplots(figsize=(12, 6))

ax.loglog(N_vec, duration_ref, label='numpy')
ax.loglog(N_vec, duration_cy, label='cython')
ax.loglog(N_vec, duration_cy_omp, label='cython+openmp')

ax.legend(loc=2)
ax.set_yscale("log")
ax.set_ylabel("matrix-vector multiplication duration")
ax.set_xlabel("matrix size");

For large problem sizes the cython+OpenMP implementation is faster than numpy.dot.

With this simple implementation, the speedup for large problem sizes is about:

In [51]:
((duration_ref / duration_cy_omp)[-10:]).mean()

Out[51]:
3.0072232987815148

Obviously one could do a better job with more effort, since the theoretical limit of the speed-up is:

In [52]:
N_core

Out[52]:
12

OpenCL

OpenCL is an API for heterogeneous computing, for example using GPUs for numerical computations. There is a Python package called pyopencl that allows OpenCL code to be compiled, loaded and executed on the compute units completely from within Python.
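A 3x speed-up on 12 cores corresponds to a parallel efficiency of only 25%, which quantifies how far the simple OpenMP kernel is from the theoretical limit. A small helper (hypothetical, not part of the benchmark above) makes the relation explicit:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_cores):
    # speed-up = serial time / parallel time; efficiency = speed-up / cores
    s = t_serial / t_parallel
    return s, s / n_cores

s, e = speedup_and_efficiency(3.0, 1.0, 12)
print("speed-up = %.1f, efficiency = %.2f" % (s, e))
```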
This is a nice way to work with OpenCL, because the time-consuming computations should be done on the compute units in compiled code, and here Python only serves as a control language.

In [53]:
%%file opencl-dense-mv.py

import pyopencl as cl
import numpy
import time

# problem size
n = 10000

# platform
platform_list = cl.get_platforms()
platform = platform_list[0]

# device
device_list = platform.get_devices()
device = device_list[0]

if False:
    print("Platform name:" + platform.name)
    print("Platform version:" + platform.version)
    print("Device name:" + device.name)
    print("Device type:" + cl.device_type.to_string(device.type))
    print("Device memory: " + str(device.global_mem_size//1024//1024) + ' MB')
    print("Device max clock speed:" + str(device.max_clock_frequency) + ' MHz')
    print("Device compute units:" + str(device.max_compute_units))

# context
ctx = cl.Context([device]) # or we can use cl.create_some_context()

# command queue
queue = cl.CommandQueue(ctx)

# kernel
KERNEL_CODE = """
//
// Matrix-vector multiplication: r = m * v
//
#define N %(mat_size)d
__kernel
void dmv_cl(__global float *m, __global float *v, __global float *r)
{
    int i, gid = get_global_id(0);

    r[gid] = 0;
    for (i = 0; i < N; i++)
    {
        r[gid] += m[gid * N + i] * v[i];
    }
}
"""

kernel_params = {"mat_size": n}
program = cl.Program(ctx, KERNEL_CODE % kernel_params).build()

# data
A = numpy.random.rand(n, n)
x = numpy.random.rand(n, 1)

# host buffers
h_y = numpy.empty(numpy.shape(x)).astype(numpy.float32)
h_A = numpy.real(A).astype(numpy.float32)
h_x = numpy.real(x).astype(numpy.float32)

# device buffers
mf = cl.mem_flags
d_A_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_A)
d_x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=h_x)
d_y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, size=h_y.nbytes)

# execute OpenCL code
t0 = time.time()
event = program.dmv_cl(queue, h_y.shape, None, d_A_buf, d_x_buf, d_y_buf)
event.wait()
cl.enqueue_copy(queue, h_y, d_y_buf)
t1 = time.time()

print "opencl elapsed time =", (t1-t0)

# Same calculation with numpy
t0 = time.time()
y = numpy.dot(h_A, h_x)
t1 = time.time()

print "numpy elapsed time =", (t1-t0)

# see if the results are the same
print "max deviation =", numpy.abs(y-h_y).max()

Overwriting opencl-dense-mv.py

In [54]:
!python opencl-dense-mv.py

/usr/local/lib/python2.7/dist-packages/pyopencl-2012.1-py2.7-linux-x86_64.egg/pyopencl/__init__.py:36: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
  "to see more.", CompilerWarning)
opencl elapsed time = 0.0188570022583
numpy elapsed time = 0.0755031108856
max deviation = 0.0136719

Versions

In [55]:
%load_ext version_information
%version_information numpy, mpi4py, Cython

Out[55]:
Software   Version
Python     3.3.2+ (default, Oct 9 2013, 14:50:09) [GCC 4.8.1]
IPython    2.0.0-b1
OS         posix [linux]
numpy      1.9.0.dev-d4c7c3a
mpi4py     1.3.1
Cython     0.20.post0
Mon Mar 17 17:32:10 2014 JST
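Because the matrix is passed to an OpenCL kernel like the one above as a flat float buffer, the indexing m[gid * N + i] is easy to get wrong. A plain-numpy emulation of the per-work-item logic (one hypothetical gid per output element) can be used to check the indexing against numpy.dot before running on a device:

```python
import numpy

def dmv_emulated(m_flat, v, n):
    # mimic the kernel body: r[gid] = sum_i m[gid * n + i] * v[i]
    r = numpy.zeros(n, dtype=numpy.float32)
    for gid in range(n):
        for i in range(n):
            r[gid] += m_flat[gid * n + i] * v[i]
    return r

n = 8
A = numpy.random.rand(n, n).astype(numpy.float32)
v = numpy.random.rand(n).astype(numpy.float32)

dev = numpy.abs(dmv_emulated(A.ravel(), v, n) - A.dot(v)).max()
print("max deviation =", dev)   # only float32 rounding error remains
```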
\"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-09\/segments\/1518891811655.65\/warc\/CC-MAIN-20180218042652-20180218062652-00184.warc.gz\"}"}
| null | null |
Optimal discrimination designs for semiparametric models

Summary

Much work on optimal discrimination designs assumes that the models of interest are fully specified, apart from unknown parameters. Recent work allows errors in the models to be nonnormally distributed but still requires the specification of the mean structures. Otsu (2008) proposed optimal discriminating designs for semiparametric models by generalizing the Kullback–Leibler optimality criterion proposed by López-Fidalgo et al. (2007). This paper develops a relatively simple strategy for finding an optimal discrimination design. We also formulate equivalence theorems to confirm optimality of a design and derive relations between optimal designs found here for discriminating semiparametric models and those commonly used in optimal discrimination design problems.

1. Introduction

Optimal discrimination design problems have recently appeared in cognitive science (Cavagnaro et al., 2010), psychology (Myung & Pitt, 2009) and chemical engineering (Alberton et al., 2011). A main motivation for such research is that in a scientific study, we often do not know the true underlying model that drives the responses, but experts may have a number of candidate models that they believe should be adequate for studying the process. An informed and well-constructed design provides valuable information, so constructing an optimal design to find the most appropriate model among a few plausible models is important. In applications, the optimal discrimination design provides guidance on how data should be collected efficiently to infer the most plausible model before other inferential procedures are employed to attain the study objectives using the identified model.
Our work concerns the first part of such an approach, where the goal is to determine the most appropriate design to discriminate between the models. The statistical theory for studying optimal discrimination designs dates back to the 1970s. An early reference is Atkinson & Fedorov (1975a,b), who proposed $$T$$-optimal designs to discriminate between models when errors are normally distributed. $$T$$-optimality assumes a known null model and we wish to test whether a rival parametric model with unknown parameters holds. When models are all parametric, the likelihood ratio test is typically used to discriminate between the models. The noncentrality parameter of the chi-squared distribution of the test statistic contains the unknown parameters from the alternative model and is proportional to the $$T$$-optimality criterion (Atkinson & Fedorov, 1975a; Wiens, 2009). Since a larger noncentrality parameter provides a more powerful test, $$T$$-optimal designs maximize the minimum value of the noncentrality parameter, where the minimum is taken over all possible values of the parameters in the alternative model. The $$T$$-optimality criterion is not differentiable and finding optimal discrimination designs under the maximin design criterion can be challenging even when relatively simple models are involved; see, for example, Dette et al. (2012, 2017b). Constructing efficient algorithms for finding $$T$$-optimal designs is likewise difficult in general, despite recent progress (Braess & Dette, 2013; Dette et al., 2015, 2017a; Aletti et al., 2016; Tommasi et al., 2016). Recent advances in tackling discrimination design problems include the following. The frequently criticized unrealistic assumption in the $$T$$-optimality criterion that requires a known model in the null hypothesis is now removed (Jamsen et al., 2013) and the class of models of interest now includes generalized linear models (Waterhouse et al., 2008). 
Methodologies are also available for finding a variety of optimal discriminating designs for multivariate dynamic models (Ucinski & Bogacka, 2005), Bayesian optimal designs for model discrimination (Felsenstein, 1992; Tommasi & López-Fidalgo, 2010; Dette et al., 2015), dual-objective optimal discrimination designs (Ng & Chick, 2004; Atkinson, 2008; Alberton et al., 2011; Abd El-Monsef & Seyam, 2011), optimal designs that discriminate between models with correlated errors (Campos-Barreiro & López-Fidalgo, 2016) and adaptive designs for model discrimination (Myung & Pitt, 2009). References that describe alternative approaches and properties of optimal discrimination designs include López-Fidalgo et al. (2007), Dette & Titoff (2009) and Dette et al. (2015). All references cited so far require a parametric conditional distribution of the response. This raises the question as to whether $$T$$-optimal discrimination designs are robust with respect to misspecification of this distribution. Some answers are provided by Wiens (2009), Ghosh & Dutta (2013) and Dette et al. (2013). Otsu (2008) proposed a new optimality criterion for discriminating between models, which is similar in spirit to the classical $$T$$-optimality criterion and its extensions but does not require an exact specification of the conditional distribution. Optimal discrimination designs were found using the duality relationships in entropy-like minimization problems (Borwein & Lewis, 1991) and the resulting optimal designs are called semiparametric optimal discrimination designs.

2. Semiparametric discrimination designs

Following Kiefer (1974), we focus on approximate designs, which are probability measures defined on a user-selected design space $$\mathcal{X}$$.
If an approximate design has $$k$$ support points $$x_1,\ldots, x_k$$ with corresponding weights $$\omega_1,\ldots, \omega_k$$ and the total number of observations allowed for the study is $$n$$, then approximately $$n \omega_i$$ observations are taken at $$x_i$$. In practice, each $$n \omega_i$$ is rounded to an integer $$n_i$$ so that $$n_i$$ observations are taken at $$x_i$$, subject to $$\sum_{i=1}^k n_i = n$$. Let $$Y$$ be the continuous response variable and let $$x$$ denote a vector of explanatory variables defined on a given compact design space $$\mathcal{X}$$. Suppose the density of $$Y$$ with respect to the Lebesgue measure is $$f(y;x)$$ and we want to construct efficient designs for discriminating between two competing models. López-Fidalgo et al. (2007) assumed that there are two parametric densities, say $$f_j(y;x,\theta_j)$$, where the parameter $$\theta_j$$ varies in a compact parameter space $$\Theta_j\ (j=1,2)$$. To fix ideas, we ignore nuisance parameters which may be present in the models. The Kullback–Leibler divergence measures the discrepancy between the two densities and is given by

$$I_{1,2}(x, f_1, f_2, \theta_1, \theta_2) = \int f_1(y; x, \theta_1) \log \frac{f_1(y; x, \theta_1)}{f_2(y; x, \theta_2)}\,{\rm d}y. \quad (1)$$

López-Fidalgo et al. (2007) assumed that the model $$f_1$$ is the true model with a fixed parameter vector $$\bar\theta_1$$ and call a design a local Kullback–Leibler-optimal discriminating design for the models $$f_1$$ and $$f_2$$ if it maximizes the criterion

$${\rm KL}_{1,2}(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} I_{1,2}(x, f_1, f_2, \bar\theta_1, \theta_2)\,\xi({\rm d}x) \quad (2)$$

over all designs on the design space $$\mathcal{X}$$.
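As a numerical illustration of (1), not taken from the paper: for two unit-variance normal densities with means mu1 and mu2, the Kullback–Leibler divergence has the closed form (mu1 - mu2)^2 / 2, which a direct discretization of the integral in (1) reproduces (grid and means below are illustrative choices):

```python
import numpy as np

def normal_pdf(y, mu):
    # unit-variance normal density
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2.0 * np.pi)

def kl_numeric(mu1, mu2, grid):
    # Riemann-sum approximation of the integral in (1) on a uniform grid
    f1 = normal_pdf(grid, mu1)
    f2 = normal_pdf(grid, mu2)
    return np.sum(f1 * np.log(f1 / f2)) * (grid[1] - grid[0])

grid = np.linspace(-10.0, 10.0, 20001)
mu1, mu2 = 0.0, 0.7
kl = kl_numeric(mu1, mu2, grid)
print(kl, (mu1 - mu2) ** 2 / 2)
```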
Such a Kullback–Leibler-optimal design maximizes the power of the likelihood ratio test for the hypothesis

$$H_0: f(y;x) = f_2(y;x,\theta_2) \quad \mbox{versus} \quad H_1: f(y;x) = f_1(y; x, \bar\theta_1)$$

in the worst-case scenario when $$\theta_2 \in \Theta_2$$ (López-Fidalgo et al., 2007, p. 233). Otsu (2008) proposed a design criterion for discriminating between a parametric model defined by its density and another semiparametric model. The set-up is more general than that in López-Fidalgo et al. (2007), who assumed that $$f_1$$ and $$f_2$$ are known and one of the parametric models is fully specified. Specifically, suppose that the conditional mean of the density $$f_j(y;x,\theta_j)$$ is

$$\eta_j(x, \theta_j) = \int y f_j(y;x,\theta_j)\,{\rm d}y \quad (j=1,2)$$

and its support set is

$$\mathcal{S}_{f_j, \theta_j, x} = \big\{ y : f_j(y;x,\theta_j) > 0 \big\} \quad (j=1,2). \quad (3)$$

Further, let $$f_1(y;x, \bar\theta_1)$$ be a parametric density with a fixed parameter $$\bar\theta_1$$. Define

$$\mathcal{F}_{2,x,\theta_2} = \left\{ f_2 : \int f_2(y;x,\theta_2)\,{\rm d}y = 1, \ \int y f_2(y;x,\theta_2)\,{\rm d}y = \eta_2(x,\theta_2), \ \mathcal{S}_{f_2, \theta_2, x} = \mathcal{S}_{f_1, \bar\theta_1, x} \right\}\!,$$

which is the class of all conditional densities at the point $$x$$ with parameter $$\theta_2$$ and conditional mean $$\eta_2(x,\theta_2)$$.
Consider the set obtained from $$\mathcal{F}_{2,x,\theta_2}$$ by letting $$x$$ and $$\theta_2$$ vary over all their possible values, i.e.,

$$\mathcal{F}_2 = \bigcup_{x \in \mathcal{X}} \bigcup_{\theta_2 \in \Theta_2} \mathcal{F}_{2,x,\theta_2},$$

and call a design $$\xi^*$$ semiparametric optimal for discriminating between the model $$f_1(y;x, \bar\theta_1)$$ and models in the class $$\mathcal{F}_2$$ if it maximizes

$$K_1(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \inf_{f_2 \in \mathcal{F}_{2,x,\theta_2}} I_{1,2}(x, f_1, f_2, \bar\theta_1, \theta_2)\,\xi({\rm d}x) \quad (4)$$

among all approximate designs on $$\mathcal{X}$$. This is a local optimality criterion in the sense of Chernoff (1953), as it depends on the parameter $$\bar\theta_1$$. Another possibility is to fix the family of conditional densities for $$f_2(y;x,\theta_2)$$, where the form of $$f_2$$ is known apart from the values of $$\theta_2$$. Define

$$\mathcal{F}_{1,x,\bar\theta_1} = \left\{ f_1 : \int f_1(y;x,\bar\theta_1)\,{\rm d}y = 1, \ \int y f_1(y;x,\bar\theta_1)\,{\rm d}y = \eta_1(x,\bar\theta_1), \ \mathcal{S}_{f_1, \bar\theta_1, x} = \mathcal{S}_{f_2, \theta_2, x} \right\}\!,$$

which is the class of all conditional densities with parameter $$\bar\theta_1$$ and conditional mean $$\eta_1(x,\bar\theta_1)$$.
For fixed $$\bar\theta_1$$, let

$$\mathcal{F}_1 = \bigcup_{x \in \mathcal{X}} \mathcal{F}_{1,x,\bar\theta_1}$$

and call a design $$\xi^*$$ locally semiparametric optimal for discriminating between the family of models $$f_2(y;x, \theta_2)$$ and the class $$\mathcal{F}_1$$ if it maximizes

$$K_2(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \inf_{f_1 \in \mathcal{F}_{1,x,\bar\theta_1}} I_{1,2}(x, f_1, f_2, \bar\theta_1, \theta_2)\,\xi({\rm d}x) \quad (5)$$

among all approximate designs on $$\mathcal{X}$$. In the following discussion we refer to designs that maximize the criteria $$K_1$$ and $$K_2$$ as semiparametric Kullback–Leibler-optimal discriminating designs of type 1 and type 2, respectively. We assume, for the sake of simplicity, that $$f_1(y;x,\theta_1)$$, $$f_2(y;x,\theta_2)$$, $$\eta_1(x,\theta_1)$$ and $$\eta_2(x,\theta_2)$$ are differentiable with respect to $$y$$, $$x$$, $$\theta_1$$ and $$\theta_2$$, though these assumptions could be relaxed if necessary. In Theorem 3.1 of his paper, Otsu (2008) derived explicit forms for the two criteria.
For criterion (4), he obtained

$$K_1(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \left( \mu + 1 + \int \log\left[ -\mu - \lambda\{y - \eta_2(x,\theta_2)\} \right] f_1(y;x,\bar\theta_1)\,{\rm d}y \right) \xi({\rm d}x), \quad (6)$$

where the constants $$\lambda$$ and $$\mu$$, which depend on $$x$$, $$\bar\theta_1$$ and $$\theta_2$$, are roots of the system of equations

$$- \int \frac{f_1(y;x,\bar\theta_1)}{\mu + \lambda\{y - \eta_2(x,\theta_2)\}}\,{\rm d}y = 1, \quad \int \frac{\{y - \eta_2(x,\theta_2)\} f_1(y;x,\bar\theta_1)}{\mu + \lambda\{y - \eta_2(x,\theta_2)\}}\,{\rm d}y = 0 \quad (7)$$

that satisfy the constraint $$\mu + \lambda\{y - \eta_2(x,\theta_2)\} < 0$$ for all $$y \in \mathcal{S}_{f_1, \bar\theta_1, x}$$. A similar result can be obtained for criterion (5) (Otsu, 2008, Theorem 3.2). Below we simplify Otsu's approach, show that the inner optimization problems in (4) and (5) can be reduced to solving a single equation, and derive simpler expressions for criteria (4) and (5) that facilitate the computation of the semiparametric optimal discriminating designs.

Theorem 1. (i) Assume that for each $$x \in \mathcal{X}$$ the support of the conditional density $$f_1(y;x,\bar\theta_1)$$ is an interval, i.e., $$\mathcal{S}_{f_1, \bar\theta_1, x} = [y_{x,\min}, y_{x,\max}]$$, such that $$y_{x,\min} < \eta_2(x, \theta_2) < y_{x,\max}$$ for all $$\theta_2 \in \Theta_2$$.
Assume further that for all $$x \in \mathcal{X}$$ and for all $$\theta_2 \in \Theta_2$$, the equation

$$\int \frac{f_1(y;x, \bar\theta_1)}{1 + \lambda\{y - \eta_2(x,\theta_2)\}}\,{\rm d}y = 1 \quad (8)$$

has a unique nonzero root $$\overline{\lambda}(x, \bar\theta_1, \theta_2)$$ that satisfies

$$-\frac{1}{y_{x,\max} - \eta_2(x,\theta_2)} < \overline{\lambda}(x, \bar\theta_1, \theta_2) < -\frac{1}{y_{x,\min} - \eta_2(x,\theta_2)}. \quad (9)$$

Criterion (4) then takes the form

$$K_1(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \int f_1(y;x, \bar\theta_1) \log\frac{f_1(y;x, \bar\theta_1)}{f_2^*(y;x,\theta_2)}\,{\rm d}y\,\xi({\rm d}x) \quad (10)$$

and the optimal density $$f_2^*$$ in (4) is

$$f_2^*(y;x, \theta_2) = \frac{f_1(y;x, \bar\theta_1)}{1 + \overline{\lambda}(x, \bar\theta_1, \theta_2)\{y - \eta_2(x,\theta_2)\}}. \quad (11)$$

(ii) Assume that the integrals

$$\int f_2(y;x, \theta_2) \exp(-\lambda y)\,{\rm d}y, \quad \int y f_2(y;x, \theta_2) \exp(-\lambda y)\,{\rm d}y$$

exist for all $$x \in \mathcal{X}$$ and for all $$\lambda$$.
Criterion (5) takes the form

$$K_2(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \int f_1^*(y;x, \bar\theta_1) \log\frac{f_1^*(y;x, \bar\theta_1)}{f_2(y;x,\theta_2)}\,{\rm d}y\,\xi({\rm d}x) \quad (12)$$

and the optimal density $$f_1^*$$ in (5) is given by

$$f_1^*(y;x, \bar\theta_1) = \frac{f_2(y;x,\theta_2) \exp\{-\overline{\lambda}(x, \bar\theta_1, \theta_2) y\}}{\int f_2(y;x,\theta_2) \exp\{-\overline{\lambda}(x, \bar\theta_1, \theta_2) y\}\,{\rm d}y}, \quad (13)$$

where $$\overline{\lambda}_x = \overline{\lambda}(x, \bar\theta_1, \theta_2)$$ is the nonzero root of the equation

$$\frac{\int y f_2(y;x, \theta_2) \exp(-\lambda y)\,{\rm d}y}{\int f_2(y;x, \theta_2) \exp(-\lambda y)\,{\rm d}y} = \eta_1(x, \bar\theta_1). \quad (14)$$

The main implication of Theorem 1 is that we first solve equations (8) and (14) numerically for $$\lambda$$. As this has to be done for several values of $$\theta_2$$ it is quite demanding, though not as computationally expensive as finding the solution of the two equations in (7) in Otsu's approach. For solving (8), it is natural to assume that $$\lambda < 0$$ if $$\eta_1(x,\bar\theta_1) < \eta_2(x,\theta_2)$$, because if $$y \in \mathcal{S}_{f_1, \bar\theta_1, x}$$, the function $$1/[1+\lambda\{y - \eta_2(x,\theta_2)\}]$$ is increasing and so allows us to shift the average of the function $$f_1(y;x,\bar\theta_1)/[1+\lambda\{y - \eta_2(x,\theta_2)\}]$$ to the right. Similarly, if $$\eta_1(x,\bar\theta_1) > \eta_2(x,\theta_2)$$, we search for $$\lambda > 0$$. The following lemma formalizes this consideration; its proof and all other proofs are deferred to the final section.

Lemma 1.
Assume that $$v_2^2(x,\theta_2) = \int \{y - \eta_2(x,\theta_2)\}^2 f_2(y;x, \theta_2)\,{\rm d}y$$ exists and is positive. If $$\overline{\lambda}$$ solves (8) and satisfies (9), then $$\overline{\lambda}$$ has the same sign as the difference $$\eta_1(x,\theta_1) - \eta_2(x,\theta_2)$$.

Example 1. Let $$f_1(y;x,\bar\theta_1)$$ be the truncated normal density $$\mathcal{N}\{\eta_1(x,\bar\theta_1), 1\}$$ on the interval $$[-3+\eta_1(x,\bar\theta_1), 3+\eta_1(x,\bar\theta_1)]$$. This density is a function of $$\eta_1(x,\bar\theta_1)$$ and it follows from (11) that the optimal density $$f_2^*(y;x,\theta_2)$$ is a function of $$\eta_1(x,\bar\theta_1)$$ and $$\eta_2(x,\theta_2)$$. Figure 1 displays the function $$f_2^*$$ for $$\eta_1(x,\bar\theta_1) \equiv 0$$ and different values of $$\eta_2(x,\theta_2)$$ on the interval $$[-3,3]$$.

Fig. 1. Density $$f_1$$ (solid line) and the solution $$f_2^*$$ in (11) (dotted line), where $$f_1$$ is the truncated standard normal distribution on the interval $$[-3,3]$$ and $$\eta_1(x, \bar\theta_1) = 0$$: (a) $$\eta_2(x, \theta_2) = 0.5$$ ($$\bar\lambda = -0.395$$); (b) $$\eta_2(x, \theta_2) = 0.4$$ ($$\bar\lambda = -0.3522$$); (c) $$\eta_2(x, \theta_2) = 0.3$$ ($$\bar\lambda = -0.2841$$).
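As an illustration of how the root of equation (8) can be computed in the setting of Example 1 (truncated standard normal $$f_1$$ on $$[-3,3]$$ with $$\eta_1 = 0$$), here is a minimal bisection sketch with a plain Riemann-sum integral; the grid resolution and bracket endpoints are ad hoc choices, with the bracket placed on the negative side of zero in accordance with Lemma 1 and inside the constraint (9):

```python
import numpy as np

y = np.linspace(-3.0, 3.0, 200001)
dy = y[1] - y[0]
f1 = np.exp(-0.5 * y ** 2)
f1 /= f1.sum() * dy              # truncated standard normal on [-3, 3]

def g(lam, eta2):
    # left-hand side of equation (8) minus 1
    return np.sum(f1 / (1.0 + lam * (y - eta2))) * dy - 1.0

def solve_lambda(eta2, lo, hi, tol=1e-10):
    # bisection; requires g(lo) and g(hi) to have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo, eta2) * g(mid, eta2) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# eta2 = 0.4: constraint (9) gives lambda > -1/(3 - 0.4), and Lemma 1
# says the root is negative because eta1 = 0 < eta2
lam = solve_lambda(0.4, lo=-0.38, hi=-0.05)
print(lam)   # Example 1 reports lambda of about -0.3522 for eta2 = 0.4
```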
The main difference between our approach and that of Otsu (2008) is that we provide an easier and quicker way to compute the quantity
$$\inf_{f_2 \in \mathcal{F}_{2, x,\theta_2}} I_{1,2}(x,f_1,f_2,\bar\theta_1,\theta_2). \quad (15)$$
This difference has very important implications for the numerical calculation of the semiparametric discrimination designs. To be precise, the result in Otsu (2008) requires us to solve the two nonlinear equations in (7) numerically for all design points $$x$$ involved in the determination of the optimal design maximizing criterion (5) and all parameter values $$\theta_2 \in \Theta_2$$ involved in the minimization of the simplified version (6) derived by Otsu (2008). From a numerical viewpoint this problem is unstable and very challenging, because the solution depends sensitively on the initial point chosen for the iterative procedure that solves (7). In contrast, Theorem 1 reduces the problem to the solution of a single nonlinear equation, which can be found, for example, by a bisection or golden-ratio search. The numerical instability also becomes apparent in the numerical study in § 5, where we tried to compare the two methods in three examples. There we implemented Newton's method to find the solution of the system of two equations in (7) required by Otsu's method. We observed that for many values of the explanatory variable $$x$$, the function in (15) could not be computed because Newton's method did not converge to a solution of system (7) satisfying the condition $$\mu + \lambda \left\{y - \eta_2(x, \theta_2)\right\} < 0$$. This problem was observed even when we used a starting point very close to the solution determined by the new method proposed in this paper. As a consequence, in many examples the semiparametric optimal discrimination design could not be determined by the algorithm of Otsu (2008).
Moreover, in the cases where Otsu's method was able to solve the two nonlinear equations in (7), our method is still, on average, about two times faster; see Example 4.

3. Equivalence theorems

Equivalence theorems are useful because they confirm optimality of a design among all designs on the given design space $$\mathcal{X}$$. These tools exist if the criterion is a convex or concave function over the set of all approximate designs on $$\mathcal{X}$$, and their derivations are discussed in design monographs (Silvey, 1980; Pukelsheim, 2006). The next theorem states the equivalence results for the semiparametric Kullback–Leibler-optimal discriminating designs.

Theorem 2. Suppose that the conditions of Theorem 1 hold and the infimum in (4) and (5) is attained at a unique point $$\theta_2^* \in \Theta_2$$ for the optimal design $$\xi^*$$.

(a) A design $$\xi^*$$ is a semiparametric Kullback–Leibler-optimal discriminating design of type 1 if and only if
$$I_{1,2}(x,f_1,f_2^*,\bar\theta_1,\theta_2^*) - \int_\mathcal{X} I_{1,2}(x,f_1,f_2^*,\bar\theta_1,\theta_2^*)\,\xi^*({\rm d}x) \leqslant 0, \quad x \in \mathcal{X}, \quad (16)$$
with equality at the support points of $$\xi^*$$. Here $$I_{1,2}(x,f_1,f_2,\bar\theta_1,\theta_2)$$ is defined in (1),
$$\theta_2^* = \mathop{\rm arg\,inf}_{\theta_2 \in \Theta_2} \int_\mathcal{X} I_{1,2}(x,f_1,f_2^*,\bar\theta_1,\theta_2)\,\xi^*({\rm d}x), \quad f_2^*(y;x,\theta_2) = \frac{f_1(y;x,\bar\theta_1)}{1+\overline\lambda\left\{y-\eta_2(x,\theta_2)\right\}},$$
and $$\overline\lambda$$ is found from (8).
(b) A design $$\xi^*$$ is a semiparametric Kullback–Leibler-optimal discriminating design of type 2 if and only if
$$I_{1,2}(x,f_1^*,f_2,\bar\theta_1,\theta_2^*) - \int_{\mathcal{X}} I_{1,2}(x,f_1^*,f_2,\bar\theta_1,\theta_2^*)\,\xi^*({\rm d}x) \leqslant 0, \quad x \in \mathcal{X}, \quad (17)$$
with equality at the support points of $$\xi^*$$. Here
$$\theta_2^* = \mathop{\rm arg\,inf}_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} I_{1,2}(x,f_1^*,f_2,\bar\theta_1,\theta_2)\,\xi^*({\rm d}x), \quad f_1^*(y;x,\bar\theta_1) = \frac{f_2(y;x,\theta_2) \exp(-\overline\lambda y)}{\int f_2(y;x,\theta_2) \exp(-\overline\lambda y)\,{\rm d}y},$$
and $$\overline\lambda$$ is found from (14).

Theorem 2 is a direct consequence of the equivalence theorem for Kullback–Leibler-optimal designs from López-Fidalgo et al. (2007). Part (a) states that $$K_{1}(\xi,\bar\theta_1)$$ is the Kullback–Leibler criterion for discrimination between $$f_1(y;x,\bar\theta_1)$$ and $$f_2^*(y;x,\theta_2)$$ defined in (11). Part (b) states that $$K_{2}(\xi,\bar\theta_1)$$ is the Kullback–Leibler criterion for discrimination between $$f_1^*(y;x,\bar\theta_1)$$ defined in (13) and $$f_2(y;x,\theta_2)$$. Following the convention in the case where all models are parametric, we call the function on the left-hand side of (16) or (17) the sensitivity function of the design under investigation. Clearly, different design criteria lead to different sensitivity functions for the same design.
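In practice the check implied by Theorem 2 is mechanical: evaluate the sensitivity function on a fine grid over $$\mathcal{X}$$, verify that it is nonpositive, and verify that it vanishes at the support points of the candidate design. A generic sketch of such a checker, using a toy sensitivity function of our own rather than one derived from an actual model pair:

```python
import numpy as np

def check_optimality(psi, support, grid, tol=1e-6):
    """Equivalence-theorem check: psi must be <= 0 on the design space
    and equal to 0 at the support points of the candidate design."""
    return np.max(psi(grid)) <= tol and np.all(np.abs(psi(support)) <= tol)

# Toy sensitivity function attaining its maximum value 0 at x = -1 and x = 1.
psi = lambda x: -(np.asarray(x)**2 - 1.0)**2
grid = np.linspace(-2.0, 2.0, 4001)

print(check_optimality(psi, np.array([-1.0, 1.0]), grid))  # True
print(check_optimality(psi, np.array([0.0, 1.0]), grid))   # False: psi(0) = -1 < 0
```

A design supported where the sensitivity function fails to reach zero, like the second candidate above, is not optimal under the given criterion.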
The usefulness of the equivalence theorem is that if the sensitivity function of a design does not satisfy the conditions required in the equivalence theorem, then the design is not optimal under the given criterion. Figure 2 illustrates these sensitivity plots.

Fig. 2. Plots of the sensitivity functions of the following discrimination designs from Table 1: (a) $$T$$-optimal, (b) Kullback–Leibler-optimal, (c) semiparametric Kullback–Leibler-optimal of type 1 and (d) semiparametric Kullback–Leibler-optimal of type 2.

4. Connections with the $$T$$-optimality criterion

We now show that under homoscedastic symmetrically distributed errors, the semiparametric optimal design for discriminating between the model $$f_1(y;x, \bar\theta_1)$$ and the class $$\mathcal{F}_2$$ coincides with the $$T$$-optimal design proposed by Atkinson & Fedorov (1975a). We first recall the classical set-up for finding an optimal design to discriminate between two models, where we assume that the mean functions in the models are known and the parameters in the null model are fixed at, say, $$\bar\theta_1$$. When errors in both models are normally distributed, a $$T$$-optimal discrimination design $$\xi_T^*$$ maximizes the criterion
$$\inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \{\eta_1(x,\bar\theta_1) - \eta_2(x,\theta_2)\}^2\, \xi({\rm d}x) \quad (18)$$
among all designs on $$\mathcal{X}$$ (Atkinson & Fedorov, 1975a). Throughout this section, we assume that the infimum in (18) is attained at a unique point $$\theta_2^*$$ when $$\xi = \xi^*_T$$.
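Evaluating criterion (18) for a fixed design $$\xi$$ is a nested problem: an inner minimization over $$\theta_2$$ of a weighted sum of squared mean differences. The sketch below illustrates this with hypothetical mean functions of our own choosing, $$\eta_1(x) = e^{-x}$$ and $$\eta_2(x,\theta_2) = \theta_{2,1} + \theta_{2,2}x$$, not the models used in the paper; since this $$\eta_2$$ is linear in $$\theta_2$$, the inner infimum is a weighted least-squares fit, which gives a closed-form check.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical mean functions, for illustration only.
eta1 = lambda x: np.exp(-x)              # fixed null model
eta2 = lambda x, th: th[0] + th[1] * x   # rival model, linear in theta_2

x = np.array([0.1, 1.0, 5.0])   # support points of a candidate design xi
w = np.array([0.4, 0.3, 0.3])   # corresponding weights

def weighted_ss(th):
    # Integrand of criterion (18) summed against the design measure xi.
    return np.sum(w * (eta1(x) - eta2(x, th))**2)

# Inner infimum over theta_2 in criterion (18), by general-purpose optimization.
res = minimize(weighted_ss, x0=np.zeros(2))

# Closed-form check: weighted least squares for the linear rival model.
A = np.column_stack([np.ones_like(x), x])
th_ls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * eta1(x)))
print(res.fun, weighted_ss(th_ls))  # both evaluate the infimum; they agree here
```

For nonlinear rival models such as those in § 5 no closed form exists, and the inner infimum must be computed numerically at every candidate design.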
Using arguments like those in Wiens (2009), it can be shown that the power of the likelihood ratio test for the hypotheses
$$H_0: \eta(x) = \eta_2(x,\theta_2) \quad \mbox{versus} \quad H_1: \eta(x) = \eta_1(x,\bar\theta_1) \quad (19)$$
is an increasing function of the quantity in (18). Our next result gives a sufficient condition for the $$T$$-optimal discriminating design to be a semiparametric optimal design in the sense of § 2.

Theorem 3. Suppose that the assumptions of Theorem 1(i) hold and $$f_1(y;x,\bar\theta_1)$$ satisfies
$$f_1(y;x,\bar\theta_1) = g\{y - \eta_1(x,\bar\theta_1)\},$$
where $$g$$ is a symmetric density function supported on the interval $$[-a,a]$$; that is, $$f_1$$ has support $$[-a+\eta_1(x,\bar\theta_1), a + \eta_1(x,\bar\theta_1)]$$. Then the $$T$$-optimal discriminating design maximizing criterion (18) is a semiparametric Kullback–Leibler-optimal discriminating design of type 1.

A similar result is available for the semiparametric Kullback–Leibler-optimal discriminating designs of type 2. Suppose that $$f_2(y;x,\theta_2)$$ and $$f_1(y;x,\bar\theta_1)$$ are the normal distributions $$\mathcal{N}\{\eta_2(x,\theta_2), v^2_2(x,\theta_2)\}$$ and $$\mathcal{N}\{\eta_1(x,\bar\theta_1), v^2_2(x,\theta_2)\}$$, respectively. It can be shown that the power of the likelihood ratio test for hypotheses (19) is an increasing function of
$${\rm KL}_{1,2}(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \frac{\{\eta_1(x,\bar\theta_1) - \eta_2(x,\theta_2)\}^2}{v^2_2(x,\theta_2)}\, \xi({\rm d}x), \quad (20)$$
which coincides with the Kullback–Leibler criterion defined in (2). The next result shows that the design maximizing (20) is also a semiparametric Kullback–Leibler-optimal discriminating design of type 2.

Theorem 4.
Suppose that $$f_2(y;x,\theta_2)$$ is a normal density with mean $$\eta_2(x,\theta_2)$$ and variance $$v_2^2(x,\theta_2)$$. Then the best approximation $$f_1^*(y;x,\bar\theta_1)$$ is a normal density with mean $$\eta_1(x,\bar\theta_1)$$ and variance $$v_2^2(x,\theta_2)$$, and the optimal design maximizing (20) is a semiparametric Kullback–Leibler-optimal discriminating design of type 2, and vice versa.

5. Numerical results

We now illustrate the new techniques for finding semiparametric optimal designs using three examples. From § 2, the first step is to solve equations (8) and (14) efficiently. In the second step, any numerical method that determines Kullback–Leibler-optimal discrimination designs can be adapted to solve the minimax problems obtained from Theorem 1, because the representations (10) and (12) have the same structure as the Kullback–Leibler optimality criteria considered in López-Fidalgo et al. (2007). The second step poses a very challenging problem, and some recent results and algorithms for Kullback–Leibler optimality criteria can be found in Stegmaier et al. (2013), Braess & Dette (2013), Dette et al. (2015) and Dette et al. (2017a). Below we focus on the first step because our aim is to find new semiparametric designs. In the second step, we use an adaptation of the first-order algorithm of Atkinson & Fedorov (1975a), which is not the most efficient algorithm but is very easy to implement. Let $$\delta$$ be a user-selected positive constant.
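Each of equations (8) and (14) is a one-dimensional root-finding problem over a bracketing interval. As a concrete sketch for (14), with illustrative choices of our own rather than settings from the paper, take $$f_2$$ to be a normal density with mean $$\eta_2 = 0.5$$ truncated to $$[\eta_2-4, \eta_2+4]$$ and $$\eta_1 = 0$$; the exponential tilt $$e^{-\lambda y}$$ is applied to $$f_2$$ until the tilted mean equals $$\eta_1$$, and since $$\eta_1 < \eta_2$$ the root is positive:

```python
# Sketch: solve equation (14) by bracketed root-finding.
# Illustrative settings (not from the paper): f2 is N(0.5, 1) truncated
# to [-3.5, 4.5], so eta_2 = 0.5; the fixed mean of model 1 is eta_1 = 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

eta1, eta2 = 0.0, 0.5
lo, hi = eta2 - 4.0, eta2 + 4.0

def f2_kernel(y):  # unnormalized: the truncation constant cancels in the ratio
    return np.exp(-0.5 * (y - eta2)**2)

def tilted_mean(lam):  # left-hand side of equation (14)
    num = quad(lambda y: y * f2_kernel(y) * np.exp(-lam * y), lo, hi)[0]
    den = quad(lambda y: f2_kernel(y) * np.exp(-lam * y), lo, hi)[0]
    return num / den

# eta_1 < eta_2, so search for a positive root in [delta, beta],
# here with delta = 1e-6 and beta = 10.
lam_bar = brentq(lambda lam: tilted_mean(lam) - eta1, 1e-6, 10.0)
```

For an untruncated $$\mathcal{N}(\eta_2,1)$$ density the tilted mean is exactly $$\eta_2 - \lambda$$, so the root here is close to $$0.5$$; the mild truncation moves it only slightly.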
By Lemma 1 and inequality (9), we solve equation (8) in the following regions: if $$\eta_1(x,\bar\theta_1) = \eta_2(x,\theta_2)$$, set $$\lambda = 0$$; if $$\eta_1(x,\bar\theta_1) < \eta_2(x,\theta_2)$$, choose a solution in the interval $$\Lambda^- = [-1/\{y_{x,\max}-\eta_2(x,\theta_2)\}, -\delta]$$; if $$\eta_1(x,\bar\theta_1) > \eta_2(x,\theta_2)$$, choose a solution in the interval $$\Lambda^+ = [\delta, -1/\{y_{x,\min}-\eta_2(x,\theta_2)\}]$$. Similarly, the solution of (14) can be obtained as follows. We search for $$\lambda > 0$$ if $$\eta_1(x,\bar\theta_1) < \eta_2(x,\theta_2)$$, so that $$\lambda$$ shifts the predefined density $$f_2(y;x,\theta_2)$$ to the left, and search for $$\lambda < 0$$ if $$\eta_1(x,\bar\theta_1) > \eta_2(x,\theta_2)$$. If $$\delta$$ is a small enough positive constant and $$\beta$$ is a user-selected large positive constant, we can assume that the solution of (14) lies in $$[-\beta,\beta]$$. We therefore search for the numerical solution of equation (14) in the following regions: if $$\eta_1(x,\bar\theta_1) = \eta_2(x,\theta_2)$$, set $$\lambda = 0$$; if $$\eta_1(x,\bar\theta_1) < \eta_2(x,\theta_2)$$, choose a solution in the interval $$\Lambda^+ = [\delta, \beta]$$; if $$\eta_1(x,\bar\theta_1) > \eta_2(x,\theta_2)$$, choose a solution in the interval $$\Lambda^- = [-\beta,-\delta]$$.

We now present two examples, where the $$T$$-optimal and semiparametric Kullback–Leibler-optimal designs are determined numerically and are different.

Example 2. Consider the optimal design problem from López-Fidalgo et al.
(2007), where they wanted to discriminate between the two models
$$\eta_1(x,\theta_1) = \theta_{1,1} x + \frac{\theta_{1,2} x}{x + \theta_{1,3}}, \quad \eta_2(x,\theta_2) = \frac{\theta_{2,1} x}{x + \theta_{2,2}}. \quad (21)$$
The design space for both models is the interval $$[0.1,5]$$ and we assume that the first model has fixed parameters $$\bar\theta_1 = (1,1,1)$$. We construct four different types of optimal discrimination design for this problem: a $$T$$-optimal design; a Kullback–Leibler-optimal design for lognormal errors, with fixed variances $$v^2_1(x,\bar\theta_1) = v^2_2(x,\theta_2) = 0.1$$; a semiparametric Kullback–Leibler-optimal discriminating design of type 1 for a mildly truncated lognormal density $$f_1(y;x,\bar\theta_1)$$ with location $$\mu_1(x,\bar\theta_1)$$ and scale $$\sigma^2_1(x,\bar\theta_1)$$; and a semiparametric Kullback–Leibler-optimal discriminating design of type 2 for a mildly truncated lognormal density $$f_2(y;x,\theta_2)$$ with location $$\mu_2(x,\theta_2)$$ and scale $$\sigma^2_2(x,\theta_2)$$, where
$$\mu_i(x,\theta) = \log \eta_i(x,\theta) - \frac{1}{2} \sigma^2_i(x,\theta) \quad \text{and} \quad \sigma^2_i(x,\theta) = \log\left\{ 1+v^2_i(x,\theta)/\eta_i^2(x,\theta) \right\} \quad (i=1,2).$$
The ranges for these densities are the intervals from $$Q_1(0.0001,x,\bar\theta_1)$$ to $$Q_1(0.9999,x,\bar\theta_1)$$ and from $$Q_2(0.0001,x,\theta_2)$$ to $$Q_2(0.9999,x,\theta_2)$$, respectively, where $$Q_i(p,x,\theta)$$ is the quantile function of the ordinary lognormal density with mean $$\eta_i(x,\theta)$$ and variance $$v^2_i(x,\theta) = 0.1$$.
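The location and scale formulas above simply invert the lognormal moments: a lognormal with parameters $$(\mu, \sigma^2)$$ has mean $$\exp(\mu + \sigma^2/2)$$ and variance $$\{\exp(\sigma^2)-1\}\exp(2\mu+\sigma^2)$$. A quick check of the inversion, with an arbitrary illustrative mean:

```python
import math

def lognormal_params(eta, v2):
    """Location mu and scale sigma^2 of a lognormal with mean eta and
    variance v2, inverting the moment formulas of Example 2."""
    sigma2 = math.log(1.0 + v2 / eta**2)
    mu = math.log(eta) - 0.5 * sigma2
    return mu, sigma2

eta, v2 = 0.8, 0.1   # illustrative value of eta_i(x, theta); v_i^2 = 0.1
mu, s2 = lognormal_params(eta, v2)

mean = math.exp(mu + 0.5 * s2)                        # recovers eta
var = (math.exp(s2) - 1.0) * math.exp(2.0 * mu + s2)  # recovers v2
```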
We note that because of the mild truncation, $$\eta_1(x,\bar\theta_1)$$ and $$\eta_2(x,\theta_2)$$ are not exactly the means of the densities $$f_1(y;x,\bar\theta_1)$$ and $$f_2(y;x,\theta_2)$$, respectively, but are very close to them. Table 1 displays the optimal discrimination designs under the four different criteria, along with the optimal parameter $$\theta_2^*$$ of the second model corresponding to the minimum with respect to the parameter $$\theta_2$$. All four types of optimal discrimination design are different, with the smallest support point of the Kullback–Leibler-optimal design being noticeably different from those of the other three designs. The semiparametric Kullback–Leibler-optimal discriminating design of type 2 has nearly the same support as the $$T$$-optimal design. Figure 2 shows the sensitivity functions of the four optimal designs and confirms their optimality.

Table 1. Optimal discrimination designs for the two models in (21)

Design type          |       |       $$\xi^*$$        | $$\theta_2^*$$
$$T$$-optimal        | $$x$$ | 0.508   2.992   5.000  | (22.564, 14.637)
                     | $$w$$ | 0.580   0.298   0.122  |
$$KL$$-optimal       | $$x$$ | 0.218   2.859   5.000  | (21.112, 13.436)
                     | $$w$$ | 0.629   0.260   0.111  |
$$SKL_{1}$$-optimal  | $$x$$ | 0.454   2.961   5.000  | (22.045, 14.197)
                     | $$w$$ | 0.531   0.344   0.125  |
$$SKL_{2}$$-optimal  | $$x$$ | 0.509   2.994   5.000  | (22.824, 14.857)
                     | $$w$$ | 0.611   0.273   0.116  |

$$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Table 2 displays the four different types of efficiencies of the $$T$$-, Kullback–Leibler- and semiparametric Kullback–Leibler-optimal discriminating designs. Small changes in the design can have large effects, and the $$T$$- and Kullback–Leibler-optimal discrimination designs are not very robust under a variation of the criteria, where the Kullback–Leibler-optimal discrimination design has slight advantages. On the other hand, the semiparametric Kullback–Leibler-optimal discriminating design of type 1 yields moderate efficiencies, about $$75\%$$, with respect to the $$T$$- and Kullback–Leibler optimality criteria.

Table 2. Efficiencies of optimal discrimination designs for the two models in (21) under various optimality criteria.
For example, the value 0.321 in the first row is the efficiency of the Kullback–Leibler-optimal design with respect to the $$T$$-optimality criterion.

                       | $$T$$-optimal | $$KL$$-optimal | $$SKL_{1}$$-optimal | $$SKL_{2}$$-optimal
$$T$$-criterion        | 1.000         | 0.321          | 0.741               | 0.830
$$KL$$-criterion       | 0.739         | 1.000          | 0.796               | 0.650
$$SKL_{1}$$-criterion  | 0.552         | 0.544          | 1.000               | 0.454
$$SKL_{2}$$-criterion  | 0.876         | 0.254          | 0.633               | 1.000

$$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Example 3. Consider a similar problem with a function $$\eta_1(x,\theta_1)$$ taken from Wiens (2009). The two models of interest are
$$\eta_1(x,\theta_1) = \theta_{1,1} \big\{ 1 - \exp(-\theta_{1,2} x) \big\}, \quad \eta_2(x,\theta_2) = \frac{\theta_{2,1} x}{\theta_{2,2}+ x}, \quad (22)$$
where the design space is $$\mathcal{X} = [0.1,5]$$.
Here we fix the parameters of the first model in (22) at $$\bar\theta_1 = (1,1)$$ and determine the $$T$$-optimal, the Kullback–Leibler-optimal for lognormal errors, and the semiparametric Kullback–Leibler-optimal discriminating designs of type 1 and type 2 for mildly truncated lognormal errors. The error variances for the Kullback–Leibler-optimal discrimination design are $$v_1^2(x,\bar\theta_1) = v_2^2(x,\theta_2) = 0.02$$; for the semiparametric Kullback–Leibler-optimal discriminating design of type 1 the variance is $$v_1^2(x,\bar\theta_1) = 0.02$$, and for the design of type 2 the variance is $$v_2^2(x,\theta_2) = 0.02$$. Table 3 displays the various optimal designs, along with the minimizing parameter values $$\theta_2^*$$ of the second model sought in the criterion. The optimality of the numerically determined designs can be verified by plotting the corresponding sensitivity functions. We again observe substantial differences between the optimal discrimination designs with respect to the different criteria. A comparison of the efficiencies of the optimal designs with respect to the different criteria in Table 4 shows a picture similar to that in the first example. In particular, we note that for our two examples, the other optimal discrimination designs are especially sensitive to the Kullback–Leibler optimality criteria; in the first example their Kullback–Leibler efficiencies are at best $$54\%$$, and in the second example their Kullback–Leibler efficiencies are not higher than $$40\%$$. One reason may be that the smallest support point of the Kullback–Leibler-optimal design is noticeably smaller than the minimum support point of each of the other three optimal designs.

Table 3.
Optimal discrimination designs for the two models in (22)

Design type          |       |       $$\xi^*$$        | $$\theta_2^*$$
$$T$$-optimal        | $$x$$ | 0.308   2.044   5.000  | (1.223, 0.948)
                     | $$w$$ | 0.316   0.428   0.256  |
$$KL$$-optimal       | $$x$$ | 0.136   1.902   5.000  | (1.244, 1.020)
                     | $$w$$ | 0.297   0.457   0.252  |
$$SKL_{1}$$-optimal  | $$x$$ | 0.395   2.090   5.000  | (1.216, 0.920)
                     | $$w$$ | 0.396   0.355   0.249  |
$$SKL_{2}$$-optimal  | $$x$$ | 0.308   2.044   5.000  | (1.225, 0.956)
                     | $$w$$ | 0.289   0.458   0.253  |

$$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Table 4.
Efficiencies of optimal discrimination designs for the two models in (22) under various optimality criteria

                       | $$T$$-optimal | $$KL$$-optimal | $$SKL_{1}$$-optimal | $$SKL_{2}$$-optimal
$$T$$-criterion        | 1.000         | 0.266          | 0.663               | 0.858
$$KL$$-criterion       | 0.786         | 1.000          | 0.565               | 0.879
$$SKL_{1}$$-criterion  | 0.407         | 0.346          | 1.000               | 0.388
$$SKL_{2}$$-criterion  | 0.882         | 0.396          | 0.608               | 1.000

$$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Example 4. It is difficult to compare our approach with that of Otsu (2008) because of the computational difficulties described at the end of § 2. In particular, the latter algorithm is often not able to determine the semiparametric Kullback–Leibler-optimal discrimination design; for instance, in the situations considered in Examples 2 and 3 we were unable to obtain convergence. For a comparison of the speed of the methods, we therefore have to choose a relatively simple example for which a comparison of both methods is possible.
For this purpose we again considered models (22) and constructed the semiparametric Kullback–Leibler-optimal discriminating design of type 1. For the density $$f_1(y;x,\bar\theta_1)$$ we used the density of the random variable $$\eta_1(x,\theta_1) + (\varepsilon - m)$$, where the distribution of $$\varepsilon$$ is lognormal with location and scale parameters $$0$$ and $$1$$, respectively, truncated to the interval between the $$0.1\%$$ and $$90\%$$ quantiles. The constant $$m$$ is chosen such that $$E(\varepsilon - m) = 0$$. The semiparametric Kullback–Leibler-optimal discriminating design of type 1 is supported at $$0.308$$, $$2.044$$ and $$5.000$$ with weights $$0.323$$, $$0.415$$ and $$0.262$$, respectively. It has the same support as the $$T$$-optimal discrimination design, but the weights are different. It took about $$540$$ seconds for the approach proposed in this paper and about $$1230$$ seconds for Otsu's method to find the optimal design; in both cases we used an adaptation of the Atkinson–Fedorov algorithm in the search. So, even in this simple example, the computational differences are substantial.

6. Conclusions

Much of the present work on optimal design for discriminating between models assumes that the models are fully parametric. Our work allows the alternative models to be nonparametric, where only their mean functions have to be specified, apart from the parameter values. Our approach is simpler and more reliable than other approaches in the literature for tackling such challenging and more realistic optimal discrimination design problems. We expect potential applications of our work in systems biology, where frequently the underlying model generating the responses is unknown and very complex. In practice, the mean response is approximated in a few ways, and these approximations become the conditional means of nonparametric models that need to be efficiently discriminated to arrive at a plausible model.
The optimal design method presented here will save costs by helping biological researchers to efficiently determine an adequate mean model among several postulated ones. There are also rich opportunities for further methodological research. For example, an important problem is to relax the assumption that the set $$\mathcal{S}_{f_1, \bar\theta, x}$$ defined in (3) is fixed for each $$x$$, so that the method can be applied to a broader class of conditional densities.

Acknowledgement

We are grateful to the reviewers for their constructive comments on the first version of our paper. Dette and Guchenko were supported by the Deutsche Forschungsgemeinschaft. Dette and Wong were partially supported by the National Institute of General Medical Sciences of the U.S. National Institutes of Health. Melas and Guchenko were partially supported by St. Petersburg State University and the Russian Foundation for Basic Research.

Supplementary material

Supplementary material available at Biometrika online contains all proofs.

References

Abd El-Monsef, M. M. E. & Seyam, M. M. (2011). CDT-optimum designs for model discrimination, parameter estimation and estimation of a parametric function. J. Statist. Plan. Infer. 141, 639–43.

Alberton, A. L., Schwaab, M., Labao, M. W. N. & Pinto, J. C. (2011). Experimental design for the joint model discrimination and precise parameter estimation through information measures. Chem. Eng. Sci. 66, 1940–52.

Aletti, G., May, C. & Tommasi, C. (2016). KL-optimum designs: Theoretical properties and practical computation. Statist. Comp. 26, 107–17.

Atkinson, A. C. (2008). DT-optimum designs for model discrimination and parameter estimation. J. Statist. Plan. Infer. 138, 56–64.

Atkinson, A. C. & Fedorov, V. V. (1975a).
The designs of experiments for discriminating between two rival models. Biometrika 62, 57–70.

Atkinson, A. C. & Fedorov, V. V. (1975b). Optimal design: Experiments for discriminating between several models. Biometrika 62, 289–303.

Borwein, J. M. & Lewis, A. S. (1991). Duality relationships for entropy-like minimization problems. SIAM J. Contr. Optimiz. 29, 325–38.

Braess, D. & Dette, H. (2013). Optimal discriminating designs for several competing regression models. Ann. Statist. 41, 897–922.

Campos-Barreiro, S. & López-Fidalgo, J. (2016). KL-optimal experimental design for discriminating between two growth models applied to a beef farm. Math. Biosci. Eng. 13, 67–82.

Cavagnaro, D. R., Myung, J. I., Pitt, M. A. & Kujala, J. V. (2010). Adaptive design optimization: A mutual information-based approach to model discrimination in cognitive science. Neural Comp. 22, 887–905.

Chernoff, H. (1953). Locally optimal designs for estimating parameters. Ann. Math. Statist. 24, 586–602.

Dette, H., Guchenko, R. & Melas, V. B. (2017a). Efficient computation of Bayesian optimal discriminating designs. J. Comp. Graph. Statist. 26, 424–33.

Dette, H., Melas, V. B. & Guchenko, R. (2015). Bayesian T-optimal discriminating designs. Ann. Statist. 43, 1959–85.

Dette, H., Melas, V. B. & Shpilev, P. (2012). T-optimal designs for discrimination between two polynomial models. Ann. Statist. 40, 188–205.

Dette, H., Melas, V. B. & Shpilev, P. (2013). Robust T-optimal discriminating designs. Ann.
Statist. 41, 1693–715.

Dette, H., Melas, V. B. & Shpilev, P. (2017b). T-optimal discriminating designs for Fourier regression models. Comp. Statist. Data Anal. 113, 196–206.

Dette, H. & Titoff, S. (2009). Optimal discrimination designs. Ann. Statist. 37, 2056–82.

Felsenstein, K. (1992). Optimal Bayesian design for discrimination among rival models. Comp. Statist. Data Anal. 14, 427–36.

Ghosh, S. & Dutta, S. (2013). Robustness of designs for model discrimination. J. Mult. Anal. 115, 193–203.

Jamsen, K. M., Duffull, S. B., Tarning, J., Price, R. N. & Simpson, J. (2013). A robust design for identification of the parasite clearance estimator. Malaria J. 12, 410–6.

Kiefer, J. (1974). General equivalence theory for optimum designs (approximate theory). Ann. Statist. 2, 849–79.

López-Fidalgo, J., Tommasi, C. & Trandafir, P. C. (2007). An optimal experimental design criterion for discriminating between non-normal models. J. R. Statist. Soc. B 69, 231–42.

Myung, J. I. & Pitt, M. A. (2009). Optimal experimental design for model discrimination. Psychol. Rev. 116, 499–518.

Ng, S. H. & Chick, S. E. (2004). Design of follow-up experiments for improving model discrimination and parameter estimation. Naval Res. Logist. 2, 1–11.

Otsu, T. (2008). Optimal experimental design criterion for discriminating semi-parametric models. J. Statist. Plan. Infer. 138, 4141–50.

Pukelsheim, F. (2006). Optimal Design of Experiments.
Philadelphia: SIAM. Google Scholar CrossRef Search ADS \u00a0 Silvey S. ( 1980). Optimal Design\u00a0. London: Chapman & Hall. Google Scholar CrossRef Search ADS \u00a0 Stegmaier J. , Skanda D. & Lebiedz D. ( 2013). Robust optimal design of experiments for model discrimination using an interactive software tool. PLOS ONE\u00a0 8, e55723, https:\/\/doi.org\/10.1371\/journal.pone.0055723. Google Scholar CrossRef Search ADS PubMed\u00a0 Tommasi C. & L\u00f3pez-Fidalgo J. ( 2010). Bayesian optimum designs for discriminating between models with any distribution. Comp. Statist. Data Anal.\u00a0 54, 143\u2013 50. Google Scholar CrossRef Search ADS \u00a0 Tommasi C. , Martin-Martin R. & Lopez-Fidalgo J. ( 2016). Max-min optimal discriminating designs for several statistical models. Statist. Comp.\u00a0 26, 1163\u2013 72. Google Scholar CrossRef Search ADS \u00a0 Ucinski D. & Bogacka B. ( 2005). $$T$$-optimum designs for discrimination between two multiresponse dynamic models. J. R. Statist. Soc.\u00a0 67, 3\u2013 18. Google Scholar CrossRef Search ADS \u00a0 Waterhouse T. H. , Woods D. C., Eccleston J. A. & Lewis S. M. ( 2008). Design selection criteria for discrimination\/estimation for nested models and a binomial response. J. Statist. Plan. Infer.\u00a0 138, 132\u2013 44. Google Scholar CrossRef Search ADS \u00a0 Wiens D.P. ( 2009). Robust discrimination designs. J. R. Statist. Soc.\u00a0 71, 805\u2013 29. 
© 2017 Biometrika Trust

Optimal discrimination designs for semiparametric models
Biometrika, Volume 105 (1), Mar 1, 2018, 14 pages
Oxford University Press. ISSN 0006-3444; eISSN 1464-3510. DOI: 10.1093/biomet/asx058

Summary

Much work on optimal discrimination designs assumes that the models of interest are fully specified, apart from unknown parameters. Recent work allows errors in the models to be nonnormally distributed but still requires the specification of the mean structures. Otsu (2008) proposed optimal discriminating designs for semiparametric models by generalizing the Kullback–Leibler optimality criterion proposed by López-Fidalgo et al. (2007). This paper develops a relatively simple strategy for finding an optimal discrimination design. We also formulate equivalence theorems to confirm optimality of a design and derive relations between optimal designs found here for discriminating semiparametric models and those commonly used in optimal discrimination design problems.

1. Introduction

Optimal discrimination design problems have recently appeared in cognitive science (Covagnaro et al., 2010), psychology (Myung & Pitt, 2009) and chemical engineering (Alberton et al., 2011). A main motivation for such research is that in a scientific study, we often do not know the true underlying model that drives the responses but experts may have a number of candidate models that they believe should be adequate for studying the process. An informed and well-constructed design provides valuable information, so constructing an optimal design to find the most appropriate model among a few plausible models is important.
In applications, the optimal discrimination design provides guidance on how data should be collected efficiently to infer the most plausible model before other inferential procedures are employed to attain the study objectives using the identified model. Our work concerns the first part of such an approach, where the goal is to determine the most appropriate design to discriminate between the models. The statistical theory for studying optimal discrimination designs dates back to the 1970s. An early reference is Atkinson & Fedorov (1975a,b), who proposed $$T$$-optimal designs to discriminate between models when errors are normally distributed. $$T$$-optimality assumes a known null model and we wish to test whether a rival parametric model with unknown parameters holds. When models are all parametric, the likelihood ratio test is typically used to discriminate between the models. The noncentrality parameter of the chi-squared distribution of the test statistic contains the unknown parameters from the alternative model and is proportional to the $$T$$-optimality criterion (Atkinson & Fedorov, 1975a; Wiens, 2009). Since a larger noncentrality parameter provides a more powerful test, $$T$$-optimal designs maximize the minimum value of the noncentrality parameter, where the minimum is taken over all possible values of the parameters in the alternative model. The $$T$$-optimality criterion is not differentiable and finding optimal discrimination designs under the maximin design criterion can be challenging even when relatively simple models are involved; see, for example, Dette et al. (2012, 2017b). Constructing efficient algorithms for finding $$T$$-optimal designs is likewise difficult in general, despite recent progress (Braess & Dette, 2013; Dette et al., 2015, 2017a; Aletti et al., 2016; Tommasi et al., 2016). Recent advances in tackling discrimination design problems include the following. 
The frequently criticized unrealistic assumption in the $$T$$-optimality criterion that requires a known model in the null hypothesis is now removed (Jamsen et al., 2013) and the class of models of interest now includes generalized linear models (Waterhouse et al., 2008). Methodologies are also available for finding a variety of optimal discriminating designs for multivariate dynamic models (Ucinski & Bogacka, 2005), Bayesian optimal designs for model discrimination (Felsenstein, 1992; Tommasi & L\u00f3pez-Fidalgo, 2010; Dette et al., 2015), dual-objective optimal discrimination designs (Ng & Chick, 2004; Atkinson, 2008; Alberton et al., 2011; Abd El-Monsef & Seyam, 2011), optimal designs that discriminate between models with correlated errors (Campos-Barreiro & Lopez-Fidalgo, 2016) and adaptive designs for model discrimination (Myung & Pitt, 2009). References that describe alternative approaches and properties of optimal discrimination designs include L\u00f3pez-Fidalgo et al. (2007), Dette & Titoff (2009) and Dette et al. (2015). All references cited so far require a parametric conditional distribution of the response. This raises the question as to whether $$T$$-optimal discrimination designs are robust with respect to misspecification of this distribution. Some answers are provided by Wiens (2009), Ghosh & Dutta (2013) and Dette et al. (2013). Otsu (2008) proposed a new optimality criterion for discriminating between models, which is similar in spirit to the classical $$T$$-optimality criterion and its extensions but does not require an exact specification of the conditional distribution. Optimal discrimination designs were found using the duality relationships in entropy-like minimization problems (Borwein & Lewis, 1991) and the resulting optimal designs are called semiparametric optimal discrimination designs. 2. 
Semiparametric discrimination designs

Following Kiefer (1974), we focus on approximate designs, which are probability measures defined on a user-selected design space $$\mathcal{X}$$. If an approximate design has $$k$$ support points at $$x_1,\ldots, x_k$$ with corresponding weights $$\omega_1,\ldots, \omega_k$$ and the total number of observations allowed for the study is $$n$$, then approximately $$n \omega_i$$ observations are taken at $$x_i$$. In practice, each $$n \omega_i$$ is rounded to an integer $$n_i$$ so that $$n_i$$ observations are taken at $$x_i$$, subject to $$\sum_{i=1}^k n_i = n$$. Let $$Y$$ be the continuous response variable and let $$x$$ denote a vector of explanatory variables defined on the given compact design space $$\mathcal{X}$$. Suppose the density of $$Y$$ with respect to the Lebesgue measure is $$f(y;x)$$ and we want to construct efficient designs for discriminating between two competing models. López-Fidalgo et al. (2007) assumed that there are two parametric densities, say $$f_j(y;x,\theta_j)$$, where the parameter $$\theta_j$$ varies in a compact parameter space $$\Theta_j\ (j=1,2)$$. To fix ideas, we ignore nuisance parameters which may be present in the models. The Kullback–Leibler divergence measures the discrepancy between the two densities and is given by

$$I_{1,2} (x, f_1, f_2, \theta_1, \theta_2) = \int f_1 (y; x, \theta_1) \log \frac{f_1 (y; x, \theta_1)}{f_2 (y; x, \theta_2)}\,{\rm d}y\text{.}$$ (1)

López-Fidalgo et al.
(2007) assumed that the model $$f_1$$ is the true model with a fixed parameter vector $$\\bar\\theta_1$$ and call a design a local Kullback\u2013Leibler-optimal discriminating design for the models $$f_{1}$$ and $$f_{2}$$ if it maximizes the criterion \u00a0 $${\\rm KL}_{1,2} (\\xi, \\overline \\theta_{1}) = \\inf_{\\theta_{2}\\in \\Theta_2} \\int_{\\mathcal{X}} I_{1,2} (x, f_{1},f_{2} , {\\overline{\\theta}_{1}}, \\theta_{2})\\,\\xi ({\\rm d}x)$$ (2) over all designs on the design space $$\\mathcal{X}$$. Such a Kullback\u2013Leibler-optimal design maximizes the power of the likelihood ratio test for the hypothesis \u00a0 \\begin{equation*} H_0: f(y;x) = f_2(y;x,\\theta_2)\\,\\,\\mbox{versus}\\,\\,H_1: f(y;x) = f_1(y; x, \\bar\\theta_1) \\end{equation*} in the worst-case scenario when $$\\theta_2 \\in \\Theta_2$$ (L\u00f3pez-Fidalgo et al., 2007, p. 233). Otsu (2008) proposed a design criterion for discriminating between a parametric model defined by its density and another semiparametric model. The set-up is more general than that in L\u00f3pez-Fidalgo et al. (2007), who assumed that $$f_1$$ and $$f_2$$ are known and one of the parametric models is fully specified. Specifically, suppose that the conditional mean of the density $$f_j (y;x,\\theta_j)$$ is \u00a0 $$\\eta_j (x, \\theta_j) = \\int y f_j (y;x,\\theta_j) {\\rm d}y \\quad (j=1,2)$$ and its support set is \u00a0 $$\\mathcal{S}_{f_j, \\theta_j , x} = \\big\\{ y \\,:\\, f_j(y;x,\\theta_j) > 0 \\big\\} \\quad (j=1,2)\\text{.}$$ (3) Further, let $$f_1(y;x, \\bar\\theta_1)$$ be a parametric density with a fixed parameter $$\\bar\\theta_1$$. 
Define \u00a0 \\begin{align*} \\mathcal{F}_{2 ,x,\\theta_2} = \\left\\{f_2 : \\int f_2(y;x,\\theta_2)\\, {\\rm d}y = 1, \\; \\int y f_2(y;x,\\theta_2)\\,{\\rm d}y = \\eta_2(x,\\theta_2), \\,\\mathcal{S}_{f_2, \\theta_2 , x} = \\mathcal{S}_{f_1, \\overline{\\theta}_1 , x} \\right\\}\\!, \\end{align*} which is the class of all conditional densities at the point $$x$$ with parameter $$\\theta_2$$ and conditional mean $$\\eta_2(x,\\theta_2)$$. Consider the set obtained from $$\\mathcal{F}_{2 ,x,\\theta_2}$$ by letting the ranges of $$x$$ and $$\\theta_2$$ vary over all their possible values, i.e., \u00a0 \\begin{align*} \\mathcal{F}_{2 } = \\bigcup_{x \\in \\mathcal{X}} \\bigcup_{\\theta_2 \\in \\Theta_2} \\mathcal{F}_{2 ,x,\\theta_2}, \\end{align*} and call a design $$\\xi^*$$ semiparametric optimal for discriminating between the model $$f_1(y;x, \\bar\\theta_1)$$ and models in the class $$\\mathcal{F}_2$$ if it maximizes \u00a0 \\begin{align} K_{1}(\\xi,{\\bar\\theta_1}) = \\inf_{\\theta_2 \\in \\Theta_2} \\int_{\\mathcal{X}} \\inf_{f_2 \\in \\mathcal{F}_{2, x,\\theta_2} } I_{1,2} (x, f_1,f_2 , { \\overline{\\theta}_1}, \\theta_2) \\, \\xi({\\rm d}x) \\end{align} (4) among all approximate designs on $$\\mathcal{X}$$. This is a local optimality criterion in the sense of Chernoff (1953), as it depends on the parameter $$\\bar\\theta_1$$. Another possibility is to fix the family of conditional densities for $$f_2(y;x,\\theta_2)$$, where the form of $$f_2$$ is known apart from the values of $$\\theta_2$$. Define \u00a0 \\begin{align*} \\mathcal{F}_{1, x,\\bar\\theta_1} &= \\left\\{ f_1 : \\int f_1(y;x, \\bar\\theta_1)\\, {\\rm d}y = 1, \\int y f_1(y;x,{ \\bar\\theta_1})\\,{\\rm d}y = \\eta_1(x, \\bar\\theta_1), \\, \\mathcal{S}_{f_1, \\bar\\theta_1 , x} = \\mathcal{S}_{f_2, \\theta_2 , x} \\right\\}\\!, \\end{align*} which is the class of all conditional densities with parameter $$\\overline{\\theta}_1$$ and conditional mean $$\\eta_1(x,\\overline{\\theta}_1)$$. 
For fixed $$\\bar\\theta_1$$, let \u00a0 \\begin{align*} \\mathcal{F}_1 = \\bigcup_{x \\in \\mathcal{X} } \\mathcal{F}_{1 ,x,\\bar\\theta_1} \\end{align*} and call a design $$\\xi^*$$ locally semiparametric optimal for discriminating between the family of models $$f_2(y;x, \\theta_2)$$ and the class $$\\mathcal{F}_{1}$$ if it maximizes \u00a0 \\begin{align} K_{2}(\\xi, {\\bar\\theta_1}) = \\inf_{\\theta_2 \\in \\Theta_2} \\int_{X} \\inf_{f_1 \\in \\mathcal{F}_{1 ,x,\\bar\\theta_1}} I_{1,2} (x, f_1,f_2 , { \\bar\\theta_1}, \\theta_2) \\, \\xi({\\rm d}x) \\end{align} (5) among all approximate designs on $$\\mathcal{X}$$. In the following discussion we refer to designs that maximize the criteria $$K_{1}$$ and $$K_{2}$$ as semiparametric Kullback\u2013Leibler-optimal discriminating designs of type 1 and type 2, respectively. We assume, for the sake of simplicity, that $$f_1(y;x,\\theta_1)$$, $$f_2(y;x,\\theta_2)$$, $$\\eta_1(x,\\theta_1)$$$$\\eta_2(x,\\theta_2)$$ are differentiable with respect to $$y$$, $$x$$, $$\\theta_1$$ and $$\\theta_2$$, though these assumptions could be relaxed if necessary. In Theorem 3.1 of his paper, Otsu (2008) derived explicit forms for the two criteria. 
For criterion (4), he obtained

\begin{align} K_{1}(\xi, \bar\theta_1) = \inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \left( \mu + 1 + \int \log \left[ -\mu - \lambda \{y - \eta_2(x,\theta_2)\} \right] f_1(y;x, \bar\theta_1)\,{\rm d}y \right) \xi({\rm d}x), \end{align} (6)

where the constants $$\lambda$$ and $$\mu$$, which depend on $$x$$, $$\bar\theta_1$$ and $$\theta_2$$, are roots of the system of equations

\begin{align} - \int \frac{f_1(y;x, \bar\theta_1)}{\mu + \lambda \{y - \eta_2(x,\theta_2)\}}\,{\rm d}y = 1, \quad \int \frac{\{y - \eta_2(x,\theta_2)\}f_1(y;x, \bar\theta_1)}{\mu + \lambda \{y - \eta_2(x,\theta_2)\}}\,{\rm d}y = 0 \end{align} (7)

that satisfy the constraint $$\mu + \lambda \{y - \eta_2(x,\theta_2)\} < 0$$ for all $$y \in \mathcal{S}_{f_1, \bar\theta_1, x}$$. A similar result can be obtained for criterion (5) (Otsu, 2008, Theorem 3.2). Below we simplify Otsu's approach, show that the inner optimization problems in (4) and (5) can be reduced to solving a single equation, and derive simpler expressions for criteria (4) and (5) that facilitate the computation of the semiparametric optimal discriminating designs. Theorem 1. (i) Assume that for each $$x \in \mathcal{X}$$ the support of the conditional density $$f_1(y;x,\bar\theta_1)$$ is an interval, i.e., $$\mathcal{S}_{f_1, \bar\theta_1, x} = [y_{x,\min}, y_{x,\max}]$$, such that $$y_{x, \min} < \eta_2(x, \theta_2) < y_{x, \max}$$ for all $$\theta_2 \in \Theta_2$$.
Assume further that for all $$x \\in \\mathcal{X}$$ and for all $$\\theta_2 \\in \\Theta_2$$, the equation\u00a0 \\begin{align} \\int \\frac{f_1(y;x, \\bar\\theta_1)}{1 + \\lambda \\left\\{ y - \\eta_2(x,\\theta_2) \\right\\} }\\,{\\rm d}y = 1 \\end{align} (8)has a unique nonzero root $$\\overline{\\lambda}(x, \\bar\\theta_1, \\theta_2)$$ that satisfies\u00a0 \\begin{align} -\\frac{1}{y_{x, \\max} - \\eta_2{(x, \\theta_2)}} < \\overline{\\lambda}(x, \\bar\\theta_1, \\theta_2) < -\\frac{1}{y_{x, \\min} - \\eta_2 (x,\\theta_2)}\\text{.} \\end{align} (9)Criterion (4) then takes the form\u00a0 \\begin{align} K_{1}(\\xi, \\overline{\\theta}_1) & = \\inf_{\\theta_2 \\in \\Theta_2} \\int_\\mathcal{X} \\int f_1(y;x, \\bar\\theta_1) \\log\\frac{f_1 (y;x, \\bar\\theta_1)}{f_2^*(y;x,\\theta_{2})}\\,{\\rm d}y\\,\\xi({\\rm d}x) \\end{align} (10)and the optimal density $$f_2^*$$ in (4) is\u00a0 \\begin{align} f_2^*(y;x, \\theta_2) = \\frac{f_1(y;x, \\overline \\theta_1)}{1 + \\overline{\\lambda}(x, \\bar\\theta_1, \\theta_2) \\left\\{y - \\eta_2(x,\\theta_2)\\right\\}}\\text{.} \\end{align} (11) (ii) Assume that the integrals\u00a0 \\begin{align*} \\int f_2(y;x, \\theta_2) \\exp(-\\lambda y)\\,{\\rm d}y, \\quad \\,\\int y f_2(y;x, \\theta_2) \\exp(-\\lambda y)\\,{\\rm d}y \\end{align*}exist for all $$x \\in \\mathcal{X}$$ and for all $$\\lambda$$. 
Criterion (5) takes the form\u00a0 \\begin{align} K_{2}(\\xi, {\\bar\\theta_1}) = \\inf_{\\theta_2 \\in \\Theta_2} \\int_\\mathcal{X} \\int f_1^*(y;x, \\bar\\theta_1) \\log\\frac{f_1^*(y;x, \\bar\\theta_1)}{f_2(y;x,\\theta_{2})}\\,{\\rm d}y\\,\\xi({\\rm d}x) \\end{align} (12)and the optimal density $$f_1^*$$ in (5) is given by\u00a0 \\begin{align} f_1^*(y;x, \\bar\\theta_1) = \\frac{f_2(y;x,\\theta_2) \\exp\\left\\{-\\overline{\\lambda}(x, {\\bar\\theta_1}, \\theta_2) y \\right\\} }{\\int f_2(y;x,\\theta_2) \\exp\\left\\{-\\overline{\\lambda}(x, \\bar\\theta_1, \\theta_2) y \\right\\} {\\rm d}y}, \\end{align} (13)where $$\\overline{\\lambda}_x = \\overline{\\lambda}(x, \\bar\\theta_1, \\theta_2)$$ is the nonzero root of the equation\u00a0 \\begin{align} \\frac{\\int y f_2(y;x, \\theta_2) \\exp(-\\lambda y)\\,{\\rm d}y}{\\int f_2(y;x, \\theta_2) \\exp(-\\lambda y)\\,{\\rm d}y} = \\eta_1(x, \\bar\\theta_1)\\text{.} \\end{align} (14) The main implication of Theorem 1 is that we first solve equations (8) and (14) numerically for $$\\lambda$$. As this has to be done for several values of $$\\theta_2$$ it is quite demanding, though not so computationally expensive as finding the solution of the two equations in (7) for Otsu\u2019s approach. For solving (8), it is natural to assume that $$\\lambda < 0$$ if $$\\eta_1(x,\\bar\\theta_1) < \\eta_2(x,\\theta_2)$$, because if $$y \\in \\mathcal{S}_{f_1, \\bar\\theta_1, x}$$, the function $$1\/[1+\\lambda \\left\\{y - \\eta_2(x,\\theta_2)\\right\\}]$$ is increasing and so allows us to shift the average of the function $$f_1(y;x,\\bar\\theta_1)\/[1+\\lambda \\left\\{y - \\eta_2(x,\\theta_2)\\right\\}]$$ to the right. Similarly, if $$\\eta_1(x,\\bar\\theta_1) > \\eta_2(x,\\theta_2)$$, we search for $$\\lambda > 0$$. The following lemma formalizes this consideration and its proof, and all other proofs are deferred to the final section. Lemma 1. 
Assume that $$v_2^2(x,\theta_2) = \int \left\{ y - \eta_2(x,\theta_2) \right\}^2 f_2(y;x, \theta_2)\,{\rm d}y$$ exists and is positive. If $$\overline{\lambda}$$ solves (8) and satisfies (9), then $$\overline{\lambda}$$ has the same sign as the difference $$\eta_1(x,\bar\theta_1) - \eta_2(x,\theta_2)$$.

Example 1. Let $$f_1(y;x,\bar\theta_1)$$ be the truncated normal density $$\mathcal{N}\{\eta_1(x,\bar\theta_1), 1\}$$ on the interval $$[-3+\eta_1(x,\bar\theta_1), 3 + \eta_1(x,\bar\theta_1)]$$. This density is a function of $$\eta_1(x,\bar\theta_1)$$, and it follows from (11) that the optimal density $$f_2^*(y;x,\theta_2)$$ is a function of $$\eta_1(x, \overline{\theta}_1)$$ and $$\eta_2(x,\theta_2)$$. Figure 1 displays the function $$f_2^*$$ for $$\eta_1(x,\overline{\theta}_1) \equiv 0$$ and different values of $$\eta_2(x,\theta_2)$$ on the interval $$[-3,3]$$.

Fig. 1. Density $$f_1$$ (solid line) and the solution $$f_2^*$$ in (11) (dotted line), where $$f_1$$ is the truncated standard normal distribution on the interval $$[-3,3]$$ and $$\eta_1 (x, \bar \theta_1) = 0$$: (a) $$\eta_2 (x, \theta_2) = 0.5$$ ($$\bar \lambda = -0.395$$); (b) $$\eta_2 (x, \theta_2) = 0.4$$ ($$\bar \lambda = -0.3522$$); (c) $$\eta_2 (x, \theta_2) = 0.3$$ ($$\bar \lambda = -0.2841$$).
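To make the computation in Example 1 concrete: the nonzero root $$\overline{\lambda}$$ of equation (8) lies in the interval (9) and can be found by a simple bisection. The Python sketch below is our own illustration, not code from the paper; it assumes scipy is available and reproduces the case $$\eta_1(x,\bar\theta_1) = 0$$, $$\eta_2(x,\theta_2) = 0.5$$ of Fig. 1(a).

```python
# Solve equation (8) for the nonzero root lambda-bar by bisection and build the
# least favourable density f_2* of (11). Here f_1 is the truncated standard
# normal on [-3, 3] (so eta_1 = 0) and eta_2 = 0.5, as in Fig. 1(a); the
# tolerances and iteration count are illustrative choices.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

Y_MIN, Y_MAX = -3.0, 3.0
ETA2 = 0.5
Z = norm.cdf(Y_MAX) - norm.cdf(Y_MIN)   # truncation constant

def f1(y):
    """Truncated standard normal density on [Y_MIN, Y_MAX]."""
    return norm.pdf(y) / Z

def g(lam):
    """Left-hand side of equation (8) minus one."""
    val, _ = quad(lambda y: f1(y) / (1.0 + lam * (y - ETA2)), Y_MIN, Y_MAX, limit=200)
    return val - 1.0

# By (9) and Lemma 1 the root lies in (-1/(Y_MAX - ETA2), 0): g blows up at the
# left end of the bracket and is negative just to the left of zero.
lo, hi = -1.0 / (Y_MAX - ETA2) + 1e-6, -1e-3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
lam_bar = 0.5 * (lo + hi)               # approximately -0.395, as in Fig. 1(a)

def f2_star(y):
    """Least favourable density (11)."""
    return f1(y) / (1.0 + lam_bar * (y - ETA2))
```

Since $$\int f_1(y)\{1 + \lambda(y-\eta_2)\}^{-1}\{1 + \lambda(y-\eta_2)\}\,{\rm d}y = 1$$ identically, any nonzero root of (8) automatically gives $$\int (y-\eta_2) f_2^*(y)\,{\rm d}y = 0$$, so the mean constraint in $$\mathcal{F}_{2,x,\theta_2}$$ holds without solving a second equation; this is precisely the reduction from the system (7) to the single equation (8).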
The main difference between our approach and that of Otsu (2008) is that we provide an easier and quicker way to compute the quantity \u00a0 $$\\inf_{f_2 \\in \\mathcal{F}_{2, x,\\theta_2} } I_{1,2}(x,f_1,f_2,{\\bar\\theta_1},\\theta_2)\\text{.}$$ (15) This difference has very important implications for the numerical calculation of the semiparametric discrimination designs. To be precise, the result in Otsu (2008) requires us to solve the two nonlinear equations in (7) numerically for all design points $$x$$ involved in the determination of the optimal design maximizing criterion (5) and all parameter values $$\\theta_2 \\in \\Theta_2$$ involved in the minimization of the simplified version (6) derived by Otsu (2008). From a numerical viewpoint, it is very challenging to tackle this unstable problem because the solution depends sensitively on the specification of an initial point for the iterative procedure to solve (7). In contrast, Theorem 1 reduces the problem to the solution of one nonlinear equation, which can be found, for example, by a bisection search or a golden ratio search. The numerical instability becomes apparent also in the numerical study in \u00a7 5, where we tried to compare the two methods in three examples. There we implemented Newton\u2019s method to find the solution of the system of two equations in (7) required by Otsu\u2019s method. We observed that for many values of the explanatory variable $$x$$, the function in (15) could not be computed because the Newton method did not converge to the solution of system (7) that satisfies the condition $$\\mu + \\lambda \\left\\{y - \\eta_2(x, \\theta_2)\\right\\} < 0$$. Such a problem was even observed in cases where we used a starting point in the iteration which is very close to the solution determined by the new method proposed in this paper. As a consequence, in many examples the semiparametric optimal discrimination design could not be determined by the algorithm of Otsu (2008). 
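As an illustration of how cheap the one-dimensional problem is, the Python sketch below (our own example, assuming scipy) solves equation (14) with Brent's method when $$f_2$$ is a normal density with mean $$\eta_2 = 0.5$$, variance $$v_2^2 = 1$$ and target mean $$\eta_1 = 0$$. Tilting $$\mathcal{N}(\eta_2, v_2^2)$$ by $$\exp(-\lambda y)$$ gives another normal with mean $$\eta_2 - \lambda v_2^2$$, so the exact root $$\lambda = (\eta_2 - \eta_1)/v_2^2 = 0.5$$ is available for checking; its positive sign matches the search rule for (14) described in § 5.

```python
# Solve equation (14) for lambda-bar with Brent's method when f_2 is normal.
# Tilting N(eta_2, v^2) by exp(-lambda*y) yields N(eta_2 - lambda*v^2, v^2),
# so the exact root is (eta_2 - eta_1)/v^2; the numbers are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

ETA1, ETA2, V2 = 0.0, 0.5, 1.0
f2 = norm(loc=ETA2, scale=np.sqrt(V2)).pdf

def tilted_mean(lam):
    """Left-hand side of (14): mean of f_2 exponentially tilted by lam."""
    num, _ = quad(lambda y: y * f2(y) * np.exp(-lam * y), -20.0, 20.0)
    den, _ = quad(lambda y: f2(y) * np.exp(-lam * y), -20.0, 20.0)
    return num / den

# eta_1 < eta_2 here, so the root is positive: bracket it away from zero.
lam_bar = brentq(lambda lam: tilted_mean(lam) - ETA1, 1e-6, 10.0)
```

Here $$\overline{\lambda} = 0.5$$ up to solver tolerance, and the corresponding tilted density $$f_1^*$$ in (13) is the $$\mathcal{N}(\eta_1, v_2^2)$$ density, which is the content of Theorem 4 for the normal case.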
Moreover, we observe that in the cases where Otsu's method was able to determine the solution of the two nonlinear equations in (7), our method is still, on average, about two times faster; see Example 4.

3. Equivalence theorems

Equivalence theorems are useful because they confirm optimality of a design among all designs on the given design space $$\mathcal{X}$$. These tools exist if the criterion is a convex or concave function over the set of all approximate designs on $$\mathcal{X}$$, and their derivations are discussed in design monographs (Silvey, 1980; Pukelsheim, 2006). The next theorem states the equivalence results for the semiparametric Kullback–Leibler-optimal discriminating designs. Theorem 2. Suppose that the conditions of Theorem 1 hold and the infimum in (4) and (5) is attained at a unique point $$\theta_2^* \in \Theta_2$$ for the optimal design $$\xi^*$$. (a) A design $$\xi^*$$ is a semiparametric Kullback–Leibler-optimal discriminating design of type 1 if and only if

$$I_{1,2}(x,f_1,f_2^*, \bar\theta_1,\theta_2^*) - \int_{\mathcal{X}} I_{1,2}(x,f_1,f_2^*, \bar\theta_1,\theta_2^*) \, \xi^*({\rm d}x) \leqslant 0, \quad x \in \mathcal{X},$$ (16)

with equality at the support points of $$\xi^*$$. Here $$I_{1,2}(x,f_1,f_2, \bar\theta_1,\theta_2)$$ is defined in (1),

\begin{align*} \theta_2^* = \mathop{{\rm arg\,inf}}\limits_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} I_{1,2}(x,f_1,f_2^*, \bar\theta_1,\theta_2) \, \xi^*({\rm d}x), \quad f_2^*(y;x,\theta_2) = \frac{f_1(y;x, \bar\theta_1)}{1+\overline\lambda \left\{y-\eta_2(x,\theta_2) \right\}}, \end{align*}

and $$\overline\lambda$$ is found from (8).
Moreover, there is equality in (16) for all support points of $$\\xi^*$$.\u2002 $${\\rm (b)}$$ A design $$\\xi^*$$ is a semiparametric Kullback\u2013Leibler-optimal discriminating design of type $$2$$ if and only if\u00a0 $$I_{ 1,2}(x,f_1^*,f_2,{\\bar\\theta_1,\\theta_2^{*}}) - \\int_{\\mathcal{X}}I_{ 1,2}(x,f_1^*,f_2,{ \\bar\\theta_1,\\theta_2^{*}}) \\, \\xi^*({\\rm d}x) \\leqslant\\,0, \\quad x \\in \\mathcal{X},$$ (17)with equality at the support points of $$\\xi^*$$. Here\u00a0 \\begin{align*} {\\theta_2^{*}} &= \\mathop {{\\rm{arg inf}}}\\limits_{{\\theta _2} \\in {\\Theta _2}} \\int_{\\mathcal{X}} I_{ 1,2}(x,f_1^*,f_2,{\\bar\\theta_1,\\theta_2}) \\, \\xi^*({\\rm d}x), \\quad f_1^*(y;x,\\bar\\theta_1) = \\frac{f_2(y;x,\\theta_2) \\exp(-{\\overline\\lambda} y)}{\\int f_2(y;x,\\theta_2) \\exp(-{\\overline\\lambda} y)\\,{\\rm d}y}, \\end{align*}and $$\\overline\\lambda$$ is found from (14). Moreover, there is equality in (17) for all support points of $$\\xi^*$$. Theorem 2 is a direct consequence of the equivalence theorem for Kullback\u2013Leibler-optimal designs from L\u00f3pez-Fidalgo et al. (2007). Part (a) states that $$K_{1}(\\xi,{\\bar\\theta_1})$$ is the Kullback\u2013Leibler criterion for discrimination between $$f_1(y;x,\\bar\\theta_1)$$ and $$f_2^*(y;x,\\theta_2)$$ defined in (11). Part (b) states that $$K_{2}(\\xi,{\\bar\\theta_1})$$ is the Kullback\u2013Leibler criterion for discrimination between $$f_1^*(y;x,\\bar\\theta_1)$$ defined in (13) and $$f_2(y;x,\\theta_2)$$. Following convention in the case where all models are parametric, we call the function on the left-hand side of (16) or (17) the sensitivity function of the design under investigation. Clearly, different design criteria lead to different sensitivity functions for the same design. 
The usefulness of the equivalence theorem is that if the sensitivity function of a design does not satisfy the conditions required in the equivalence theorem, then the design is not optimal under the given criterion. Figure 2 illustrates these sensitivity plots.

Fig. 2. Plots of the sensitivity functions of the following discrimination designs: (a) $$T$$-optimal, (b) Kullback–Leibler-optimal, (c) semiparametric Kullback–Leibler-optimal of type 1 and (d) semiparametric Kullback–Leibler-optimal of type 2, from Table 1.

4. Connections with the $$T$$-optimality criterion

We now show that under homoscedastic symmetrically distributed errors, the semiparametric optimal design for discriminating between the model $$f_1(y;x, \bar\theta_1)$$ and the class $$\mathcal{F}_2$$ coincides with the $$T$$-optimal design proposed by Atkinson & Fedorov (1975a). We first recall the classical set-up for finding an optimal design to discriminate between two models, where we assume that the mean functions in the models are known and the parameters in the null model are fixed at, say, $$\bar\theta_1$$. When errors in both models are normally distributed, a $$T$$-optimal discrimination design $$\xi_T^*$$ maximizes the criterion

$$\inf_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \{\eta_1(x,\bar\theta_1) - \eta_2(x,\theta_2)\}^2\, \xi({\rm d}x)$$ (18)

among all designs on $$\mathcal{X}$$ (Atkinson & Fedorov, 1975a). Throughout this section, we assume that the infimum in (18) is attained at a unique point $$\theta_2^*$$ when $$\xi = \xi^*_T$$.
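When the rival mean $$\eta_2$$ is linear in its parameters, the inner infimum in (18) is a weighted least-squares problem, so both the criterion value and the sensitivity-function check behind an equivalence theorem are easy to compute. The Python sketch below uses a hypothetical pair of models of our own, $$\eta_1(x) = x^2$$ fixed and $$\eta_2(x,\theta_2) = \theta_{2,1} + \theta_{2,2}x$$ on $$[-1,1]$$, not the models of this paper; the candidate design shown satisfies the $$T$$-optimality condition: its sensitivity function is nonpositive and vanishes on the support.

```python
import numpy as np

# Hypothetical models: eta_1(x) = x^2 (fixed null model) and
# eta_2(x, theta) = theta_0 + theta_1 * x (rival class, linear in theta).
eta1 = lambda x: x**2
basis = lambda x: np.stack([np.ones_like(x), x], axis=-1)

def t_criterion(xs, ws):
    """Criterion (18) for a discrete design: the inner infimum over theta_2
    is a weighted least-squares fit of eta_1 by the rival basis."""
    sw = np.sqrt(ws)
    theta2, *_ = np.linalg.lstsq(basis(xs) * sw[:, None], eta1(xs) * sw, rcond=None)
    resid = eta1(xs) - basis(xs) @ theta2
    return float(np.sum(ws * resid**2)), theta2

xs = np.array([-1.0, 0.0, 1.0])      # candidate support points
ws = np.array([0.25, 0.50, 0.25])    # candidate weights
crit, theta2_star = t_criterion(xs, ws)

# Equivalence-theorem-style check (cf. (16)): the sensitivity function
# (eta_1 - eta_2(theta_2*))^2 - crit must be <= 0 on X, with equality
# exactly at the support points of the design.
grid = np.linspace(-1.0, 1.0, 2001)
sensitivity = (eta1(grid) - basis(grid) @ theta2_star) ** 2 - crit
```

For this design the fitted minimizer is $$\theta_2^* = (0.5, 0)$$, the criterion value is 0.25, and the sensitivity function is zero at $$x = -1, 0, 1$$ and negative in between, the same qualitative pattern as the sensitivity plots in Fig. 2.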
Using arguments like those in Wiens (2009), it can be shown that the power of the likelihood ratio test for the hypotheses \u00a0 $$H_0: \\eta(x) = \\eta_2(x,\\theta_2) \\,\\,\\mbox{versus} \\,\\, H_1: \\eta(x) =\\eta_1(x,\\bar\\theta_1)$$ (19) is an increasing function of the quantity in (18). Our next result gives a sufficient condition for the $$T$$-optimal discriminating design to be a semiparametric optimal design in the sense of \u00a7 2. Theorem 3. Suppose that the assumptions of Theorem 1 (i) hold and $$f_1(y;x,\\bar\\theta_1)$$ satisfies\u00a0 \\begin{equation*} f_1(y;x,\\bar\\theta_1) = g\\{y - \\eta_1(x,\\bar\\theta_1)\\}, \\end{equation*}where $$g$$ is a symmetric density function supported in the interval $$[-a,a]$$, i.e., $$f_1$$ has support $$[-a+\\eta_1(x,\\bar\\theta_1), a + \\eta_1(x,\\bar\\theta_1)]$$. The $$T$$-optimal discriminating design maximizing criterion (18) is a semiparametric Kullback\u2013Leibler-optimal discriminating design of type $$1$$.\u2002 A similar result is available for the semiparametric Kullback\u2013Leibler-optimal discriminating designs of type 2. Suppose that $$f_2(y;x,\\theta_2)$$ and $$f_1(y;x,\\bar\\theta_1)$$ are normal distributions $$\\mathcal{N} \\{\\eta_2(x,\\theta_2) , v^2_2(x,\\theta_2)\\}$$ and $$\\mathcal{N} \\{\\eta_1(x,\\bar\\theta_1) , v^2_2(x,\\theta_2)\\}$$, respectively. It can be shown that the power of the likelihood ratio test for hypotheses (19) is an increasing function of \u00a0 \\begin{align} {\\textrm{KL}}_{1,2}(\\xi, \\overline \\theta_1)= \\inf_{\\theta_2 \\in \\Theta_2} \\int_{\\mathcal{X}} \\frac{\\{\\eta_1(x,\\bar\\theta_1) - \\eta_2(x,\\theta_2)\\}^2}{v^2_2(x,\\theta_2)} \\xi({\\rm d}x) \\end{align} (20) where $${\\textrm{KL}}_{1,2}(\\xi, \\overline \\theta_1)$$ is the Kullback\u2013Leibler criterion defined in (2). The next result shows that this design is also a semiparametric Kullback\u2013Leibler-optimal discriminating design of type 2. Theorem 4. 
Suppose that $$f_2(y;x,\theta_2)$$ is a normal density with mean $$\eta_2(x,\theta_2)$$ and variance $$v_2^2(x,\theta_2)$$. The best approximation $$f_1^*(y;x,\bar\theta_1)$$ is a normal density with mean $$\eta_1(x,\bar\theta_1)$$ and variance $$v_2^2(x,\theta_2)$$, and the optimal design maximizing (20) is a semiparametric Kullback–Leibler-optimal discriminating design of type 2 and vice versa.

5. Numerical results

We now illustrate the new techniques for finding semiparametric optimal designs using three examples. From § 2, the first step is to solve equations (8) and (14) efficiently. In the second step, any numerical method that determines Kullback–Leibler-optimal discrimination designs can be adapted to solve the minimax problems obtained from Theorem 1 because the representations (10) and (12) have the same structure as the Kullback–Leibler optimality criteria considered in López-Fidalgo et al. (2007). The second step defines a very challenging problem, and some recent results and algorithms for Kullback–Leibler optimality criteria can be found in Stegmaier et al. (2013), Braess & Dette (2013), Dette et al. (2015) and Dette et al. (2017a). Below we focus on the first step because our aim is to find new semiparametric designs. In the second step, we use an adaptation of the first-order algorithm of Atkinson & Fedorov (1975a), which is not the most efficient algorithm but is very easy to implement. Let $$\delta$$ be a user-selected positive constant.
By Lemma 1 and inequality (9), we solve equation (8) in the following regions: if $$\\eta_1 (x,\\bar\\theta_1) = \\eta_2 (x,\\theta_2)$$, set $$\\lambda = 0$$; if $$\\eta_1 (x,\\bar\\theta_1) < \\eta_2 (x,\\theta_2)$$, choose a solution in the interval $${{\\Lambda}^-=[-1\/\\{y_{x,\\max}-\\eta_2(x,\\theta_2)\\}, -\\delta]}$$; if $$\\eta_1 (x,\\bar\\theta_1) > \\eta_2 (x,\\theta_2)$$, choose a solution in the interval $${{\\Lambda}^+=[\\delta, -1\/\\{y_{x,\\min}-\\eta_2(x,\\theta_2)\\}]}$$. Similarly, the solution of (14) can be obtained as follows. We search for $$\\lambda > 0$$ if $$\\eta_1(x,\\bar\\theta_1) < \\eta_2(x,\\theta_2)$$ so that $$\\lambda$$ shifts the predefined density $$f_2(y;x,\\theta_2)$$ to the left, and search for $$\\lambda < 0$$ if $$\\eta_1(x,\\bar\\theta_1) > \\eta_2(x,\\theta_2)$$. If $$\\delta$$ is chosen to be a small enough positive constant and $$\\beta$$ is a user-selected large positive constant, we can assume that the solution of (14) is in $$[-\\beta,+\\beta]$$. We suggest searching for the numerical solution of equation (14) in the following regions: if $$\\eta_1 (x,\\bar\\theta_1) = \\eta_2 (x,\\theta_2)$$, set $$\\lambda = 0$$; if $$\\eta_1 (x,\\bar\\theta_1) < \\eta_2 (x,\\theta_2)$$, choose a solution in the interval $${\\Lambda}^+ = [+\\delta, +\\beta]$$; if $$\\eta_1 (x,\\bar\\theta_1) > \\eta_2 (x,\\theta_2)$$, choose a solution in the interval $$\\Lambda^- = [-\\beta,-\\delta]$$. We now present two examples, where the $$T$$-optimal and semiparametric Kullback\u2013Leibler-optimal designs are determined numerically and are different. Example 2. Consider the optimal design problem from L\u00f3pez-Fidalgo et al. 
(2007), where they wanted to discriminate between the two models
\begin{align} \eta_1(x,\theta_1) = \theta_{1,1} x + \frac{\theta_{1,2} x}{x + \theta_{1,3}}, \quad \eta_2(x,\theta_2) = \frac{\theta_{2,1} x}{x + \theta_{2,2}}\text{.} \end{align} (21)
The design space for both models is the interval $$[0.1,5]$$, and we assume that the first model has fixed parameters $$\overline{\theta}_1 = (1,1,1)$$. We construct four different types of optimal discrimination design for this problem: a $$T$$-optimal design; a Kullback–Leibler-optimal design for lognormal errors, with fixed variances $$v^2_1(x,\bar\theta_1) = v^2_2(x,\theta_2) = 0.1$$; a semiparametric Kullback–Leibler-optimal discriminating design of type 1 for a mildly truncated lognormal density $$f_1(y;x,\bar\theta_1)$$ with location $$\mu_1(x,\bar\theta_1)$$ and scale $$\sigma^2_1(x,\bar\theta_1)$$; and a semiparametric Kullback–Leibler-optimal discriminating design of type 2 for a mildly truncated lognormal density $$f_2(y;x,\theta_2)$$ with location $$\mu_2(x,\theta_2)$$ and scale $$\sigma^2_2(x,\theta_2)$$, where
\begin{align*} \mu_i(x,\theta) = \log \eta_i(x,\theta) - \frac{1}{2} \sigma^2_i(x,\theta) \quad\text{and}\quad \sigma^2_i(x,\theta) = \log\left\{ 1+v^2_i(x,\theta)/\eta_i^2(x,\theta) \right\} \quad (i=1,2)\text{.} \end{align*}
The ranges for those densities are the intervals from $$Q_1(0.0001,x,\bar\theta_1)$$ to $$Q_1(0.9999,x,\bar\theta_1)$$ and from $$Q_2(0.0001,x,\theta_2)$$ to $$Q_2(0.9999,x,\theta_2)$$, respectively, where $$Q_i(p,x,\theta)$$ is the quantile function of the ordinary lognormal density with mean $$\eta_i(x,\theta)$$ and variance $$v^2_i(x,\theta) = 0.1$$.
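The location–scale conversion above is ordinary lognormal moment matching, and it is easy to check numerically. The function names below are ours, and the round trip uses the untruncated lognormal; the mild truncation employed in the paper perturbs the mean slightly:

```python
import math

def lognormal_location_scale(eta, v2):
    """Location mu and scale sigma^2 of a lognormal with mean eta, variance v2."""
    sigma2 = math.log(1.0 + v2 / eta**2)
    mu = math.log(eta) - 0.5 * sigma2
    return mu, sigma2

def lognormal_mean_var(mu, sigma2):
    """Mean and variance of the (untruncated) lognormal, for the round trip."""
    mean = math.exp(mu + 0.5 * sigma2)
    var = (math.exp(sigma2) - 1.0) * math.exp(2.0 * mu + sigma2)
    return mean, var
```

For instance, converting the target mean $$\eta = 0.7$$ and variance $$v^2 = 0.1$$ and mapping back recovers $$(0.7, 0.1)$$ up to rounding error.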
We note that because of the mild truncation, $$\eta_1(x,\bar\theta_1)$$ and $$\eta_2(x,\theta_2)$$ are not exactly the means of the densities $$f_1(y;x,\bar\theta_1)$$ and $$f_2(y;x,\theta_2)$$, respectively, but are very close to them. Table 1 displays the optimal discrimination designs under the four different criteria, along with the optimal parameter $$\theta_2^*$$ of the second model corresponding to the minimal value with respect to the parameter $$\theta_2$$. All four types of optimal discrimination design are different, with the smallest support point of the Kullback–Leibler-optimal design being noticeably different from those of the other three designs. The semiparametric Kullback–Leibler-optimal discriminating design of type 2 has nearly the same support as the $$T$$-optimal design. Figure 2 shows the sensitivity functions of the four optimal designs and confirms their optimality.

Table 1. Optimal discrimination designs for the two models in (21)

  Design type          Support points $$x$$     Weights $$w$$           $$\theta_2^*$$
  $$T$$-optimal        0.508  2.992  5.000      0.580  0.298  0.122     (22.564, 14.637)
  $$KL$$-optimal       0.218  2.859  5.000      0.629  0.260  0.111     (21.112, 13.436)
  $$SKL_1$$-optimal    0.454  2.961  5.000      0.531  0.344  0.125     (22.045, 14.197)
  $$SKL_2$$-optimal    0.509  2.994  5.000      0.611  0.273  0.116     (22.824, 14.857)

  $$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Table 2 displays the four different types of efficiencies of the $$T$$-, Kullback–Leibler- and semiparametric Kullback–Leibler-optimal discriminating designs. Small changes in the design can have large effects, and the $$T$$- and Kullback–Leibler-optimal discrimination designs are not very robust under a variation of the criteria, where the Kullback–Leibler-optimal discrimination design has slight advantages. On the other hand, the semiparametric Kullback–Leibler-optimal discriminating design of type 1 yields moderate efficiencies, about $$75\%$$, with respect to the $$T$$- and Kullback–Leibler optimality criteria.

Table 2. Efficiencies of optimal discrimination designs for the two models in (21) under various optimality criteria. For example, the value 0.321 in the first row is the efficiency of the Kullback–Leibler-optimal design with respect to the $$T$$-optimality criterion

                        $$T$$-optimal   $$KL$$-optimal   $$SKL_1$$-optimal   $$SKL_2$$-optimal
  $$T$$-criterion       1.000           0.321            0.741               0.830
  $$KL$$-criterion      0.739           1.000            0.796               0.650
  $$SKL_1$$-criterion   0.552           0.544            1.000               0.454
  $$SKL_2$$-criterion   0.876           0.254            0.633               1.000

  $$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Example 3. Consider a similar problem with a function $$\eta_1(x,\theta_1)$$ taken from Wiens (2009). The two models of interest are
\begin{align} \eta_1(x,\theta_1) = \theta_{1,1} \big \{ 1 - \exp(-\theta_{1,2} x) \big\}, \quad \eta_2(x,\theta_2) = \frac{\theta_{2,1} x}{\theta_{2,2}+ x}, \end{align} (22)
where the design space is $$\mathcal{X} = [0.1,5]$$.
Here we fix the parameters of the first model in (22) to $$\overline \theta_1 = (1,1)$$ and determine the $$T$$-optimal design, the Kullback–Leibler-optimal design for lognormal errors, and the semiparametric Kullback–Leibler-optimal discriminating designs of type 1 and type 2 for mildly truncated lognormal errors. The error variances for the Kullback–Leibler-optimal discrimination design are $$v_1^2(x,\bar\theta_1) = v_2^2(x,\theta_2) = 0.02$$; for the semiparametric Kullback–Leibler-optimal discriminating design of type 1 the variance is $$v_1^2(x,\bar\theta_1) = 0.02$$, and for the semiparametric Kullback–Leibler-optimal discriminating design of type 2 the variance is $$v_2^2(x,\theta_2) = 0.02$$. Table 3 displays the various optimal designs, along with the values $$\theta_2^*$$ of the second-model parameters at which the minimum in the criterion is attained. The optimality of the numerically determined $$T$$-optimal, Kullback–Leibler-optimal and semiparametric Kullback–Leibler-optimal discriminating designs of type 1 and type 2 can be verified by plotting the corresponding sensitivity functions. We again observe substantial differences between the optimal discrimination designs with respect to the different criteria. A comparison of the efficiencies of the optimal designs with respect to the different criteria in Table 4 shows a picture similar to that in the first example. In particular, we note that for our two examples, the other optimal discrimination designs are especially sensitive to the Kullback–Leibler optimality criteria; in the first example their Kullback–Leibler efficiencies are at best $$54\%$$, and in the second example their Kullback–Leibler efficiencies are not higher than $$40\%$$. One reason may be that the smallest support point of the Kullback–Leibler-optimal design is noticeably smaller than the minimum support point of each of the other three optimal designs. Table 3.
Optimal discrimination designs for the two models in (22)

  Design type          Support points $$x$$     Weights $$w$$           $$\theta_2^*$$
  $$T$$-optimal        0.308  2.044  5.000      0.316  0.428  0.256     (1.223, 0.948)
  $$KL$$-optimal       0.136  1.902  5.000      0.297  0.457  0.252     (1.244, 1.020)
  $$SKL_1$$-optimal    0.395  2.090  5.000      0.396  0.355  0.249     (1.216, 0.920)
  $$SKL_2$$-optimal    0.308  2.044  5.000      0.289  0.458  0.253     (1.225, 0.956)

  $$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Table 4.
Efficiencies of optimal discrimination designs for the two models in (22) under various optimality criteria

                        $$T$$-optimal   $$KL$$-optimal   $$SKL_1$$-optimal   $$SKL_2$$-optimal
  $$T$$-criterion       1.000           0.266            0.663               0.858
  $$KL$$-criterion      0.786           1.000            0.565               0.879
  $$SKL_1$$-criterion   0.407           0.346            1.000               0.388
  $$SKL_2$$-criterion   0.882           0.396            0.608               1.000

  $$KL$$, Kullback–Leibler; $$SKL_{i}$$, semiparametric Kullback–Leibler of type $$i$$.

Example 4. It is difficult to compare our approach with that of Otsu (2008) because of the computational difficulties described at the end of § 2. In particular, the latter algorithm is often not able to determine the semiparametric Kullback–Leibler-optimal discrimination design. For instance, in the situations considered in Examples 2 and 3 we were unable to obtain convergence. For a comparison of the speed of the methods we therefore have to choose a relatively simple example for which a comparison of both methods is possible.
For this purpose we again considered the models in (22) and constructed the semiparametric Kullback–Leibler-optimal discriminating design of type 1. For the density $$f_1(y;x,\bar\theta_1)$$ we used the density of the random variable $$\eta_1(x,\theta_1) + (\varepsilon - m)$$, where the distribution of $$\varepsilon$$ is lognormal with location and scale parameters $$0$$ and $$1$$, respectively, truncated to the interval between the $$0.1\%$$ and $$90\%$$ quantiles. The constant $$m$$ is chosen such that $$E(\varepsilon - m) = 0$$. The semiparametric Kullback–Leibler-optimal discriminating design of type 1 is supported at $$0.308$$, $$2.044$$ and $$5.000$$ with weights $$0.323$$, $$0.415$$ and $$0.262$$, respectively. It has the same support as the $$T$$-optimal discrimination design, but the weights are different. It took about $$540$$ seconds for the approach proposed in this paper and about $$1230$$ seconds for Otsu's method to find the optimal design. In both cases, we used an adaptation of the Atkinson–Fedorov algorithm in our search. So, even in this simple example, the computational differences are substantial.

6. Conclusions

Much of the present work on optimal design for discriminating between models assumes that the models are fully parametric. Our work allows the alternative models to be nonparametric, where only their mean functions, apart from the parameter values, have to be specified. Our approach is simpler and more reliable than other approaches in the literature for tackling such challenging and more realistic optimal discrimination design problems. We expect potential applications of our work in systems biology, where frequently the underlying model generating the responses is unknown and very complex. In practice, the mean response is approximated in a few ways, and these approximations become the conditional means of nonparametric models that need to be efficiently discriminated to arrive at a plausible model.
The optimal design method presented here will save costs by helping biological researchers to efficiently determine an adequate mean model among several postulated ones. There are also rich opportunities for further methodological research. For example, an important problem is to relax the assumption that the set $$\mathcal{S}_{f_1, \overline \theta, x}$$ defined in (3) is fixed for each $$x$$, so that the method can be applied to a broader class of conditional densities.

Acknowledgement

We are grateful to the reviewers for their constructive comments on the first version of our paper. Dette and Guchenko were supported by the Deutsche Forschungsgemeinschaft. Dette and Wong were partially supported by the National Institute of General Medical Sciences of the U.S. National Institutes of Health. Melas and Guchenko were partially supported by St. Petersburg State University and the Russian Foundation for Basic Research.

Supplementary material

Supplementary material available at Biometrika online contains all proofs.

References

Abd El-Monsef, M. M. E. & Seyam, M. M. (2011). CDT-optimum designs for model discrimination, parameter estimation and estimation of a parametric function. J. Statist. Plan. Infer. 141, 639–43.
Alberton, A. L., Schwaab, M., Labao, M. W. N. & Pinto, J. C. (2011). Experimental design for the joint model discrimination and precise parameter estimation through information measures. Chem. Eng. Sci. 66, 1940–52.
Aletti, G., May, C. & Tommasi, C. (2016). KL-optimum designs: Theoretical properties and practical computation. Statist. Comp. 26, 107–17.
Atkinson, A. C. (2008). DT-optimum designs for model discrimination and parameter estimation. J. Statist. Plan. Infer. 138, 56–64.
Atkinson, A. C. & Fedorov, V. V. (1975a). The designs of experiments for discriminating between two rival models. Biometrika 62, 57–70.
Atkinson, A. C. & Fedorov, V. V. (1975b). Optimal design: Experiments for discriminating between several models. Biometrika 62, 289–303.
Borwein, J. M. & Lewis, A. S. (1991). Duality relationships for entropy-like minimization problems. SIAM J. Contr. Optimiz. 29, 325–38.
Braess, D. & Dette, H. (2013). Optimal discriminating designs for several competing regression models. Ann. Statist. 41, 897–922.
Campos-Barreiro, S. & López-Fidalgo, J. (2016). KL-optimal experimental design for discriminating between two growth models applied to a beef farm. Math. Biosci. Eng. 13, 67–82.
Cavagnaro, D. R., Myung, J. I., Pitt, M. A. & Kujala, J. V. (2010). Adaptive design optimization: A mutual information-based approach to model discrimination in cognitive science. Neural Comp. 22, 887–905.
Chernoff, H. (1953). Locally optimal designs for estimating parameters. Ann. Math. Statist. 24, 586–602.
Dette, H., Guchenko, R. & Melas, V. B. (2017a). Efficient computation of Bayesian optimal discriminating designs. J. Comp. Graph. Statist. 26, 424–33.
Dette, H., Melas, V. B. & Guchenko, R. (2015). Bayesian T-optimal discriminating designs. Ann. Statist. 43, 1959–85.
Dette, H., Melas, V. B. & Shpilev, P. (2012). T-optimal designs for discrimination between two polynomial models. Ann. Statist. 40, 188–205.
Dette, H., Melas, V. B. & Shpilev, P. (2013). Robust T-optimal discriminating designs. Ann. Statist. 41, 1693–715.
Dette, H., Melas, V. B. & Shpilev, P. (2017b). T-optimal discriminating designs for Fourier regression models. Comp. Statist. Data Anal. 113, 196–206.
Dette, H. & Titoff, S. (2009). Optimal discrimination designs. Ann. Statist. 37, 2056–82.
Felsenstein, K. (1992). Optimal Bayesian design for discrimination among rival models. Comp. Statist. Data Anal. 14, 427–36.
Ghosh, S. & Dutta, S. (2013). Robustness of designs for model discrimination. J. Mult. Anal. 115, 193–203.
Jamsen, K. M., Duffull, S. B., Tarning, J., Price, R. N. & Simpson, J. (2013). A robust design for identification of the parasite clearance estimator. Malaria J. 12, 410–6.
Kiefer, J. (1974). General equivalence theory for optimum designs (approximate theory). Ann. Statist. 2, 849–79.
López-Fidalgo, J., Tommasi, C. & Trandafir, P. C. (2007). An optimal experimental design criterion for discriminating between non-normal models. J. R. Statist. Soc. B 69, 231–42.
Myung, J. I. & Pitt, M. A. (2009). Optimal experimental design for model discrimination. Psychol. Rev. 116, 499–518.
Ng, S. H. & Chick, S. E. (2004). Design of follow-up experiments for improving model discrimination and parameter estimation. Naval Res. Logist. 2, 1–11.
Otsu, T. (2008). Optimal experimental design criterion for discriminating semi-parametric models. J. Statist. Plan. Infer. 138, 4141–50.
Pukelsheim, F. (2006). Optimal Design of Experiments. Philadelphia: SIAM.
Silvey, S. (1980). Optimal Design. London: Chapman & Hall.
Stegmaier, J., Skanda, D. & Lebiedz, D. (2013). Robust optimal design of experiments for model discrimination using an interactive software tool. PLOS ONE 8, e55723, https://doi.org/10.1371/journal.pone.0055723.
Tommasi, C. & López-Fidalgo, J. (2010). Bayesian optimum designs for discriminating between models with any distribution. Comp. Statist. Data Anal. 54, 143–50.
Tommasi, C., Martin-Martin, R. & López-Fidalgo, J. (2016). Max-min optimal discriminating designs for several statistical models. Statist. Comp. 26, 1163–72.
Ucinski, D. & Bogacka, B. (2005). T-optimum designs for discrimination between two multiresponse dynamic models. J. R. Statist. Soc. 67, 3–18.
Waterhouse, T. H., Woods, D. C., Eccleston, J. A. & Lewis, S. M. (2008). Design selection criteria for discrimination/estimation for nested models and a binomial response. J. Statist. Plan. Infer. 138, 132–44.
Wiens, D. P. (2009). Robust discrimination designs. J. R. Statist. Soc. 71, 805–29.

© 2017 Biometrika Trust. Biometrika, Oxford University Press. Published: Mar 1, 2018.
# nLab presentations of (infinity,1)-sheaf (infinity,1)-toposes

(Source: https://ncatlab.org/nlab/show/presentations+of+%28infinity%2C1%29-sheaf+%28infinity%2C1%29-toposes)

# Idea

An (∞,1)-sheaf – often called an ∞-stack – is the (∞,1)-categorical analog of a sheaf. Just as a category of sheaves is a topos, an (∞,1)-category of (∞,1)-sheaves is an (∞,1)-topos.

There is good motivation for sheaves, cohomology and higher stacks.

Here we recall the basic definitions and then concentrate on 1-categorical models that present (∞,1)-categories of ∞-stacks.

What we describe is effectively the old theory of the model structure on simplicial presheaves seen in the new light of Higher Topos Theory.

# Plan

We proceed as follows.

# Sheaf toposes

It is helpful to briefly recall the story that we want to tell in the category-theory context, because in the full higher-category-theory context it will be literally the same, with all notions such as adjoint functor, exact functor etc. suitably regarded in the context of (∞,1)-functors.

## Presheaves

Consider a category $C$ that we want to think of as a category of "test spaces". Classical choices would be $C =$ Top, the category of topological spaces, $C =$ Diff, the category of smooth manifolds, or $C = Op(X)$, the category of open subsets of some topological space $X$.

Let Set be the category of sets. We write

$PSh(C) := [C^{op}, Set] := Func(C^{op}, Set)$

for the category of presheaves on $C$.
This is like a category of very general spaces modeled on $C$, as described at motivation for sheaves, cohomology and higher stacks.

## Sheaves

In fact, this is a bit too general for most purposes: the objects of $PSh(C)$ may be very non-local, in that they don't respect the way test objects in $C$ are supposed to glue together. The full subcategory on those presheaves that do respect some kind of gluing of test objects is the category of sheaves.

###### Definition

A category of sheaves on $C$ is a category $Sh(C)$ equipped with a geometric embedding into $PSh(C)$

$Sh(C) \stackrel{\leftarrow}{\to} PSh(C) \,.$

Recall that this means that the inclusion $Sh(C) \hookrightarrow PSh(C)$ is full and faithful and has a left adjoint (sheafification) that preserves finite limits; in other words, $Sh(C)$ is a left exact reflective subcategory of $PSh(C)$.

In view of our models for $\infty$-sheaves it is of importance that this implies an equivalent characterization:

###### Proposition

The category $Sh(C)$ is equivalent to the full subcategory of $S$-local presheaves, where $S$ is the set of local isomorphisms.

Another useful kind of geometric embedding is that of the point:

let ${*}$ be the category with a single morphism (the identity on a single object). Then $PSh({*}) \simeq Sh({*}) \simeq Set$. Geometric embeddings

$x : Sh({*}) \stackrel{\leftarrow}{\to} Sh(C)$

are called points of $Sh(C)$.
We say that $Sh(C)$ has enough points if isomorphisms of sheaves can be tested on points:

$(f : A \stackrel{\simeq}{\to} B)\in Sh(C) \;\; \Leftrightarrow \;\; \forall x : (x^* f : x^* A \stackrel{\simeq}{\to} x^* B) \,.$

This is the situation we shall concentrate on here.

• The topos $Sh(Diff)$ has enough points, one for every $n \in \mathbb{N}$.

• The topos $Sh(Op(X))$ has enough points: one for every ordinary point of $X$.

If $Sh(C)$ has enough points, we may characterize sheaves in yet another way, which is the one that directly suggests the local model structure on simplicial presheaves discussed below:

###### Proposition

Let $S \subset Mor(PSh(C))$ be the set of stalkwise isomorphisms, i.e. those morphisms $f : A \to B$ of presheaves such that for all points $x$ the morphism $x^* f : x^* A \to x^* B$ is an isomorphism (of sets).

If $Sh(C)$ has enough points, then $Sh(C)$ is equivalent to the full subcategory of $S$-local presheaves.

The local model structure on simplicial presheaves that we are going to describe is obtained from this description of sheaves by passing from presheaves of sets to presheaves of simplicial sets and replacing the stalkwise isomorphisms with stalkwise weak equivalences of simplicial sets.

So the model structures we shall encounter are plausible guesses.
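To make the non-locality of general presheaves concrete, here is a toy computation in Python (all names are ours): on the discrete two-point space, the presheaf of $\{0,1\}$-valued functions satisfies the gluing condition for the cover $\{1\}, \{2\}$, while the constant presheaf does not, since compatible sections over the two pieces need not come from a single global value.

```python
from itertools import product

def sections(U, values=(0, 1)):
    """Sections of the presheaf of values-valued functions on U, as dicts."""
    return [dict(zip(sorted(U), vs)) for vs in product(values, repeat=len(U))]

def functions_glue(U, V):
    """Gluing axiom for the cover {U, V} of U | V for the presheaf of functions:
    every compatible pair of sections must come from exactly one global section."""
    for s in sections(U):
        for t in sections(V):
            if all(s[x] == t[x] for x in U & V):      # compatible on the overlap
                matches = [r for r in sections(U | V)
                           if all(r[x] == s[x] for x in U)
                           and all(r[x] == t[x] for x in V)]
                if len(matches) != 1:
                    return False
    return True

def constant_glues(values=(0, 1)):
    """Same axiom for the constant presheaf on the cover {1}, {2} of {1, 2}:
    restrictions are identities and the overlap is empty, so every pair (s, t)
    is compatible -- but a glued section would have to equal both s and t."""
    return all(s == t for s in values for t in values)
```

Here `functions_glue(frozenset({1}), frozenset({2}))` returns `True`, while `constant_glues()` returns `False`; sheafification repairs exactly this kind of failure.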
What is less trivial is that this plausible structure indeed presents the fully general notion of (∞,1)-sheaf/∞-stack.

This fully general notion we introduce now.

# $(\infty,1)$-categories and their presentation

An ordinary locally small category is a category enriched over the category Set of sets.

An (∞,0)-category is an ∞-groupoid, which we think of as modeled by a simplicial set that is a Kan complex.

Recall that there is a notion of nerve and realization

$N : SSet\text{-}Cat \stackrel{\leftarrow}{\to} SSet : |-|$

induced by a cosimplicial object

$\Delta_{SSet\text{-}Cat} : \Delta \to SSet\text{-}Cat,$

where the nerve operation $N$ is called the homotopy coherent nerve of simplicially enriched categories.

###### Definition ($(\infty,1)$-category)

An (∞,1)-category is a category enriched over ∞-groupoids, i.e. an SSet-enriched category all of whose hom-objects are Kan complexes.

Given two $(\infty,1)$-categories $\mathbf{C}$ and $\mathbf{D}$, the (∞,1)-functor $(\infty,1)$-category is

$Func(\mathbf{C}, \mathbf{D}) := |SSet(N(\mathbf{C}), N(\mathbf{D}))| \,.$

This is indeed itself an $(\infty,1)$-category (HTT, prop. 1.2.7.3).

The (∞,1)-category of (∞,1)-categories $(\infty,1)Cat$ is that whose

• objects are $(\infty,1)$-categories;

• for $\mathbf{C}$ and $\mathbf{D}$ two $(\infty,1)$-categories, the ∞-groupoid $(\infty,1)Cat(\mathbf{C}, \mathbf{D})$ is the maximal Kan complex inside the simplicial set of maps between the homotopy coherent nerves

$(\infty,1)Cat(\mathbf{C}, \mathbf{D}) := Core( SSet(N(\mathbf{C}), N(\mathbf{D})) ) \,.$

Examples

• Using the monoidal embedding $const : Set \hookrightarrow \infty Grpd \subset SSet$, every ordinary category is an $(\infty,1)$-category.

• The $(\infty,1)$-category ∞Grpd is the full SSet-enriched subcategory of SSet
on Kan complexes.

###### Definition (homotopy category)

The simplicial connected components functor

$\pi_0 : SSet \to Set$

is strong monoidal and hence induces a functor

$H : (\infty,1)Cat \to Cat \,.$

The image $H(\mathbf{C})$ of an $(\infty,1)$-category $\mathbf{C}$, with $H(\mathbf{C})(x,y) = \pi_0(\mathbf{C}(x,y))$, is the homotopy category of an (∞,1)-category.

Two $(\infty,1)$-categories $\mathbf{C}$ and $\mathbf{D}$ are equivalent if they are isomorphic in $H((\infty,1)Cat)$:

$(f : \mathbf{C} \to \mathbf{D} \;\text{is an equivalence}) \;\; \Leftrightarrow \;\; (H_{(\infty,1)Cat}(f) : \mathbf{C} \to \mathbf{D} \;\text{is an isomorphism}) \,.$

## Presentations

It is often convenient to present $(\infty,1)$-categories by 1-categorical models.

###### Definition ($(\infty,1)$-category presented by a model category)

For $\mathbf{A}$ a combinatorial simplicial model category, the $(\infty,1)$-category presented by it is the full subcategory $\mathbf{A}^\circ \subset \mathbf{A}$ on objects that are both cofibrant and fibrant.

Remark. The axioms of a simplicial model category ensure that the hom-simplicial sets of $\mathbf{A}^\circ$ are indeed Kan complexes (for instance HTT, remark 3.1.8).

###### Proposition (HTT, remark A.3.7.7)

Let $\mathbf{A}$ and $\mathbf{B}$ be combinatorial simplicial model categories.
Then the corresponding $(\infty,1)$-categories $\mathbf{A}^\circ$ and $\mathbf{B}^\circ$ are equivalent precisely if there is a sequence of SSet-enriched Quillen equivalences

$\mathbf{A} \stackrel{\leftarrow}{\to} \stackrel{\to}{\leftarrow} \stackrel{\leftarrow}{\to} \cdots \mathbf{B} \,.$

# $(\infty,1)$-Sheaf $(\infty,1)$-toposes

There is now an obvious definition of $(\infty,1)$-categories of $(\infty,1)$-presheaves and of $(\infty,1)$-sheaves, obtained by interpreting the 1-categorical story in the $(\infty,1)$-categorical context.

## $(\infty,1)$-Presheaves

Now we generalize the above from sheaves to (∞,1)-sheaves, also known as ∞-stacks.

###### Definition ($(\infty,1)$-presheaves)

The $(\infty,1)$-category of (∞,1)-presheaves on $C$ is

$PSh_\infty(C) := [C^{op}, \infty Grpd] = Func( C^{op}, \infty Grpd ) \,.$

###### Proposition (models for $(\infty,1)$-presheaves) (HTT, prop. 4.2.4.4)

The $(\infty,1)$-category presented by the global model structure on simplicial presheaves on $C$ (either the projective structure $SPSh(C)_{proj}$ or the injective structure $SPSh(C)_{inj}$) is equivalent to that of $(\infty,1)$-presheaves on $C$:

$(SPSh(C)_{proj})^{\circ} \simeq (SPSh(C)_{inj})^{\circ} \simeq PSh_\infty(C) \,.$

## $(\infty,1)$-sheaves

There are $(\infty,1)$-category analogs of all the familiar notions from category theory, in particular of adjoint functors, exact functors and reflective subcategories.

Using this we obtain a definition of geometric embeddings of $(\infty,1)$-toposes, i.e. left exact reflective (∞,1)-subcategories, by literally copying the 1-categorical definition.

###### Definition ($(\infty,1)$-sheaves) (HTT, def.
6.1.0.4)\n$Sh_\\infty(C) \\stackrel{\\leftarrow}{\\to} PSh_\\infty(C) \\,.$\n###### Proposition (models for reflective $(\\infty,1)$-subcategories)\n\nLet the combinatorial simplicial model category $\\mathbf{B}$ be a left Bousfield localization of the combinatorial simplicial model category $\\mathbf{A}$ then\n\n$\\mathbf{B}^\\circ \\stackrel{\\leftarrow}{\\to} \\mathbf{A}^\\circ$\n\nis the inclusion of a reflective (\u221e,1)-subcategory.\n\n###### Proof\n\nBy HTT, prop A.3.7.4 every combinatorial simplicial left Bousfield localization is given by a set $S$ of cofibrations such that\n\n\u2022 the fibrant objects of $\\mathbf{B}$ are precisely the fibrant objects in $\\mathbf{A}$ that are $S$-local object;\n\n\u2022 the weak equivalences of $\\mathbf{B}$ are the $S$-local morphisms in $\\mathbf{A}$.\n\nAccordingly $\\mathbf{B}^\\circ$ is the full $\\infty Grpd$-enriched subcategory of $\\mathbf{A}^\\circ$ on $S$-local objects. (see also HTT, prop 6.5.2.14).\n\nBy HTT, prop. 5.5.4.15 this means that $\\mathbf{B}$ is a reflective (\u221e,1)-subcategory of $\\mathbf{A}$.\n\nRemark Notice that this does not yet say that the localization is left exact .\n\nBut this makes at least plausible that the local model structure on simplicial presheaves is a presentation for an (\u221e,1)-category of (\u221e,1)-sheaves.\n\nThat this is indeed the case is\n\n###### Proposition (model for hypercomplete $(\\infty,1)$-sheaves) (HTT, prop. 
6.5.2.14)\n\nThe local model structure on simplicial presheaves $SSh(C)^{l loc}_{proj}$ presents the hypercompleted version of the (\u221e,1)-category of (\u221e,1)-sheaves $Sh^{hc}(C)$ on $C$.\n\n$(SPSh(C)_{proj}^{loc})^\\circ \\simeq Sh^{hc}(C) \\,.$\n\nRemark See the discussion at ?ech cohomology for the role of hypercompletion.\n\n# Applications\n\n## Abelian sheaf cohomology as special case of $\\infty$-stackification\n\nThe nerve operation of the Dold-Kan correspondence\n\n$N : Ch_+ \\to SimpAb \\subset \\infty Grpd$\n\nembeds sheaves with values in non-negatively graded chain complexes of abelian groups into simplicial sheaves as those simplicial sheaves with values in Kan complexes that carry a struict abelian group structure. This way homological algebra and abelian sheaf cohomology are realized as special cases of models for $\\infty$-stacks: a complex of abelian sheaves presents a stably abelian $\\infty$-stack.\n\n###### Proposition\n\nUnder the Dold-Kan correspondence abelian sheaf cohomology identifies with the hom-set of the homotopy category corresponding infinity-stack (infinity,1)-topos.\n\nMore precisely, let\n\n\u2022 the underlying site be the category of open subsets $C = Op(X)$ of a topological space $X$,\n\n\u2022 let $A \\in Sh(X)$ be a sheaf with values in abelian groups on $X$;\n\n\u2022 let $\\mathbf{B}^n A \\in Sh(X,SSet)$ be the image of the complex of sheaves $A[-n]$ concentrated in degree $n$ under the Dold-Kan nerve;\n\n\u2022 write $X \\in Sh(X)$ for the terminal object sheaf in $Sh(X)$ (the sheaf constant on the singleton set).\n\nThen degree $n$ abelian sheaf cohomology of $X$ with coefficients in $A$ is homotopy classes of maps from $X$ to $\\mathbf{B}^n A$:\n\n$H^n(X,A) \\simeq Ho_{SSh(X)}(X, \\mathbf{B}^n A) \\,.$\n###### Proof\n\nThe original proof was given in BrownAHT in terms of the category of fibrant objects structure on locally Kan simplicial sheaves.\n\nThe analogous arguments in terms of the full injective model 
structure were given by Jardine. See section 6 of his lecture notes.\n\nLast revised on January 17, 2011 at 14:03:08. See the history of this page for a list of all contributions to it.","date":"2022-08-14 19:01:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 155, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9243309497833252, \"perplexity\": 757.5198209521128}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882572063.65\/warc\/CC-MAIN-20220814173832-20220814203832-00481.warc.gz\"}"}
| null | null |
A question on sequences

• March 14th 2009, 09:59 PM — twilightstr

$a_n = (3^n + 5^n)^{\frac 1n}$

Is the above sequence convergent? If so, what is the limit?

• March 14th 2009, 10:41 PM — Jhevon

It is convergent.

Hint: $(3^n + 5^n)^{\frac 1n} = \exp\left( \ln (3^n + 5^n)^{\frac 1n} \right) = \exp\left( \frac{\ln (3^n + 5^n)}{n} \right)$

Now take the limit.

• March 14th 2009, 11:10 PM — twilightstr

Is it infinity?

• March 14th 2009, 11:12 PM — Jhevon

No (if it were, it wouldn't be convergent).

Hint 2: you can use L'Hopital's rule to find the limit.

• March 14th 2009, 11:18 PM — o_O

Just as an alternative: if you can use the fact that $\lim_{n \to \infty} a^{\frac{1}{n}} = 1$ for $a > 0$, then notice that

$5^n \;\leq\; 3^n + 5^n \;\leq\; 5^n + 5^n$

$\Rightarrow \left(5^n\right)^{\frac{1}{n}} \;\leq\; \left(3^n + 5^n\right)^{\frac{1}{n}} \;\leq\; \left(5^n + 5^n\right)^{\frac{1}{n}}$

$\Rightarrow 5 \;\leq\; \left(3^n + 5^n\right)^{\frac{1}{n}} \;\leq\; 5\left(2\right)^{\frac{1}{n}}$

and use the squeeze theorem.

• March 14th 2009, 11:23 PM — twilightstr

Jhevon, how would you take the limit?

• March 14th 2009, 11:34 PM — Jhevon

I told you: I would use L'Hopital's rule. o_O gives a nice alternative.

• March 14th 2009, 11:36 PM — twilightstr

Yes, that's what I tried doing in the first place, but I think I did it incorrectly.

• March 15th 2009, 12:23 AM — Jhevon

Recall that, by L'Hopital's, $\lim_{n \to \infty} \frac{\ln (3^n + 5^n)}{n} = \lim_{n \to \infty} \frac{\frac{d}{dn}[\ln (3^n + 5^n)]}{\frac{d}{dn} n}$.

I suppose it is the $\frac{d}{dn}[\ln (3^n + 5^n)]$ that is giving you trouble. What did you get for this?

• March 15th 2009, 10:19 AM — stapel

We'll be glad to look for any errors, but you'll need to show the work you did. Please be complete. Thank you!

• March 15th 2009, 12:50 PM — twilightstr

$= \frac{(\ln 3)3^x + (\ln 5)5^x}{3^x + 5^x}$ as the limit goes to infinity. I really don't know what to do from there.

• March 15th 2009, 12:53 PM — Jhevon

Well, after that, we're good: multiply by $\frac{1/5^x}{1/5^x}$; the limit should seem obvious from there.

• March 15th 2009, 01:05 PM — Krizalid

$\left( 3^{n}+5^{n} \right)^{1/n}=5\left\{ \left( \frac{3}{5} \right)^{n}+1 \right\}^{1/n}\to 5$ since $\left( \frac{3}{5} \right)^{n}+1\to 1$ and $1^{1/n}\to 1$ as $n\to\infty.$
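The squeeze bounds discussed in the thread above can also be checked numerically; here is a small sketch in Python (my own addition, not part of the original thread):

```python
# Numerically illustrate that a_n = (3^n + 5^n)^(1/n) tends to 5,
# consistent with the squeeze bounds 5 <= a_n <= 5 * 2^(1/n).
def a(n):
    return (3 ** n + 5 ** n) ** (1.0 / n)

for n in (1, 5, 50, 200):
    print(n, a(n))
```

The printed values decrease from 8 toward 5, matching both the squeeze argument and Krizalid's factorization.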
# Group in which every element is automorphic to its inverse

BEWARE! This term is nonstandard and is being used locally within the wiki.

This article defines a group property: a property that can be evaluated to true/false for any given group, invariant under isomorphism.

## Definition

A group $G$ is termed a group in which every element is automorphic to its inverse if it satisfies the following equivalent conditions:

1. For every element $g \in G$, there is an automorphism $\sigma$ of $G$ such that $\sigma(g) = g^{-1}$.
2. There exists a group $K$ containing $G$ as a normal subgroup such that every element of $G$ is a real element of $K$: it is conjugate to its inverse in $K$.

## Metaproperties

### Characteristic subgroups

This group property is characteristic subgroup-closed: any characteristic subgroup of a group with the property also has the property.

If every element of $G$ is automorphic to its inverse, then if $H$ is a characteristic subgroup of $G$, every element of $H$ is also automorphic to its inverse.
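As a concrete illustration (my own sketch, not from the wiki page), condition (2) can be verified with $K = G$ for the symmetric group $S_3$: every element of $S_3$ is conjugate to its inverse, so in particular automorphic to it via an inner automorphism.

```python
from itertools import permutations

# Permutations of {0, 1, 2} represented as tuples: p sends i to p[i].
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate_to_inverse(g, G):
    # Is there some h in G with h g h^{-1} = g^{-1}?
    gi = inverse(g)
    return any(compose(compose(h, g), inverse(h)) == gi for h in G)

S3 = list(permutations(range(3)))
print(all(conjugate_to_inverse(g, S3) for g in S3))
```

Transpositions are their own inverses, and each 3-cycle is conjugate to its inverse via a transposition, so the check succeeds for all six elements.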
Q: Symfony and Azure Distribution Bundle - assets in Azure

I am trying to deploy an existing Symfony 2.1 application to Azure. For this I am using the Azure Distribution Bundle, and I am trying to deploy assets to Azure as documented here.
However, I am getting an error when doing windowsazure:package to create the package:
Catchable Fatal Error: Argument 2 passed to WindowsAzure\DistributionBundle\Deployment\Assets\BlobStrategy::__construct() must be an instance of WindowsAzure\Blob\BlobRestProxy, none given, called in C:\IGPR\igpr\app\cache\azure\appAzureDebugProjectContainer.php on line 2361 and defined in C:\IGPR\igpr\vendor\beberlei\azure-distribution-bundle\WindowsAzure\DistributionBundle\Deployment\Assets\BlobStrategy.php line 35
Here is the relevant section of my config.yml:
windows_azure_distribution:
...
services:
blob:
default: UseDevelopmentStorage=true
azureprod: DefaultEndpointsProtocol=http;AccountName=myaccountname;AccountKey=MyVeryLongAccOUntKeY==
assets:
type: blob
connection_name: azureprod
Any ideas? Seems that the Blob proxy cannot be created. I get the same error if I try to use the local development storage.
The bundle is installed via Composer.
A: Looks like this was a bug in the Azure Distribution Bundle, which the maintainer has now fixed.
package com.weathercool.adapter;
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.TextView;
import com.weathercool.R;
import com.weathercool.model.City;
import java.util.ArrayList;
import java.util.List;
/**
* Created by wangzhiguo on 15/8/21.
*/
public class CityAdapter extends BaseAdapter {
private Context mContext;
private LayoutInflater mLayoutInflater;
private List<City> mCityList;
private int mPosition;
public CityAdapter(Context context) {
this.mContext = context;
mLayoutInflater = LayoutInflater.from(context);
mCityList = new ArrayList<City>();
}
/** Replaces the current data with the given list of cities and refreshes the view. */
public void addItem(List<City> list) {
mCityList.clear();
mCityList.addAll(list);
notifyDataSetChanged();
}
/** Marks the item at the given position as selected and refreshes the view. */
public void setMPosition(int position) {
this.mPosition = position;
notifyDataSetChanged();
}
@Override
public int getCount() {
return mCityList.size();
}
@Override
public Object getItem(int i) {
return mCityList.get(i);
}
@Override
public long getItemId(int i) {
return i;
}
@Override
public View getView(int i, View view, ViewGroup viewGroup) {
ViewHolder viewHolder;
if(view == null) {
view = mLayoutInflater.inflate(R.layout.item_area,viewGroup,false);
viewHolder = new ViewHolder();
viewHolder.tvArea = (TextView) view.findViewById(R.id.province_name);
view.setTag(viewHolder);
} else {
viewHolder = (ViewHolder) view.getTag();
}
// Highlight the currently selected city; render the others with default colors.
if (i == mPosition) {
viewHolder.tvArea.setBackgroundColor(mContext.getResources().getColor(R.color.gray_bg_7));
viewHolder.tvArea.setTextColor(mContext.getResources().getColor(R.color.blue5));
} else {
viewHolder.tvArea.setBackgroundColor(mContext.getResources().getColor(R.color.gray_bg_8));
viewHolder.tvArea.setTextColor(mContext.getResources().getColor(R.color.gray));
}
final City city = mCityList.get(i);
viewHolder.tvArea.setText(city.cityName);
return view;
}
static class ViewHolder {
TextView tvArea;
}
}
\section{Introduction}
Recently, scalarized black holes have gained a lot of attention in the literature. In general relativity black holes possess no scalar hair associated with a single real scalar field. They are uniquely determined by their mass, angular momentum and charge if gravity is coupled to a Maxwell field \cite{Chrusciel:2012jk}. However, there are possibilities to bypass the no-hair theorem and allow scalar hair on black holes \cite{Herdeiro:2015waa, Cardoso:2016ryw}. In string theory the scalar field corresponds to a dilaton. Coupling the dilaton to the Lagrangian of the electromagnetic field yields charged dilatonic black hole solutions \cite{Garfinkle:1990qj}, \cite{Kleihaus:2003df}, \cite{Gibbons:1987ps}. In the case of Kaluza-Klein coupling, first a new spatial dimension is added to an existing black hole solution, then a boost is performed. The charged dilatonic black hole solution can now be obtained by a dimensional reduction, see \cite{Maison:1979kx}--\cite{Kunz:2006jd}.
Chodos et al. applied this method \cite{Chodos:1980df} to the Schwarzschild spacetime and obtained the static electrically charged Einstein-Maxwell-dilaton black hole. The corresponding rotating charged solution was found in \cite{Frolov:1987rj,Horne:1992zy}, where the Kerr solution was used as a starting point for the method. Later Rasheed \cite{Rasheed:1995zv} applied two boosts and a rotation to find a rotating, dilatonic black hole in Kaluza-Klein theory which is electrically and magnetically charged.
The dilaton can also be coupled to a curvature invariant, for example a Gau{\ss}-Bonnet term \cite{Kanti:1995vq}--\cite{Kokkotas:2017ymc}. Lately, there has been growing interest in more general coupling functions, since these lead to spontaneously scalarized black holes \cite{Doneva:2017bvd}--\cite{Konoplya:2019goy}.\\
A powerful approach to gain insight into the structure of a spacetime is to study the motion of test particles. Black holes in different theories can be explored through the analysis of the orbits of test particles and light. When deriving the equations of motion, the Hamilton-Jacobi formalism is a very efficient method. For the Kerr spacetime, the separability of the Hamilton-Jacobi equation was shown by Carter \cite{Carter:1968ks}. In the four-dimensional Kerr spacetime, the equations of motion can be solved analytically with the help of elliptic functions. However, in higher dimensions or when the cosmological constant is taken into account, the equations often become more complicated and hyperelliptic functions are needed in order to solve the equations of motion analytically, see e.g. \cite{Kraniotis:2003ig}--\cite{Hackmann:2010zz}.
Most scalarized black hole solutions exist in numerical form. In this article we want to focus on an analytical solution and therefore we study the spacetime of the extremal rotating dyonic black hole in Kaluza-Klein theory found by Rasheed \cite{Rasheed:1995zv}. This solution has some surprising features, which make it interesting to study. Rasheed found that the gyromagnetic and gyroelectric ratios can become arbitrarily large. Furthermore he investigated the properties of the extremal case. The angular velocity of the event horizon can be zero while the black hole still has non-zero ADM angular momentum. Also, in certain configurations angular momentum can be added, while keeping the mass and charges fixed, and the solution will remain extremal. In Einstein-Maxwell theory this would lead to a naked singularity.
We will investigate the motion of massless particles and charged particles in the black hole spacetime of Rasheed. A subclass of spacetimes with electric and dilatonic charge only, the Kerr-Kaluza-Klein black hole, was considered in \cite{Aliev:2013jya}. The authors showed that the Hamilton-Jacobi equation separates completely for massless particles. Additionally they studied the motion of uncharged particles in the equatorial plane. In \cite{Cunha:2018uzc} black holes without $\mathbb{Z}_2$ symmetry are considered, which includes the dyonic black hole found by Rasheed \cite{Rasheed:1995zv}. It was shown that the Hamilton-Jacobi equation is fully separable for massless particles.\\
The present article is structured as follows. First we will introduce the black hole spacetime by Rasheed \cite{Rasheed:1995zv} and review some of its properties. It can be shown that the Hamilton-Jacobi equation for particles moving around an extremal Rasheed black hole separates in three cases: massless particles, charged particles around a non-rotating Rasheed black hole, and charged particles in the equatorial plane of the rotating Rasheed black hole. We will analyze the motion of charged particles in the equatorial plane in detail. Then we will present the analytical solutions of the equations of motion and plot some examples of the orbits.
\section{The rotating dyonic black hole in Kaluza-Klein theory}
In 1995 D. Rasheed \cite{Rasheed:1995zv} presented a rotating dyonic black hole in Kaluza-Klein theory. It can be obtained from the Kerr solution by adding a spatial dimension and then applying two boosts and a rotation. A dimensional reduction leads to an electrically and magnetically charged rotating black hole with a scalar dilaton field. The metric is
\begin{equation}
{\mathrm{d}} s^2_{(4)} = -\frac{f^2}{\sqrt{AB}}\left({\mathrm{d}} t+{\omega^0}_\phi {\mathrm{d}}\phi\right)^2 + \frac{\sqrt{AB}}{\Delta}{\mathrm{d}} r^2 + \sqrt{AB}{\mathrm{d}}\theta^2 + \frac{\Delta\sqrt{AB}}{f^2}\sin^2\theta {\mathrm{d}}\phi^2
\label{eqn:ds4}
\end{equation}
and the electromagnetic vector potential is given by
\begin{equation}
2A_\mu {\mathrm{d}} x^\mu = \frac{C}{B}{\mathrm{d}} t + \left({\omega^5}_\phi + \frac{C}{B}{\omega^0}_\phi\right){\mathrm{d}}\phi \, .
\end{equation}
The dilaton field $\sigma$ can be read off from the five-dimensional metric in \cite{Rasheed:1995zv}, which gives
\begin{equation}
\exp \left( \frac{4}{\sqrt{3}}\sigma \right) = \frac{B}{A}
\end{equation}
so that
\begin{equation}
\sigma = \frac{\sqrt{3}}{4} \ln \left( \frac{B}{A} \right)\, .
\end{equation}
The functions in the metric, the vector potential and the dilaton field are
\begin{align}
A &= \left(r-\Sigma /\sqrt{3}\right)^2 - \frac{2P^2\Sigma}{\Sigma - M\sqrt{3}} + a^2\cos^2\!\theta + \frac{2JPQ\cos\theta}{\left(M+\Sigma/\sqrt{3}\right)^2-Q^2} \, ,
\\
B &= \left(r+\Sigma /\sqrt{3}\right)^2 - \frac{2Q^2\Sigma}{\Sigma + M\sqrt{3}} + a^2\cos^2\!\theta - \frac{2JPQ\cos\theta}{\left(M-\Sigma/\sqrt{3}\right)^2-P^2} \, ,
\\
C &= 2 Q \left(r-\Sigma /\sqrt{3}\right) - \frac{2PJ\cos\theta\left(M+\Sigma /\sqrt{3}\right) }{\left(M-\Sigma /\sqrt{3}\right)^2-P^2} \, ,
\\
{\omega^0}_\phi &= \frac{2J\sin^2\!\theta}{ f^2}\left[r-M + \frac{\left(M^2+\Sigma^2-P^2-Q^2\right)\left(M+\Sigma /\sqrt{3}\right)}{\left(M+\Sigma /\sqrt{3}\right)^2-Q^2}\right] \, ,
\\
{\omega^5}_\phi &= \frac{2P\Delta}{ f^2}\cos\theta - \frac{2QJ\sin^2\!\theta \left[r\left(M - \Sigma/\sqrt{3}\right) + M\Sigma/\sqrt{3} + \Sigma^2-P^2-Q^2\right] }{ f^2\left[\left(M+\Sigma/\sqrt{3}\right)^2-Q^2\right]}
\end{align}
and
\begin{align}
\Delta = r^2 - 2Mr + P^2 + Q^2 - \Sigma^2 + a^2 \, ,
\\
f^2 = r^2 - 2Mr + P^2 + Q^2 - \Sigma^2 + a^2\cos^2\!\theta \, .
\end{align}
The parameter $a$ is related to $J$ via
\begin{equation}
J^2 = a^2\frac{\left[\left(M+\Sigma/\sqrt{3}\right)^2-Q^2\right]
\left[\left(M-\Sigma/\sqrt{3}\right)^2-P^2\right]}{ M^2+\Sigma^2-P^2-Q^2} \,
\end{equation}
and the charges satisfy the equation
\begin{equation}
\frac{Q^2}{\Sigma+M\sqrt{3}} + \frac{P^2}{\Sigma-M\sqrt{3}} = \frac{2\Sigma}{ 3} \, .
\end{equation}
So the rotating dyonic black hole solution depends on four parameters $M$, $J$, $Q$ and $P$, where $M$ is the mass, $J$ is the angular momentum, $Q$ and $P$ are the electric and magnetic charges and $\Sigma$ is the dilaton charge. These parameters are related to the mass $M_K$ of the Kerr solution by
\begin{equation}
M_K^2=M^2+\Sigma^2-P^2-Q^2 \, .
\end{equation}
For some purposes it is useful to express the quantities in terms of the boost parameters $\alpha$ and $\beta$, the Kerr mass $M_K$ and the rotation parameter $a$ by
\begin{align}
M &= \frac{M_K\left(1+\cosh^2\!\alpha\cosh^2\!\beta\right)\cosh\alpha}{2\sqrt{1+\sinh^2\!\alpha\cosh^2\!\beta}} \, ,
\\
\Sigma &= \frac{\sqrt{3}M_K\cosh\alpha\left(1-\cosh^2\!\beta+\sinh^2\!\alpha\cosh^2\!\beta\right)}{2\sqrt{1+\sinh^2\!\alpha\cosh^2\!\beta}} \, ,
\\
Q &= M_K\sinh\alpha\sqrt{1+\sinh^2\!\alpha\cosh^2\!\beta} \, ,
\\
P &= \frac{M_K\sinh\beta\cosh\beta}{\sqrt{1+\sinh^2\!\alpha\cosh^2\!\beta}}\, ,
\\
J &= aM_K\cosh\beta\sqrt{1+\sinh^2\!\alpha\cosh^2\!\beta} \, .
\end{align}
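These relations can be checked numerically. The following script is an illustrative sketch, not part of the original derivation; the parameter values $\alpha=0.7$, $\beta=0.3$, $M_K=1$, $a=0.5$ are arbitrary. It confirms the mass relation, the charge constraint and the expression for $J^2$:

```python
import math

# arbitrary sample values for the boost parameters and the Kerr seed
alpha, beta, MK, a = 0.7, 0.3, 1.0, 0.5

ca, sa = math.cosh(alpha), math.sinh(alpha)
cb, sb = math.cosh(beta), math.sinh(beta)
D = math.sqrt(1.0 + sa**2 * cb**2)

M = MK * ca * (1.0 + ca**2 * cb**2) / (2.0 * D)
Sigma = math.sqrt(3.0) * MK * ca * (1.0 - cb**2 + sa**2 * cb**2) / (2.0 * D)
Q = MK * sa * D
P = MK * sb * cb / D
J = a * MK * cb * D

# M_K^2 = M^2 + Sigma^2 - P^2 - Q^2
assert abs(M**2 + Sigma**2 - P**2 - Q**2 - MK**2) < 1e-12

# constraint on the electric and magnetic charges
lhs = Q**2 / (Sigma + M * math.sqrt(3.0)) + P**2 / (Sigma - M * math.sqrt(3.0))
assert abs(lhs - 2.0 * Sigma / 3.0) < 1e-12

# relation between J and the rotation parameter a
J2 = a**2 * ((M + Sigma / math.sqrt(3.0))**2 - Q**2) \
          * ((M - Sigma / math.sqrt(3.0))**2 - P**2) \
          / (M**2 + Sigma**2 - P**2 - Q**2)
assert abs(J**2 - J2) < 1e-12
```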
\subsection{Extremal solutions}
The two horizons of the rotating dyonic black hole are given by $\Delta = 0$. The horizons are present for
\begin{equation}
M^2 \geq P^2+Q^2+a^2-\Sigma^2 \, .
\end{equation}
This yields $M_K^2 \geq a^2$, which is the same condition as for the original Kerr spacetime to have horizons. It is interesting to study the extremal case with a single degenerate horizon. In Einstein-Maxwell theory the surface of extreme solutions, formed by the scaled global charges $\frac{J}{M^2}$, $\frac{Q}{M}$ and $\frac{P}{M}$, is a sphere. In Kaluza-Klein theory this surface is not smooth but made up of two parts as shown in figure \ref{pic:extremesurface} (see also \cite{Rasheed:1995zv}).
\begin{figure}[h]
\centering
\subfigure[Part of the surface with positive parameters]{
\includegraphics[width=0.4\linewidth]{extremesurface1.png}
}
\subfigure[Complete surface]{
\includegraphics[width=0.4\linewidth]{extremesurface2.png}
}
\caption{Surface of extremal rotating dyonic solutions in Kaluza-Klein theory.}
\label{pic:extremesurface}
\end{figure}
The first surface (blue in figure \ref{pic:extremesurface}) emerges from the condition for a single degenerate horizon $ M^2 = P^2+Q^2+a^2-\Sigma^2$, which leads to $M_K^2=a^2$; therefore boosting the extremal Kerr solution leads to the blue surface.
The second surface (green in figure \ref{pic:extremesurface}) represents the special case $M^2+\Sigma^2=P^2+Q^2$. Then $a=0$ and $M_K=0$ but $\frac{a}{M_K}\leq 1$, so that the angular momentum $J$ may be non-zero. This can be seen by taking the extremal limit $\beta\rightarrow\infty$ of the boosted solutions, thus
\begin{align}
\frac{J}{M^2} &= \frac{4a\sinh^3\!\alpha}{M_K\cosh^6\!\alpha} \, ,\\
\frac{P}{M} &= \frac{2}{\cosh^3\!\alpha} \, ,\\
\frac{Q}{M} &= \frac{2\sinh^3\!\alpha}{\cosh^3\!\alpha} \, .
\end{align}
Then we have
\begin{equation}
\left(\frac{P}{M}\right)^\frac{2}{3} + \left(\frac{Q}{M}\right)^\frac{2}{3} = 2 ^\frac{2}{3} \quad \text{and} \quad J\leq PQ\, .
\end{equation}
So any non-rotating dyonic solution with fixed $M,P,Q$ on this part of the surface can be given angular momentum $J\leq PQ$ and it will still remain extremal. In Einstein-Maxwell theory, adding angular momentum to an extremal black hole would lead to a naked singularity.\\
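The astroid-like condition on the scaled charges, and the saturation of the bound $J\leq PQ$ at $a=M_K$, can be checked numerically; the short sketch below is purely illustrative and the value of $\alpha$ is arbitrary:

```python
import math

alpha = 0.9                                             # arbitrary boost parameter
P_M = 2.0 / math.cosh(alpha)**3                         # P/M
Q_M = 2.0 * math.sinh(alpha)**3 / math.cosh(alpha)**3   # Q/M

# extremality condition (P/M)^(2/3) + (Q/M)^(2/3) = 2^(2/3)
assert abs(P_M**(2.0/3.0) + Q_M**(2.0/3.0) - 2.0**(2.0/3.0)) < 1e-12

# for a = M_K the bound is saturated: J/M^2 = 4 sinh^3(alpha)/cosh^6(alpha) = PQ/M^2
J_M2 = 4.0 * math.sinh(alpha)**3 / math.cosh(alpha)**6
assert abs(J_M2 - P_M * Q_M) < 1e-12
```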
In this article we will concentrate on the special case
\begin{equation}
P=Q=\frac{M}{\sqrt{2}} \quad \text{and} \quad J\leq PQ = \frac{M^2}{2}\, ,
\end{equation}
where the metric is
\begin{equation}
{\mathrm{d}} s^2 = -\frac{\left(r-M\right)^2}{\sqrt{r^4-4J^2\cos^2\!\theta}}\left({\mathrm{d}} t-\frac{2J\sin^2\!\theta}{ r-M}{\mathrm{d}}\phi\right)^2 + \frac{\sqrt{r^4-4J^2\cos^2\!\theta}}{\left(r-M\right)^2}{\mathrm{d}} r^2 + \sqrt{r^4-4J^2\cos^2\!\theta} \ ( {\mathrm{d}}\theta^2 + \sin^2\!\theta \ {\mathrm{d}}\phi^2),
\label{eqn:extremalmetric}
\end{equation}
and the non-zero components of the electromagnetic vector potential are
\begin{align}
A_t &= \frac{Mr-2J\cos\theta}{\sqrt{2}\ (r^2-2J\cos\theta) } \, , \\
A_\phi &= \frac{M}{\sqrt{2}}\cos\theta - \frac{\sqrt{2} \ J\sin^2\!\theta}{r-M} + \frac{\sqrt{2} \ J\sin^2\!\theta (Mr-2J\cos\theta) }{(r-M)(r^2-2J\cos\theta)} \, .
\label{eqn:extremalAtAphi}
\end{align}
The metric possesses a singularity at $r_S=\sqrt{2 \lvert J\cos\theta \rvert}$, which is hidden behind an event horizon at $r_H=M$ if $J< \frac{M^2}{2}$. As for all solutions on the green surface, there is no ergoregion.
In the extremal case the dilaton field is
\begin{equation}
\sigma = \frac{\sqrt{3}}{4} \ln \left( \frac{r^2+2J\cos\theta}{r^2-2J\cos\theta} \right)\, ,
\label{eqn:extr-dfield}
\end{equation}
which means that $\sigma =0$ for $J=0$ and also $\sigma =0$ in the equatorial plane $\theta=\frac{\pi}{2}$.
\subsection{Equations of motion for electrically charged particles in the extremal case}
\label{sec:EQM}
Let us investigate the motion of electrically charged particles around the extremal dyonic black hole described by the equation \eqref{eqn:extremalmetric}. As described in \cite{Maki:1992up, Pris:1995, Rahaman:2003wv} we will take into account the effects of the dilaton field. The Hamiltonian for a charged particle with the electric charge $q$ is
\begin{equation}
\mathcal{H}=\frac{1}{2}\mathrm{e}^{-\beta \sigma} g^{\mu\nu} \left( p_\mu - qA_\mu \right) \left( p_\nu - qA_\nu \right)
\end{equation}
where the parameter $\beta$ describes the coupling to the dilaton field $\sigma$. The mass shell condition is in this case
\begin{equation}
g^{\mu\nu} \left( p_\mu - qA_\mu \right) \left( p_\nu - qA_\nu \right) + \mathrm{e}^{\beta \sigma}m^2 = 0 \, .
\end{equation}
The Hamilton-Jacobi equation is
\begin{equation}
\mathcal{H} + \frac{\partial S}{\partial \lambda} =0
\label{eqn:hjeq}
\end{equation}
with $p_\mu = \frac{\partial S}{\partial x^\mu}$ and an affine parameter $\lambda$ along the trajectory of the test particle. To solve equation \eqref{eqn:hjeq} we use the following ansatz for the action
\begin{equation}
S= \frac{1}{2}\delta\mathrm{e}^{\beta \sigma}\lambda - Et + L\phi + S_\theta(\theta) + S_r(r)
\label{eqn:action}
\end{equation}
where $E$ is the energy and $L$ is the angular momentum of the test particle. The parameter $\delta = m^2$ describes the particle's mass; it is $1$ for massive particles and $0$ for light. With this ansatz the Hamilton-Jacobi equation becomes
\begin{equation}
\mathrm{e}^{2\beta \sigma}\delta + g^{tt} (-E-qA_t)^2 + 2g^{t\phi}(-E-qA_t)(L-qA_\phi) + g^{\phi\phi} (L-qA_\phi)^2 + g^{\theta\theta } \left( \frac{\partial S_\theta(\theta)}{\partial \theta} \right)^2 + g^{rr} \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0
\end{equation}
Inserting the metric \eqref{eqn:extremalmetric}, the vector potential \eqref{eqn:extremalAtAphi} and the dilaton field \eqref{eqn:extr-dfield} yields
\begin{align}
&\delta \left( \frac{r^2+2J\cos\theta}{r^2-2J\cos\theta} \right) ^{\beta\sqrt{3}/2}\sqrt{r^4-4J^2\cos^2\!\theta} + \frac{4 J^2 -r^4}{(r-M)^2} \left( -\frac{q (Mr-2J\cos\theta) }{\sqrt{2}(r^2-2J \cos\theta)}-E\right)^2 \nonumber\\
&+\frac{4J}{r-M}\left( -\frac{q (Mr-2J\cos\theta)}{\sqrt{2}(r^2-2J \cos\theta)}-E\right) \left(L -q \left( \frac{M}{\sqrt{2}} \cos\theta - \frac{\sqrt{2}J\sin^2\!\theta}{r-M} +\frac{\sqrt{2}J\sin^2\!\theta (Mr-2J\cos\theta)}{(r-M)(r^2-2J\cos\theta)} \right)\right) \nonumber\\
&+\frac{1}{\sin^2\!\theta}\left(L -q \left( \frac{M}{\sqrt{2}} \cos\theta - \frac{\sqrt{2}J\sin^2\!\theta}{r-M} +\frac{\sqrt{2}J\sin^2\!\theta (Mr-2J\cos\theta)}{(r-M)(r^2-2J\cos\theta)} \right)\right)^2 \nonumber\\
&+ \left( \frac{\partial S_\theta(\theta)}{\partial \theta} \right)^2 +(r-M)^2 \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0 \, .
\label{eqn:HJD-general}
\end{align}
Now we have to separate the above equation to get (the derivatives of) the unknown functions $S_\theta(\theta)$ and $S_r(r)$. This is possible in three cases:
\begin{enumerate}
\item For photons with $\delta=0$ and $q=0$, equation \eqref{eqn:HJD-general} simplifies to
\begin{equation}
\frac{(4J^2-r^4)E^2}{(r-M)^2} - \frac{4JEL}{r-M} + \frac{L^2}{\sin^2\!\theta} + \left( \frac{\partial S_\theta(\theta)}{\partial \theta} \right)^2 + (r-M)^2 \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0 \, .
\end{equation}
\item For a non-rotating black hole with $J=0$, equation \eqref{eqn:HJD-general} yields
\begin{equation}
\delta r^2 - \frac{r^4}{(r-M)^2}\left(-\frac{Mq}{\sqrt{2}r}-E\right)^2 + \frac{1}{\sin^2\!\theta} \left( L-\frac{Mq}{\sqrt{2}}\cos\theta \right)^2 + \left( \frac{\partial S_\theta(\theta)}{\partial \theta} \right)^2 + (r-M)^2 \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0 \, .
\end{equation}
\item For charged particles in the equatorial plane $\theta = \frac{\pi}{2}$, equation \eqref{eqn:HJD-general} becomes
\begin{equation}
\delta r^2 -\frac{\left(r^4-4J^2\right)\left(qM\sqrt{2}+2Er\right)^2}{4r^2(r-M)^2} - \frac{2J\left(qM\sqrt{2}+2Er\right)\left(Jq\sqrt{2}+Lr\right)}{(r-M)r^2}
+\frac{\left(Jq\sqrt{2}+Lr\right)^2}{r^2} + (r-M)^2 \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0 \, .
\label{eqn:HJD4-equa}
\end{equation}
\end{enumerate}
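The reduction in case 3 can be verified symbolically. The following sympy sketch (an illustration, not taken from the text) substitutes $\theta=\frac{\pi}{2}$ into the non-derivative part of equation \eqref{eqn:HJD-general} (the dilaton prefactor equals one in the equatorial plane) and compares with equation \eqref{eqn:HJD4-equa}:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
th, M, J, q, E, L, delta = sp.symbols('theta M J q E L delta', real=True)
s2 = sp.sqrt(2)
c, s = sp.cos(th), sp.sin(th)

# electromagnetic potential of the extremal solution
A_t = (M*r - 2*J*c) / (s2 * (r**2 - 2*J*c))
A_phi = (M/s2)*c - s2*J*s**2/(r - M) \
        + s2*J*s**2*(M*r - 2*J*c)/((r - M)*(r**2 - 2*J*c))

# non-derivative part of the general Hamilton-Jacobi equation; the dilaton
# factor ((r^2+2Jcos)/(r^2-2Jcos))^(beta*sqrt(3)/2) equals 1 at theta = pi/2
general = (delta*sp.sqrt(r**4 - 4*J**2*c**2)
           + (4*J**2 - r**4)/(r - M)**2 * (-q*A_t - E)**2
           + 4*J/(r - M) * (-q*A_t - E) * (L - q*A_phi)
           + (L - q*A_phi)**2 / s**2)

# equatorial form of the Hamilton-Jacobi equation
equatorial = (delta*r**2
              - (r**4 - 4*J**2)*(q*M*s2 + 2*E*r)**2/(4*r**2*(r - M)**2)
              - 2*J*(q*M*s2 + 2*E*r)*(J*q*s2 + L*r)/((r - M)*r**2)
              + (J*q*s2 + L*r)**2/r**2)

assert sp.simplify(general.subs(th, sp.pi/2) - equatorial) == 0
```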
So far we have described the particle motion in the four-dimensional Rasheed spacetime, which represents a scalarized black hole. Since most scalarized black holes are only known as numerical solutions, it is interesting to gain some insight on the analytical side.
On the other hand, one has to take into account that the four-dimensional Rasheed spacetime is a solution in Kaluza-Klein theory, which is obtained from a five-dimensional spacetime by dimensional reduction. One could argue that in this framework the Hamilton-Jacobi equation for the four-dimensional problem does not correctly describe the particle motion. Instead the equation of motion in the five-dimensional metric has to be considered \cite{Kovacs:1984qx}--\cite{Kerner:2000iz}. Here the dilatonic force yields new terms in the equations of motion.
In the following we will prove that in the above special cases, the particle motion is correctly described by the Hamilton-Jacobi equation for the four-dimensional problem.
In case 1, all new terms entering the equations of motion vanish since $\delta=0$ and $q=0$, compare \cite{Kovacs:1984qx}--\cite{Liu:1997fg}. In case 2, the dilaton field vanishes everywhere, because $J=0$. In case 3, the dilaton field vanishes in the equatorial plane since $\theta=\frac{\pi}{2}$. However, derivatives of the dilaton field enter the equations of motion in the five-dimensional spacetime \cite{Kovacs:1984qx}--\cite{Liu:1997fg}. Therefore it has to be checked whether there are effects due to the dilaton in the equatorial plane.
The four-dimensional black hole spacetime ${\mathrm{d}} s^2_{(4)}$ of Rasheed (see equation \eqref{eqn:ds4}) can be obtained from the five-dimensional metric
\begin{equation}
{\mathrm{d}} s^2_{(5)} = \frac{B}{A} \left( {\mathrm{d}} x^5 + 2A_\mu{\mathrm{d}} x^\mu \right)^2 + \sqrt{\frac{A}{B}} {\mathrm{d}} s^2_{(4)} = \tilde{g}_{\mu\nu} {\mathrm{d}} x^\mu {\mathrm{d}} x^\nu \, .
\end{equation}
The motion of a test particle in this metric is described by the Hamilton-Jacobi equation
\begin{equation}
\frac{1}{2} \tilde{g}^{\mu\nu} \tilde{p}_\mu \tilde{p}_\nu + \frac{\partial S_5}{\partial \lambda} =0
\label{eqn:HJD5}
\end{equation}
with $\tilde{p}_\mu = \frac{\partial S_5}{\partial x^\mu}$. To solve equation \eqref{eqn:HJD5} we propose the ansatz
\begin{equation}
S_5= \frac{1}{2}\delta\lambda - Et + L\phi + K w + S_\theta(\theta) + S_r(r)
\end{equation}
where $K$ is a constant of motion corresponding to the momentum in the direction of the fifth dimension $x^5=w$. With this ansatz equation \eqref{eqn:HJD5} becomes
\begin{equation}
\delta + \tilde{g}^{tt}E^2-2 \tilde{g}^{t\phi}EL+ \tilde{g}^{\phi\phi}L^2 + \tilde{g}^{ww}K^2-2 \tilde{g}^{tw}EK + 2 \tilde{g}^{\phi w}LK+ \tilde{g}^{\theta\theta } \left( \frac{\partial S_\theta(\theta)}{\partial \theta} \right)^2 + \tilde{g}^{rr} \left( \frac{\partial S_r(r)}{\partial r} \right)^2 = 0
\label{eqn:HJD5-2}
\end{equation}
Inserting the metric functions of the five-dimensional metric into equation \eqref{eqn:HJD5-2} results in a long and complicated expression, which we will not display here. Setting then $\theta=\frac{\pi}{2}$ simplifies the equation to
\begin{align}
&\delta r^2 + \frac{(r^4-4 J^2) E^2}{(r-M)^2} + \frac{4JEL}{r-M} - L^2 +(r-M)^2 \left( \frac{\partial S_r(r)}{\partial r} \right)^2 - \frac{(2Mr^5-3M^2r^4 -r^6 + 32J^2M^2 - 32J^2Mr + 8J^2r^2)K^2}{r^2(r-M)^2} \nonumber\\
&+ \frac{ ((4r-8M)J^2+Mr^4) 2\sqrt{2}EK}{r(r-M)^2} + \frac{ 4J(2M-r) \sqrt{2}LK}{r(r-M)} = 0 \, .
\label{eqn:HJD5-equa}
\end{align}
It can be shown that equation \eqref{eqn:HJD5-equa} is the same as the Hamilton-Jacobi equation \eqref{eqn:HJD4-equa} for a charged particle in the equatorial plane of the four-dimensional spacetime if we make the identifications
\begin{equation}
q=2K \quad \text{and} \quad \delta=m^2=1+K^2
\end{equation}
which result in the relation
\begin{equation}
\frac{q}{m} = \frac{2K}{\sqrt{1+K^2}} \ .
\end{equation}
The same relation without the factor 2 was found in \cite{Liu:1997fg}, \cite{Kerner:2000iz}. Here the factor 2 is due to the fact that there is an additional factor 2 in front of the vector field $A_\mu$ in the five-dimensional Rasheed metric, which is not present in the metric in \cite{Liu:1997fg}, \cite{Kerner:2000iz}. Therefore in the equatorial plane of the extreme Rasheed metric, the Hamilton-Jacobi equation from the four-dimensional particle motion agrees with the Hamilton-Jacobi equation from the five-dimensional problem and correctly describes the motion of test particles.
\\
In this article we are interested in the motion of charged particles around an extremal rotating dyonic black hole in Kaluza-Klein theory; thus, we will consider the third case and set $\theta = \frac{\pi}{2}$ from now on. The equations of motion in the equatorial plane can be derived from the action $S$ \eqref{eqn:action}
\begin{align}
\left( \frac{{\mathrm{d}} r}{{\mathrm{d}}\gamma} \right)^2 &= R(r) = \sum_{i=0}^6 a_i r^i \, , \label{eqn:r-equation}\\
\left( \frac{{\mathrm{d}}\phi}{{\mathrm{d}}\gamma}\right) &= Lr + \frac{1}{r-1}\left[ Jr\left( \sqrt{2}q-2E \right) - 2\sqrt{2}Jq \right] \, , \label{eqn:phi-equation}\\
\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) &= \frac{1}{(r-1)^2}\left[ 2JLr(r-1) + Er(r^4-4J^2) + \sqrt{2}q \left( \frac{1}{2} r^4 + 2J^2(r-2) \right) \right] \, . \label{eqn:t-equation}
\end{align}
To simplify the equations of motion we used dimensionless quantities equivalent to setting $M=1$
\begin{equation}
r\rightarrow Mr \, , \quad t\rightarrow Mt \, , \quad \lambda\rightarrow M\lambda \, , \quad J\rightarrow M^2J \, , \quad L\rightarrow ML
\end{equation}
and we also applied the Mino time \cite{Mino:2003yg} $\gamma$ with ${\mathrm{d}}\lambda = r^3 {\mathrm{d}} \gamma$. The function $R(r)$ in equation \eqref{eqn:r-equation} is a polynomial of order $6$ with the coefficients
\begin{align}
a_6 &= E^2-\delta\, ,\\
a_5 &= \sqrt{2}qE+2\delta\, ,\\
a_4 &= \frac{1}{2} q^2-\delta-L^2\, ,\\
a_3 &= 2L\left(2EJ-\sqrt{2}qJ+L\right)\, ,\\
a_2 &= -J^2\left(2E-\sqrt{2}q\right)^2-2LJ\left(2E-3\sqrt{2}q\right)-L^2\, ,\\
a_1 &= -4Jq\left(2\sqrt{2}EJ+\sqrt{2}L-2qJ\right)\, ,\\
a_0 &= -8q^2J^2\, .
\end{align}
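The coefficients can be reproduced by solving equation \eqref{eqn:HJD4-equa} for $(r-M)^2\left(\partial S_r/\partial r\right)^2$ and converting to Mino time; the constant term then equals $R(r_S)=-8J^2q^2$. A sympy sketch (illustrative, not part of the text):

```python
import sympy as sp

r, E, L, J, q, delta = sp.symbols('r E L J q delta')
s2 = sp.sqrt(2)

# (dr/dgamma)^2 = r^2 (r-1)^2 * [(r-1)^2 (dS_r/dr)^2], with the bracket read
# off from the equatorial Hamilton-Jacobi equation (M = 1, d lambda = r^3 d gamma)
R = sp.expand(-delta*r**4*(r - 1)**2
              + (r**4 - 4*J**2)*(s2*q + 2*E*r)**2/4
              + 2*J*(r - 1)*(s2*q + 2*E*r)*(s2*q*J + L*r)
              - (r - 1)**2*(s2*q*J + L*r)**2)

a = sp.Poly(R, r).all_coeffs()[::-1]   # a[i] = coefficient of r^i

assert sp.expand(a[6] - (E**2 - delta)) == 0
assert sp.expand(a[5] - (s2*q*E + 2*delta)) == 0
assert sp.expand(a[4] - (q**2/2 - delta - L**2)) == 0
assert sp.expand(a[3] - 2*L*(2*E*J - s2*q*J + L)) == 0
assert sp.expand(a[2] + J**2*(2*E - s2*q)**2 + 2*L*J*(2*E - 3*s2*q) + L**2) == 0
assert sp.expand(a[1] + 4*J*q*(2*s2*E*J + s2*L - 2*q*J)) == 0
assert sp.expand(a[0] + 8*q**2*J**2) == 0
```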
In general the equations of motion are of hyperelliptic type. However, in the special case of uncharged particles $q=0$ the equations of motion reduce to elliptic type (note that ${\mathrm{d}}\lambda = r {\mathrm{d}} \gamma$ in this case)
\begin{align}
\left( \frac{{\mathrm{d}} r}{{\mathrm{d}}\gamma} \right)^2 &= R(r) = \sum_{i=0}^4 a_i r^i \, , \label{eqn:r-equation2}\\
\left( \frac{{\mathrm{d}}\phi}{{\mathrm{d}}\gamma}\right) &= L - \frac{2JE }{r-1} \, , \label{eqn:phi-equation2}\\
\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) &= \frac{2JL(r-1) + E(r^4-4J^2)}{(r-1)^2} \, . \label{eqn:t-equation2}
\end{align}
where
\begin{align}
a_4 &= E^2-\delta\, ,\\
a_3 &= 2\delta\, ,\\
a_2 &= -\delta-L^2\, ,\\
a_1 &= 2L(2EJ+L)\, ,\\
a_0 &= -\left(2EJ+L\right)^2\, .
\end{align}
Both in the hyperelliptic and the elliptic case the equations of motion can be solved analytically as shown in section \ref{sec:solutions}.\\
\section{Classification of the orbits}
In this section we will analyze the motion of charged particles in the spacetime of the extremal black hole found by Rasheed in Kaluza-Klein theory.
\subsection{The azimuthal motion}
\label{sec:azimuth}
The $\phi$-equation \eqref{eqn:phi-equation} determines the azimuthal motion. ${\mathrm{d}}\phi / {\mathrm{d}}\gamma$ vanishes if
\begin{equation}
E_{\rm turn} = \frac{Jq\sqrt{2}(r-2)+Lr(r-1)}{2Jr}
\label{eqn:eturn}
\end{equation}
or
\begin{equation}
r^{\rm turn}_{1,2}=\frac{1}{2L} \left[ L-\sqrt{2}qJ+2JE \pm \sqrt{ \left( L-\sqrt{2}qJ+2JE \right)^2 + 8\sqrt{2}qJL } \right] \, .
\end{equation}
At these radii the test particles change their direction, which leads to interesting orbits; see section \ref{sec:orbits}. A similar effect is usually observed at the ergoregion of a black hole. However, in this spacetime an ergoregion does not exist. The radii at which the angular direction changes were called \emph{turnaround boundaries} in \cite{Diemer:2013fza}.
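Since the turnaround radii are the roots of the quadratic $Lr^2-(L-\sqrt{2}qJ+2EJ)r-2\sqrt{2}qJ=0$, obtained by multiplying the $\phi$-equation by $(r-1)$, they are easy to check numerically; the parameter values in this sketch are arbitrary:

```python
import math

E, L, J, q = 1.0, 2.0, 0.4, 0.5     # arbitrary sample parameters
s2 = math.sqrt(2.0)

def dphi(r):
    """Right-hand side of the phi-equation (M = 1)."""
    return L*r + (J*r*(s2*q - 2.0*E) - 2.0*s2*J*q) / (r - 1.0)

b = L - s2*q*J + 2.0*E*J
disc = b*b + 8.0*s2*q*J*L
r_turn = [(b + math.sqrt(disc)) / (2.0*L), (b - math.sqrt(disc)) / (2.0*L)]

for rt in r_turn:
    assert abs(dphi(rt)) < 1e-9     # the angular velocity vanishes there
```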
\subsection{The radial motion}
The radial equation of motion determines the type of the orbits, because its zeros are the turning points of the orbit. Particle motion is allowed for $R(r)\geq 0$ only. Here we are interested in the real positive zeros. To study the possible orbits, parametric diagrams and effective potentials can be constructed from the polynomial $R(r)$. First we will give a list of the possible orbit types:
\begin{enumerate}
\item Bound orbits (BO) with $r\in [r_1, r_2]$ exist for $r_{1,2} > r_H$ and also for $r_{1,2} < r_H$ hidden behind the event horizon.
\item Many-world bound orbits (MBO) with $r\in [r_1, r_2]$ and $r_1 < r_H < r_2$, where the particle crosses the horizon several times. Each time the horizon is crossed twice the particle enters another universe.
\item Escape orbits (EO) with $r\in [r_1, \infty)$ and $r_1 > r_H$, where the test particle (or light) escapes the gravity of the black hole.
\item Two-world escape orbits (TWEO) with $r\in [r_1, \infty)$ and $r_1 < r_H$, where the particle crosses the horizon twice and enters another universe.
\end{enumerate}
The number of zeros of $R(r)$ for a set of parameters is closely related to the possible orbit types. If double zeros occur, i.e. $R(r)=0$ and $\frac{{\mathrm{d}} R}{{\mathrm{d}} r}=0$, the number of zeros changes. We plot these two conditions in an $E$-$L$ parameter plot, which reveals nine regions with different numbers of real positive zeros. An example of such a parameter plot is shown in figure \ref{pic:parameterplot-del1}.
There is one real positive zero in region I, two zeros in region II, three zeros in the regions III and IV, four zeros in the regions V and VI, five zeros in the regions VII and VIII, and six zeros in region IX. Two regions may have the same number of positive real zeros; nevertheless, the types of possible orbits differ between them.
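The zero structure behind such a parameter plot can be explored numerically. In the sketch below only $\delta$, $J$ and $q$ follow the figure; the sample values of $E$ and $L$ are made up for illustration:

```python
import numpy as np

def R_coeffs(E, L, J, q, delta=1.0):
    """Coefficients [a6, ..., a0] of the radial polynomial R(r), M = 1."""
    s2 = np.sqrt(2.0)
    return [E**2 - delta,
            s2*q*E + 2.0*delta,
            0.5*q**2 - delta - L**2,
            2.0*L*(2.0*E*J - s2*q*J + L),
            -J**2*(2.0*E - s2*q)**2 - 2.0*L*J*(2.0*E - 3.0*s2*q) - L**2,
            -4.0*J*q*(2.0*s2*E*J + s2*L - 2.0*q*J),
            -8.0*q**2*J**2]          # a0 = R(0) = R(r_S)

def turning_points(E, L, J, q, delta=1.0, tol=1e-9):
    """Sorted positive real zeros of R; zeros below r_H = 1 lie behind the horizon."""
    roots = np.roots(R_coeffs(E, L, J, q, delta))
    return sorted(z.real for z in roots if abs(z.imag) < tol and z.real > tol)

# figure parameters delta = 1, J = 0.4, q = 0.8; E and L are arbitrary samples
zeros = turning_points(E=0.95, L=3.0, J=0.4, q=0.8)
```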
\begin{figure}[h!]
\includegraphics[width=0.4\linewidth]{parameterplot-del1.png}
\caption{$E$-$L$ parameter plot for charged particles with $\delta=1, J=0.4, q=0.8$. The blue curves separate regions with a different number of zeros of $R$, and therefore different types of orbits. The roman numerals refer to these orbit types and are explained in table \ref{tab:orbits-del1}.}
\label{pic:parameterplot-del1}
\end{figure}
The orbit types in each region can be determined with the help of effective potentials. We define an effective potential consisting of two parts $V^\pm$ by
\begin{equation}
\left( \frac{{\mathrm{d}} r}{{\mathrm{d}} \gamma} \right)^2 = R(r)= \left(r^6-4J^2r^2 \right) \left(E-V^+ \right) \left(E-V^- \right)
\end{equation}
and therefore
\begin{equation}
V^\pm = -\frac{q}{\sqrt{2}r} + \frac{r-1}{r \left( 4J^2-r^4 \right)}\left[ 2J\left(rL+\sqrt{2}qJ\right) \pm r^2\sqrt{ \left(rL+\sqrt{2}qJ\right)^2 -\delta\left( 4J^2-r^4 \right) } \right] \, .
\label{eqn:potential}
\end{equation}
In the case of uncharged photons $\delta=q=0$ the effective potentials simplify to
\begin{equation}
V^\pm = \frac{L(r-1)}{2J\mp r^2}\, .
\end{equation}
At the horizon $r_H=1$ the two effective potentials meet at $E=-\frac{q}{\sqrt{2}}$. Here $R(r=r_H, E=-\frac{q}{\sqrt{2}})=0$ and a circular orbit exists directly on the horizon. Furthermore, the two potentials meet at a point behind the horizon, which can be calculated by
\begin{equation}
\left(rL+\sqrt{2}qJ\right)^2 = \delta\left( 4J^2-r^4 \right) \, .
\end{equation}
For $r\rightarrow 0$ both potentials diverge. Additionally, for $r\rightarrow \sqrt{2|J|}$ the potential $V_+$ diverges if $JL<0$ and $V_-$ diverges if $JL>0$. However, the latter divergence has no effect on the orbits, since the equations of motion remain smooth, and therefore we will not show this divergence in the plots of the effective potentials. For $r\rightarrow \infty$ the potentials $V^\pm$ approach $\mp \sqrt{\delta}$ asymptotically. We find the following symmetries
\begin{align}
V^\pm (-J, -L) &= V^\pm (J, L) \, , \nonumber \\
V^\pm (-q, -L) &= -V^\mp (q, L) \, ,\nonumber \\
V^\pm (-q, -J) &= -V^\mp (q, J) \, .
\label{eqn:symmetry}
\end{align}
In the Schwarzschild spacetime the positive root of the effective potential is always $V_+\geq0$ and the negative root $V_-\leq0$. There is no classical interpretation for particles with negative energies; however, in quantum field theory they can be associated with antiparticles \cite{Deruelle:1974zy}. Note that in contrast to the Schwarzschild spacetime, here the positive root $V_+$ can be negative, which can allow particles with negative energies. In \cite{Christodoulou:1972kt, Denardo:1973} the region for particles with negative energies was described as a ``generalized ergosphere'', where energy may be extracted. Also the negative root $V_-$ can be positive in this spacetime. To find a bound which separates particles and antiparticles, one can consider equation \eqref{eqn:t-equation} which describes the coordinate time $t$. Time should run forward for particles and backwards for antiparticles. Solving the $t$-equation for $E$ will give a lower energy bound for particles, which can be negative in the ``generalized ergosphere''.
Some examples of the potentials are depicted in figure \ref{pic:potential}. We also plot the energy $E_{\rm turn}$ (equation \eqref{eqn:eturn}) at which the test particle changes its angular direction.\\
\begin{figure}[hp]
\subfigure[~$\delta=1, q=0.8, J=0.4, L=-3.75$]{
\includegraphics[width=0.4\linewidth]{potential-1a.png}
}
\subfigure[~Closeup of figure (a)]{
\includegraphics[width=0.4\linewidth]{potential-1b.png}
}
\subfigure[~$\delta=1, q=0.95, J=0.3, L=-4.4$]{
\includegraphics[width=0.4\linewidth]{potential-2a.png}
}
\subfigure[~Closeup of figure (b)]{
\includegraphics[width=0.4\linewidth]{potential-2b.png}
}
\subfigure[~$\delta=1, q=0.5, J=0.4, L=-27$]{
\includegraphics[width=0.4\linewidth]{potential-3a.png}
}
\subfigure[~Closeup of figure (e)]{
\includegraphics[width=0.4\linewidth]{potential-3b.png}
}
\caption{Plot of the effective potentials $V_+$ (light blue curves) and $V_-$ (dark blue curves) along with some example energies (green lines) for different orbits. The green circles represent the turning points. The roman numerals refer to the orbit types and are explained in table \ref{tab:orbits-del1}. Particle motion is only possible in the white area; in the grey area motion is forbidden since $R<0$. The red dotted curves show the energy $E_{\rm turn}$ (equation \eqref{eqn:eturn}) at which a turnaround boundary occurs. The vertical black dashed lines indicate the position of the horizon.}
\label{pic:potential}
\end{figure}
Taking all the information from the parameterplots and the effective potentials into account, we determine the possible orbits for charged particles in the spacetime of an extremal black hole in Kaluza-Klein theory. An overview of the regions and the corresponding orbit types can be found in table \ref{tab:orbits-del1} and the following list.
\begin{enumerate}
\item Region I: The polynomial $R$ has one zero $r_1 < r_H$ and the corresponding orbit is a TWEO.
\item Region II: $R$ has two zeros $r_1 < r_H$ and $r_2 > r_H$. The corresponding orbit is a MBO.
\item Region III: $R$ has three zeros $r_1 < r_H$ and $r_2 ,r_3 > r_H$. A MBO and a EO exist.
\item Region IV: $R$ has three zeros $r_1, r_2 < r_H$ and $r_3 > r_H$. A BO hidden behind the horizon and a TWEO exist.
\item Region V: $R$ has four zeros $r_1 < r_H$ and $r_2 , r_3 , r_4 > r_H $. Here a MBO and a BO exist.
\item Region VI: $R$ has four zeros $r_1 , r_2 , r_3 < r_H$ and $r_4 > r_H$. A BO hidden behind the horizon and a MBO exist.
\item Region VII: $R$ has five zeros $r_1 , r_2 , r_3 < r_H$ and $r_4, r_5 > r_H$. A BO hidden behind the horizon, a MBO and a EO exist.
\item Region VIII: $R$ has five zeros $r_1 , r_2 , r_3, r_4, r_5 < r_H$. Two BOs hidden behind the horizon and a TWEO exist.
\item Region IX: $R$ has six zeros $r_1 , r_2 , r_3 < r_H$ and $r_4, r_5, r_6 > r_H$. A BO hidden behind the horizon, a MBO and a second BO outside the black hole exist.
\end{enumerate}
In the special case of massless particles with $\delta=q=0$, only the regions I, II, III and IV exist. For massless particles in region II, $R$ possesses a double zero at $r_H=1$ and the energy is always $E=0$; here light moves on a circular orbit directly on the horizon.\\
\begin{table}[h!]
\includegraphics[width=0.6\linewidth]{table1.png}
\caption{Types of orbits for charged particles in the spacetime of an extremal black hole in Kaluza-Klein theory. The thick lines show the range of the orbits and the thick dots mark the turning points. The single vertical line corresponds to the singularity and the vertical double lines indicate the position of the horizon.}
\label{tab:orbits-del1}
\end{table}
Let us now investigate under which conditions an orbit terminates at the singularity $r_S$. At the singularity, the polynomial $R$ has the value
\begin{equation}
R(r_S) = -8 J^2 q^2\, .
\end{equation}
Therefore in the case $J\neq 0$ and $q\neq0$ particles can never reach the singularity. If $J=0$ or $q=0$ we have $R(r_S)=0$, but then $r_S=0$ is a local maximum of $R$, so that test particles can never reach that point.
\begin{enumerate}
\item If $J=0$ then $R'(r=0)=0$ and $R''(r=0)=-2L^2$.
\item If $q=0$ then $R'(r=0)=0$ and $R''(r=0)=-2(2EJ+L)^2$.
\end{enumerate}
If additionally $L=0$ in the first case or $J=L=0$ in the second case, then $r_S=0$ is an inflection point. The effective potentials show that the singularity is guarded by a potential barrier which makes it impossible to find orbits that reach $r_S=0$. Only in the special case $J=L=q=\delta=0$ can particles reach the singularity; in fact, here all orbits end in the singularity.\\
\subsection{Static orbits}
In \cite{Collodel:2017end} static orbits in axisymmetric rotating spacetimes were discovered. Under certain conditions there is a ring of points in the equatorial plane where particles remain at rest. We will show that these orbits also exist in the Rasheed spacetime.
A particle is at rest at a certain point if
\begin{align}
\frac{{\mathrm{d}} r}{{\mathrm{d}} \gamma}&=0 \, , \nonumber\\
\frac{{\mathrm{d}} \phi}{{\mathrm{d}} \gamma}&=0 \, .
\end{align}
From this we can calculate the radius and the energy of the particle at rest
\begin{align}
r_{\rm rest} &= \frac{J}{L}\left(-\sqrt{2}q\pm 2\sqrt{\delta}\right)\, ,\nonumber\\
E_{\rm rest} &= \frac{ \pm \sqrt{\delta}\left(2J\sqrt{2}q + 2L\right) + L\sqrt{2}q - 4J\delta}{2J\left(\sqrt{2}q\pm 2\sqrt{\delta}\right)} \, .
\end{align}
At $r_{\rm rest}$ the effective potentials (equation \eqref{eqn:potential}) intersect with the turnaround boundary (equation \eqref{eqn:eturn}) and the particle is at rest at the pericenter or apocenter of its orbit. If the turnaround boundary intersects with the local minimum of one of the effective potentials, i.e. if
\begin{align}
\frac{{\mathrm{d}} r}{{\mathrm{d}} \gamma}&=0 \, , \nonumber\\
\frac{{\mathrm{d}}^2 r}{{\mathrm{d}} \gamma^2}&=0 \, , \nonumber\\
\frac{{\mathrm{d}} \phi}{{\mathrm{d}} \gamma}&=0 \, ,
\end{align}
the particle remains at rest at all times; this is called the \emph{static ring}. We set $\delta=1$, because then there will be a local minimum in one of the effective potentials for $r>r_H$. The above equations then yield
\begin{align}
q_{\rm st}&=\mp\sqrt{2} \, , \nonumber \\
E_{\rm st}&=\pm 1 \, , \nonumber \\
r_{\rm st}&=\pm 4\frac{J}{L}\, .
\end{align}
Additionally the particle remains at rest at the points where the two effective potentials $V^\pm$ meet, including the horizon $r_H=1$.
Figure \ref{pic:static_pot} shows some examples of static orbits. There the effective potentials and the turnaround boundary are depicted, as well as the position ($r$ and $E$) of a particle at rest. In figures \ref{pic:static_pot}(a) and \ref{pic:static_pot}(b) the particle is at rest at the pericenter or apocenter, respectively. Figure \ref{pic:static_pot}(c) shows the static ring; here the particle remains at rest at all times.
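The static ring can be checked numerically. In this sketch $J$ and $L$ are arbitrary (chosen so that $r_{\rm st}>r_H$); with $q=-\sqrt{2}$, $E=1$ and $r=4J/L$ the radial velocity, its derivative and the angular velocity all vanish:

```python
import math

s2 = math.sqrt(2.0)
J, L = 0.4, 1.5                  # arbitrary; gives r_st = 4J/L > r_H = 1
q, E, delta = -s2, 1.0, 1.0      # static-ring values q_st = -sqrt(2), E_st = 1
r0 = 4.0 * J / L

# coefficients [a6, ..., a0] of R(r)
a = [E**2 - delta,
     s2*q*E + 2.0*delta,
     0.5*q**2 - delta - L**2,
     2.0*L*(2.0*E*J - s2*q*J + L),
     -J**2*(2.0*E - s2*q)**2 - 2.0*L*J*(2.0*E - 3.0*s2*q) - L**2,
     -4.0*J*q*(2.0*s2*E*J + s2*L - 2.0*q*J),
     -8.0*q**2*J**2]

R = sum(ai * r0**(6 - i) for i, ai in enumerate(a))
dR = sum((6 - i) * ai * r0**(5 - i) for i, ai in enumerate(a[:-1]))
dphi = L*r0 + (J*r0*(s2*q - 2.0*E) - 2.0*s2*J*q) / (r0 - 1.0)

assert abs(R) < 1e-12 and abs(dR) < 1e-12 and abs(dphi) < 1e-12
```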
\begin{figure}[h]
\subfigure[~$\delta=1, q=-0.5, J=0.4, L=-0.75$]{
\includegraphics[width=0.31\linewidth]{staticorbit1.png}
}
\subfigure[~$\delta=1, q=-0.5, J=0.4, L=0.25$]{
\includegraphics[width=0.31\linewidth]{staticorbit2.png}
}
\subfigure[~$\delta=1, q=-\sqrt{2}, J=0.49, L=1.5$]{
\includegraphics[width=0.31\linewidth]{staticorbit3.png}
}
\caption{Plots of the effective potentials $V_+$ (light blue curves), $V_-$ (dark blue curves) and the energy $E_{\rm turn}$ (red dotted curve) of the turnaround boundary. In the grey area motion is forbidden since $R<0$. At the green squares $V^\pm$ and $E_{\rm turn}$ intersect, so that $\frac{{\mathrm{d}} r}{{\mathrm{d}} \gamma}=\frac{{\mathrm{d}} \phi}{{\mathrm{d}} \gamma}=0$ and the particle is at rest. In subfigures (a) and (b) the particle is at rest at the pericenter or apocenter, respectively. Subfigure (c) shows the static ring; here the particle remains at rest at all times.}
\label{pic:static_pot}
\end{figure}
\subsection{Photon sphere}
The photon sphere is a radius around the black hole, where light moves on unstable circular orbits (see e.g. \cite{Claudel:2000yi}). It marks the boundary between light rays escaping the black hole and light rays falling beyond the event horizon. Via a coordinate transformation \cite{Grenzebach:2014fha}-\cite{Grenzebach:Springer} the shadow of a black hole can be obtained from the photon sphere.
We consider the $r$-equation for $\delta=0$ and $q=0$, equation \eqref{eqn:r-equation2}, and apply the conditions for unstable circular orbits
\begin{align}
R&=0 \, , \nonumber\\
\frac{{\mathrm{d}} R}{{\mathrm{d}} r}&=0 \, .
\end{align}
Solving the above equations yields
\begin{align}
\frac{L}{E}&=\pm 2 r_{\rm ph} \, ,\\
\frac{J}{E}&=\pm \left(\frac{1}{2} r_{\rm ph}^2-r_{\rm ph}\right) \, .
\end{align}
Then there are four solutions for the radius $r_{\rm ph}$ of the photon sphere which are valid for different ranges of the angular momentum of the black hole $J$ and the angular momentum of the photon $L$. There is always a photon sphere outside the event horizon $r_{\rm ph}\geq r_H$ and for certain ranges of $J$ a second $r_{\rm ph}$ exists hidden behind the horizon. The solutions for the photon sphere are:
\begin{itemize}
\item $L>0$ and $-\frac{1}{2} < J$: $r_{\rm ph}=1+\sqrt{1+2J}\geq r_H$
\item $L>0$ and $-\frac{1}{2} < J <0$: $r_{\rm ph}=1-\sqrt{1+2J} \leq r_H$
\item $L<0$ and $ J < \frac{1}{2}$: $r_{\rm ph}=1+\sqrt{1-2J}\geq r_H$
\item $L<0$ and $ 0 < J < \frac{1}{2}$: $r_{\rm ph}=1-\sqrt{1-2J}\leq r_H$
\end{itemize}
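As a quick numerical sanity check, the four branches above can be encoded directly (a sketch in units $M=1$, so that $r_H=1$; the function name and structure are our own):

```python
import math

def photon_sphere_radii(J, L):
    """Photon-sphere radii r_ph for given black-hole angular momentum J
    and photon angular momentum L, in units M = 1 so that r_H = 1.
    Implements the four branches listed above."""
    radii = []
    if L > 0 and J > -0.5:
        radii.append(1.0 + math.sqrt(1.0 + 2.0 * J))   # outside the horizon
    if L > 0 and -0.5 < J < 0:
        radii.append(1.0 - math.sqrt(1.0 + 2.0 * J))   # hidden behind the horizon
    if L < 0 and J < 0.5:
        radii.append(1.0 + math.sqrt(1.0 - 2.0 * J))   # outside the horizon
    if L < 0 and 0 < J < 0.5:
        radii.append(1.0 - math.sqrt(1.0 - 2.0 * J))   # hidden behind the horizon
    return radii

# for J in (-1/2, 0) a second radius exists behind the horizon when L > 0
assert len(photon_sphere_radii(-0.3, 1.0)) == 2
assert len(photon_sphere_radii(0.3, 1.0)) == 1
```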
\subsection{Causality and time flow}
Note that this spacetime allows closed timelike curves (CTCs). For an extensive discussion on CTCs see \cite{Gibbons:1999uv}. CTCs occur for $g_{\phi\phi}<0$. In this spacetime the metric component is
\begin{equation}
g_{\phi\phi} = \frac{\left(r^4-4J^2\right) \sin^2\theta}{\sqrt{r^4-4J^2\cos^2\theta}}
\end{equation}
and therefore, CTCs are present in the region $r_S< r <r_{CTC}$ with $r_{CTC}=\sqrt{2|J|}$. Since $r_{CTC}< r_H$ for $J< \frac{M^2}{2}$, the CTCs are hidden behind the degenerate horizon.\\
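The sign change of $g_{\phi\phi}$ at $r_{CTC}$ can be checked numerically (a sketch in units $M=1$; the helper names are our own):

```python
import math

def r_ctc(J):
    """Radius below which closed timelike curves occur (units M = 1)."""
    return math.sqrt(2.0 * abs(J))

def g_phiphi_equator(r, J):
    """g_{phi phi} evaluated in the equatorial plane (theta = pi/2),
    where sqrt(r^4 - 4 J^2 cos^2(theta)) reduces to r^2."""
    return (r**4 - 4.0 * J**2) / r**2

# g_{phi phi} changes sign exactly at r = r_CTC = sqrt(2|J|)
J = 0.4
assert g_phiphi_equator(0.99 * r_ctc(J), J) < 0 < g_phiphi_equator(1.01 * r_ctc(J), J)
```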
As in \cite{Gibbons:1999uv} we can consider an effective potential from the $t$-equation by solving \eqref{eqn:t-equation} for $E$
\begin{equation}
V_{\rm time}= -\frac{q}{\sqrt{2}r} + \frac{r-1}{r \left( 4J^2-r^4 \right)}\left[ 2J\left(rL+\sqrt{2}qJ\right) \right] \, .
\end{equation}
$V_{\rm time}$ is equal to the first part (i.e. without the root) of the effective potentials $V^\pm$ from the $r$-equation. $V_{\rm time}$ diverges for $r\rightarrow r_{CTC}=\sqrt{2|J|}$. From the effective potential $V_{\rm time}$ we get information on the time flow of the coordinate time $t$ with respect to the affine parameter $\gamma$ (which is related to the proper time). There are regions where $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) >0$ and regions where $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) <0$. Time runs forward for particles and backwards for antiparticles.
In figure \ref{pic:timepotential} we plot the effective potentials from the $r$-equation together with the potential from the $t$-equation. We also see here that there are particles with $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) >0$, but with negative energies. In \cite{Christodoulou:1972kt, Denardo:1973} this region was called ``generalized ergosphere''.
For most orbits, the time flow stays either positive or negative. However, for certain sets of parameters, particles can cross from a region with $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) >0$ into a region with $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) <0$, so that the time flow is reversed at some point along these orbits. This occurs only behind the horizon.
\begin{figure}[h!]
\subfigure[~$\delta=1, q=0.8, J=0.4, L=-3.75$]{
\includegraphics[width=0.4\linewidth]{time-pot1.png}
}
\subfigure[~$\delta=1, q=0.5, J=0.4, L=-27$]{
\includegraphics[width=0.4\linewidth]{time-pot2.png}
}
\caption{Effective potentials $V_+$ (light blue curves) and $V_-$ (dark blue curves) from the $r$-equation and from the $t$-equation (red). The grey areas are forbidden since $R<0$. In the red areas the coordinate time runs backwards $\left( \frac{{\mathrm{d}} t}{{\mathrm{d}}\gamma} \right) <0$. The vertical dashed line indicates the position of the horizon.}
\label{pic:timepotential}
\end{figure}
\section{Solution of the equations of motion}
\label{sec:solutions}
In this section we will present the analytical solution of the equations of motion for charged and uncharged particles in the equatorial plane of an extremal black hole in Kaluza-Klein theory. The solutions of the $r$-equation and the $\phi$-equation can then be used to plot the orbits. For charged particles the equations of motion \eqref{eqn:r-equation}--\eqref{eqn:t-equation} are of hyperelliptic type. However, in the special case of uncharged particles $q=0$ the equations of motion reduce to elliptic type, see equations \eqref{eqn:r-equation2}--\eqref{eqn:t-equation2}.
\subsection{The $r$-equation}
\label{sec:r-solution}
\subsubsection{Hyperelliptic case}
For charged particles the right side of the $r$-equation \eqref{eqn:r-equation} is a polynomial of order 6
\begin{equation}
\left( \frac{{\mathrm{d}} r}{{\mathrm{d}}\gamma} \right)^2 = R(r) = \sum_{i=0}^6 a_i r^i \, .
\end{equation}
The order of the polynomial can be reduced by substituting $r=\pm\frac{1}{x} + r_0$ with $R(r_0)=0$
\begin{equation}
\left( x \frac{{\mathrm{d}} x}{{\mathrm{d}}\gamma} \right)^2 = \sum_{i=0}^5 b_i x^i \quad \text{with} \quad b_i=\left. \frac{(\pm 1)^i}{(6-i)!}\frac{{\mathrm{d}}^{6-i}R}{{\mathrm{d}} r^{6-i}}\right| _{r=r_0} \, .
\end{equation}
A second substitution $x=\frac{4}{b_5}y$ transforms the polynomial into the canonical form
\begin{equation}
\left( y \frac{{\mathrm{d}} y}{{\mathrm{d}}\gamma} \right)^2 = \sum_{i=0}^5 \lambda_i y^i = P_5(y)\,
\end{equation}
where
\begin{equation}
\lambda_5 = 4 \, , \ \lambda_4=b_4 \, , \ \lambda_3 = \frac{b_3 b_5}{4} \, , \ \lambda_2 = \frac{b_2 b_5^2}{16} \, , \ \lambda_1 = \frac{b_1 b_5^3}{64} \, , \ \lambda_0 = \frac{b_0 b_5^4}{256} \, .
\end{equation}
A separation of variables leads to a hyperelliptic integral of genus 2
\begin{equation}
\gamma - \gamma_{\rm in} = \int_{y_{\rm in}}^y \frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}}
\end{equation}
where $y_{\rm in}=\frac{\pm b_5}{4(r_{\rm in}-r_0)}$ and $r_{\rm in}$ is the initial value for $r$. We are looking for a solution $y(\gamma)$, so in essence we have to solve a special case of the Jacobi inversion problem, see \cite{Hackmann:2008tu}, \cite{Hackmann:2008zz}, \cite{Enolski:2010if}, \cite{Enolski:2011id} for a detailed discussion. The solution is
\begin{equation}
y(\gamma) = \left. \frac{\sigma_1\left( \vec{\gamma}_\infty \right)}{\sigma_2\left( \vec{\gamma}_\infty \right)} \right| _{\sigma\left( \vec{\gamma}_\infty \right)=0}
\end{equation}
where
\begin{equation}
\vec{\gamma}_\infty = \left(
\begin{array}{c}
\gamma_1\\
\gamma-\gamma_{\rm in}'
\end{array}
\right)
\quad \text{and} \quad
\gamma_{\rm in}'=\gamma_{\rm in} + \int_{y_{\rm in}}^\infty \frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}} \, .
\end{equation}
If the initial value $r_{\rm in}$ is chosen at one of the turning points in the orbit, i.e. the zeros of $R(r)$, then $\gamma_{\rm in}'$ can be expressed in terms of the periods. The functions $\sigma_1$ and $\sigma_2$ are derivatives of the Kleinian $\sigma$-function $\sigma_i=\frac{\partial \sigma (\vec{z})}{\partial z_i}$. The constant $\gamma_1$ can be determined by the condition $\sigma\left( \vec{\gamma}_\infty \right)=0$. By resubstitution we get the solution of the $r$-equation \eqref{eqn:r-equation}
\begin{equation}
r(\gamma)=\pm \frac{b_5}{4 } \left. \frac{\sigma_2\left( \vec{\gamma}_\infty \right)}{\sigma_1\left( \vec{\gamma}_\infty \right)} \right| _{\sigma\left( \vec{\gamma}_\infty \right)=0} + r_0\, .
\end{equation}
\subsubsection{Elliptic case}
For neutral particles the $r$-equation simplifies so that the right side of equation \eqref{eqn:r-equation2} is a polynomial of order 4
\begin{equation}
\left( \frac{{\mathrm{d}} r}{{\mathrm{d}}\gamma} \right)^2 = R(r) = \sum_{i=0}^4 a_i r^i
\end{equation}
which can be reduced to third order by the substitution $r=\pm\frac{1}{x} + r_0$ with $R(r_0)=0$
\begin{equation}
\left(\frac{{\mathrm{d}} x}{{\mathrm{d}} \gamma}\right) ^2 = \sum _{i=1}^3 b_i x^i \, .
\end{equation}
Substituting further $x=\frac{1}{b_3}\left( 4y-\frac{b_2}{3}\right)$ gives the standard Weierstra{\ss} form
\begin{equation}
\left(\frac{{\mathrm{d}} y}{{\mathrm{d}} \gamma}\right)^2= 4 y^3-g_2 y -g_3 = P_3 (y)
\label{eqn:weierstrass-form}
\end{equation}
with the coefficients
\begin{equation}
g_2=\frac{b_2^2}{12} - \frac{b_1b_3}{4} \, , \qquad g_3=\frac{b_1b_2b_3}{48} - \frac{b_0b_3^2}{16}-\frac{b_2^3}{216} \ .
\end{equation}
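The invariants $g_2$, $g_3$ can be verified numerically: for arbitrary coefficients $b_i$, substituting $x=\frac{1}{b_3}\left(4y-\frac{b_2}{3}\right)$ and rescaling by $\frac{b_3^2}{16}$ (since $\frac{{\mathrm{d}} x}{{\mathrm{d}}\gamma}=\frac{4}{b_3}\frac{{\mathrm{d}} y}{{\mathrm{d}}\gamma}$) must reproduce $4y^3-g_2y-g_3$. A self-contained check (the function name is our own):

```python
def weierstrass_invariants(b0, b1, b2, b3):
    """Invariants g2, g3 obtained when (dx/dgamma)^2 = b3 x^3 + b2 x^2
    + b1 x + b0 is brought to the Weierstrass form
    (dy/dgamma)^2 = 4 y^3 - g2 y - g3 via x = (4 y - b2/3) / b3."""
    g2 = b2**2 / 12.0 - b1 * b3 / 4.0
    g3 = b1 * b2 * b3 / 48.0 - b0 * b3**2 / 16.0 - b2**3 / 216.0
    return g2, g3

# numerical consistency check: both sides agree for arbitrary y
b0, b1, b2, b3 = 0.7, -1.3, 2.1, 0.9
g2, g3 = weierstrass_invariants(b0, b1, b2, b3)
for y in (-1.0, 0.3, 2.5):
    x = (4.0 * y - b2 / 3.0) / b3
    lhs = (b3**2 / 16.0) * (b3 * x**3 + b2 * x**2 + b1 * x + b0)
    assert abs(lhs - (4.0 * y**3 - g2 * y - g3)) < 1e-9
```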
Equation \eqref{eqn:weierstrass-form} is solved by the Weierstra{\ss} $\wp$-function and therefore after resubstitution the solution of $r$-equation \eqref{eqn:r-equation2} is
\begin{equation}
r(\gamma)=\pm \frac{b_3}{4 \wp\left(\gamma - \gamma'_{\rm in}; g_2, g_3\right) - \frac{b_2}{3}} +r_0
\end{equation}
with the initial values $\gamma'_{\rm in}=\gamma_{\rm in}+\int^\infty_{y_{\rm in}}{\frac{{\mathrm{d}} y}{\sqrt{4y^3-g_2y-g_3}}}$ and $y_{\rm in}=\pm\frac{b_3}{4(r_{\rm in}-r_0)} + \frac{b_2}{12}$.
\subsection{The $\phi$-equation}
\label{sec:phi-solution}
\subsubsection{Hyperelliptic case}
Let us first solve the $\phi$-equation in case of charged particles. We use the substitution $r=\pm \frac{b_5}{4y}+r_0$ and knowing that ${\mathrm{d}}\gamma=\frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}}$ we can write the $\phi$-equation \eqref{eqn:phi-equation} as
\begin{equation}
{\mathrm{d}} \phi = \left( C_0 + C_1 y + \frac{C_2}{y-p} \right) \frac{{\mathrm{d}} y}{\sqrt{P_5(y)}}
\label{eqn:phi-partial}
\end{equation}
with the pole
\begin{equation}
p=\frac{b_5}{4(r_0-1)}
\end{equation}
and the constants
\begin{align}
C_0 &= -\frac{b_5}{4(r_0-1)^2}\left[ L(r_0-1)^2 + J(2E+\sqrt{2}q) \right] \\
C_1 &= \frac{1}{1-r_0}\left[ 2EJr_0+Lr_0(1-r_0)+Jq\sqrt{2}(2-r_0) \right]\\
C_2 &= -\frac{b_5^2J(2E+\sqrt{2}q)}{(r_0-1)^3}
\end{align}
which arise from a partial fraction decomposition. Integrating equation \eqref{eqn:phi-partial} yields
\begin{align}
\phi-\phi_{\rm in} &= C_0 \int_{y_{\rm in}}^y \! \frac{{\mathrm{d}} y}{\sqrt{P_5(y)}} + C_1 \int_{y_{\rm in}}^y \! \frac{y {\mathrm{d}} y}{\sqrt{P_5(y)}} + C_2 \int_{y_{\rm in}}^y \! \frac{1}{y-p}\frac{{\mathrm{d}} y}{\sqrt{P_5(y)}} \nonumber \\
&= C_0 (\gamma_1 -\gamma_{\rm in}'') + C_1 (\gamma -\gamma_{\rm in}') + C_2 \int_{y_{\rm in}}^y \! \frac{1}{y-p}\frac{{\mathrm{d}} y}{\sqrt{P_5(y)}}
\end{align}
where $\gamma_{\rm in}'=\gamma_{\rm in} + \int_{y_{\rm in}}^\infty \frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}}$ and $\gamma_{\rm in}''=-\int_{y_{\rm in}}^\infty \frac{{\mathrm{d}} y}{\sqrt{P_5(y)}}$ can be expressed in terms of the periods if $y_{\rm in}$ is chosen to be a zero of $P_5$. As before $\gamma_1$ is the constant to be determined by $\sigma\left( \vec{\gamma}_\infty \right)=0$. The solution of the remaining integral, which is a hyperelliptic integral of the third kind, was found in \cite{Enolski:2011id} and can be proven with the help of the Riemann vanishing theorem. In terms of the Kleinian $\sigma$-function the solution is
\begin{equation}
\int_{y_{\rm in}}^y \frac{1}{y-p}\frac{{\mathrm{d}} y}{\sqrt{ P_5(y)}} = \frac{1}{\sqrt{ P_5(p)}} \left[ -2 \int_{y_{\rm in}}^y {\mathrm{d}}\vec{u}^T \int_{e_0}^p {\mathrm{d}}\vec{r}
+\ln \frac{\sigma\left(\int_{\infty}^y {\mathrm{d}}\vec{u}- \int_{e_0}^{p} {\mathrm{d}} \vec{u} - \vec{K}_\infty \right)}{\sigma\left(\int_{\infty}^y {\mathrm{d}} \vec{u}+ \int_{e_0}^{p} {\mathrm{d}} \vec{u} - \vec{K}_\infty \right)}
- \ln \frac{\sigma\left(\int_{\infty}^{y_{\rm in}} {\mathrm{d}} \vec{u} - \int_{e_0}^{p} {\mathrm{d}} \vec{u} - \vec{K}_\infty \right)}{\sigma\left(\int_{\infty}^{y_{\rm in}} {\mathrm{d}}\vec{u} + \int_{e_0}^{p} {\mathrm{d}} \vec{u} - \vec{K}_\infty \right)} \right]
\label{eqn:sol-thirdkind}
\end{equation}
where the components of the vectors ${\mathrm{d}} \vec{u}$ and ${\mathrm{d}} \vec{r}$ are
\begin{equation}
{\mathrm{d}} u_1 = \frac{{\mathrm{d}} y}{\sqrt{P_5(y)}} \, , \
{\mathrm{d}} u_2 = \frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}} \, , \
{\mathrm{d}} r_1 = (\lambda_3 y+ 2\lambda_4 y^2 + 12 y^3) \frac{{\mathrm{d}} y}{4\sqrt{P_5(y)}} \, , \
{\mathrm{d}} r_2 = \frac{y^2{\mathrm{d}} y}{\sqrt{P_5(y)}} \, .
\end{equation}
$\vec{K}_\infty$ is the vector of Riemann constants and the basepoint $e_0$ is a zero of $P_5$. The integrals in equation \eqref{eqn:sol-thirdkind} can also be expressed as
\begin{equation}
\int_\infty^y {\mathrm{d}} \vec{u}=
\left(
\begin{array}{c}
\gamma_1\\
\gamma-\gamma_{\rm in}'
\end{array}
\right)
=\vec{\gamma}_\infty
\ , \quad
\int_\infty^{y_{\rm in}} {\mathrm{d}} \vec{u}=
\left(
\begin{array}{c}
\gamma_{\rm in}''\\
\gamma_{\rm in}-\gamma_{\rm in}'
\end{array}
\right)
=\vec{\gamma}_\infty^{\, \rm in}
\quad \text{and} \quad
\int_{y_{\rm in}}^y {\mathrm{d}} \vec{u} = \vec{\gamma}_\infty - \vec{\gamma}_\infty^{\, \rm in}
\ .
\end{equation}
We also define the constant vectors $\int_{e_0}^p {\mathrm{d}}\vec{r}= \vec{c}_r$ and $\int_{e_0}^p {\mathrm{d}}\vec{u}= \vec{c}_u$. Then we can write the solution of the $\phi$-equation \eqref{eqn:phi-equation} as
\begin{equation}
\phi (\gamma) =\phi_{\rm in} + C_0 (\gamma_1 -\gamma_{\rm in}'') + C_1 (\gamma -\gamma_{\rm in}') + \frac{C_2}{\sqrt{ P_5(p)}} \left[ -2 \left( \vec{\gamma}_\infty - \vec{\gamma}_\infty^{\, \rm in} \right)^T \vec{c}_r
+\ln \frac{\sigma\left( \vec{\gamma}_\infty - \vec{c}_u - \vec{K}_\infty \right)}{\sigma\left( \vec{\gamma}_\infty + \vec{c}_u - \vec{K}_\infty \right)}
- \ln \frac{\sigma\left( \vec{\gamma}_\infty^{\, \rm in} - \vec{c}_u - \vec{K}_\infty \right)}{\sigma\left( \vec{\gamma}_\infty^{\, \rm in} + \vec{c}_u - \vec{K}_\infty \right)} \right]
\end{equation}
\subsubsection{Elliptic case}
In the special case $q=0$, we use the substitutions from section \ref{sec:r-solution}, which transform the $r$-equation \eqref{eqn:r-equation2} into the Weierstra{\ss} form (equation \eqref{eqn:weierstrass-form}). Then we can write the $\phi$-equation \eqref{eqn:phi-equation2} as
\begin{equation}
{\mathrm{d}}\phi = C_0 \frac{{\mathrm{d}} y}{\sqrt{P_3(y)}} + \frac{C_1}{1-p} \frac{{\mathrm{d}} y}{\sqrt{P_3(y)}}
\end{equation}
where
\begin{align}
C_0 &=L-\frac{2EJ}{r_0-1} \, ,\\
C_1 &=\frac{EJb_3}{2(r_0-1)^2}\, ,\\
p &= \frac{b_2}{12}-\frac{b_3}{4(r_0-1)}\, .
\end{align}
Then we obtain
\begin{equation}
\phi (\gamma) =\phi_{\rm in} + C_0 (\gamma -\gamma_{\rm in}) + \int_{y_{\rm in}}^y\!\frac{C_1}{1-p} \frac{{\mathrm{d}} y}{\sqrt{P_3(y)}} \, .
\end{equation}
The remaining integral is an elliptic integral of the third kind which can easily be integrated (see e.g. \cite{Enolski:2011id}, \cite{Kagramanova:2010bk}, \cite{Grunau:2010gd}), so that the solution of the $\phi$-equation \eqref{eqn:phi-equation2} is in terms of the Weierstra{\ss} $\wp$, $\zeta$ and $\sigma$ functions
\begin{equation}
\phi (\gamma) =\phi_{\rm in} + C_0 (\gamma -\gamma_{\rm in}) + \frac{C_1}{\wp'(v)}\left( 2\zeta(v)(\gamma -\gamma_{\rm in}') + \ln\frac{\sigma(\gamma -\gamma_{\rm in}'-v)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'-v)} - \ln\frac{\sigma(\gamma -\gamma_{\rm in}'+v)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'+v)}\right)
\end{equation}
where the constant $v$ is determined by $p=\wp(v)$.
\subsection{The $t$-equation}
The solution procedure for the $t$-equation is similar to that for the $\phi$-equation. However, in the case of charged particles, some integrals cannot be solved with the current methods.
\subsubsection{Hyperelliptic case}
In the case of charged particles ($q\neq 0$) we use the substitution $r=\pm \frac{b_5}{4y}+r_0$ and knowing that ${\mathrm{d}}\gamma=\frac{y{\mathrm{d}} y}{\sqrt{P_5(y)}}$ we can write the $t$-equation \eqref{eqn:t-equation} as
\begin{equation}
{\mathrm{d}} t = \left( A_0 + A_1 y + \frac{A_2}{y-p} + \frac{B_1}{(y-p)^2} + \frac{B_2}{y} + \frac{B_3}{y^2}\right) \frac{{\mathrm{d}} y}{\sqrt{P_5(y)}}
\end{equation}
with the pole $p=\frac{b_5}{4(r_0-1)}$ and the constants $A_i$ and $B_i$ which arise from a partial fraction decomposition and can be expressed in terms of the parameters of the black hole and the test particle. The terms containing $A_i$ can be integrated as shown in section \ref{sec:phi-solution}. Unfortunately, the terms containing $B_i$ cannot be integrated analytically with the current methods.
\subsubsection{Elliptic case}
In the case $q=0$ we use the substitutions from section \ref{sec:r-solution}, which transform the $r$-equation \eqref{eqn:r-equation2} into the Weierstra{\ss} form (equation \eqref{eqn:weierstrass-form}). Then the $t$-equation \eqref{eqn:t-equation2} can be written as
\begin{equation}
{\mathrm{d}} t = \left( A_0 + \sum _{i=1}^2 \frac{A_i}{y-p_i} + \sum _{i=1}^2 \frac{B_i}{(y-p_i)^2} \right) \frac{{\mathrm{d}} y}{\sqrt{P_3(y)}}
\label{eqn:tparfrac2}
\end{equation}
with the poles $p_1=\frac{b_2}{12}-\frac{b_3}{4(r_0-1)}$ and $p_2=\frac{b_2}{12}$. The constants $A_i$ and $B_i$ (which are different from the constants in the hyperelliptic case) arise from a partial fraction decomposition and can be expressed in terms of the parameters of the black hole and the test particle. Here it is possible to integrate equation \eqref{eqn:tparfrac2} (compare \cite{Kagramanova:2010bk}, \cite{Grunau:2010gd} and \cite{Willenborg:2018zsv}), so that the solution of the $t$-equation \eqref{eqn:t-equation2} is
\begin{align}
t (\gamma) &=t_{\rm in} + A_0 (\gamma -\gamma_{\rm in}) + \sum _{i=1}^2 \frac{A_i}{\wp'(v_i)}\left( 2\zeta(v_i)(\gamma -\gamma_{\rm in}') + \ln\frac{\sigma(\gamma -\gamma_{\rm in}'-v_i)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'-v_i)} - \ln\frac{\sigma(\gamma -\gamma_{\rm in}'+v_i)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'+v_i)}\right) \nonumber\\
&- \sum _{i=1}^2 B_i\frac{\wp''(v_i)}{\wp'(v_i)^3} \left( 2\zeta(v_i)(\gamma -\gamma_{\rm in}') + \ln\frac{\sigma(\gamma -\gamma_{\rm in}'-v_i)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'-v_i)} - \ln\frac{\sigma(\gamma -\gamma_{\rm in}'+v_i)}{\sigma(\gamma_{\rm in} -\gamma_{\rm in}'+v_i)}\right) \nonumber\\
&-\sum _{i=1}^2 \frac{B_i}{\wp'(v_i)^2} \left[2 \wp(v_i) (\gamma -\gamma_{\rm in}) + 2 (\zeta(\gamma -\gamma_{\rm in}') - \zeta(\gamma_{\rm in} -\gamma_{\rm in}')) + \frac{\wp'(\gamma -\gamma_{\rm in}')}{\wp(\gamma -\gamma_{\rm in}') - \wp(v_i)} - \frac{\wp'(\gamma_{\rm in} -\gamma_{\rm in}')}{\wp(\gamma_{\rm in} -\gamma_{\rm in}') - \wp(v_i)} \right]
\label{eqn:tsol}
\end{align}
where the constants $v_i$ are determined by $p_i=\wp(v_i)$. Note that there is a printing error concerning the signs in equation (62) and (63) of \cite{Willenborg:2018zsv}, which we corrected in equation \eqref{eqn:tsol}.
\subsection{The orbits}
\label{sec:orbits}
In this section, the analytical solutions are used to plot the orbits of charged particles in the spacetime of an extremal black hole in Kaluza-Klein theory. The orbits are plotted in Boyer-Lindquist coordinates
\begin{align}
x&= \sqrt{r^2+J^2}\sin\phi \, ,\\
y&= \sqrt{r^2+J^2}\cos\phi \, .
\end{align}
Figure \ref{pic:orbits} shows some examples of the orbits. Bound orbits can be seen in (a), (b), (c) and (d), whereas escape orbits can be seen in (e) and (f). The bound orbit in (c) exists hidden behind the horizon. (d) shows a many-world bound orbit crossing the horizon multiple times. A two-world escape orbit crossing the horizon twice is depicted in (f). The test particles in figures (b), (c), (d) and (f) cross the turnaround boundaries (see section \ref{sec:azimuth}) where they change direction and therefore move on ``loops''.
\begin{figure}[p]
\centering
\subfigure[Bound orbit with $\delta=1, q=0.7, J=0.28, L=-3.6, E=0.91$.]{
\includegraphics[width=0.4\textwidth]{bo1.png}
}
\subfigure[Bound orbit with $\delta=1, q=2.5, J=0.6, L=8.8, E=0.955$.]{
\includegraphics[width=0.4\textwidth]{bo2.png}
}
\subfigure[Bound orbit behind the horizon with $\delta=1, q=0.1, J=0.4, L=-7, E=6.2$.]{
\includegraphics[width=0.4\textwidth]{bo3.png}
}
\subfigure[Many-world bound orbit with $\delta=1, q=0.8, J=0.34, L=-4.1, E=1.01$.]{
\includegraphics[width=0.4\textwidth]{mbo.png}
}
\subfigure[Escape orbit with $\delta=1, q=0.1, J=-0.4, L=4, E=1.381$.]{
\includegraphics[width=0.4\textwidth]{eo.png}
}
\subfigure[Two-world escape orbit with $\delta=1, q=0.7, J=-0.2, L=2.7, E=1.3$]{
\includegraphics[width=0.4\textwidth]{tweo.png}
}
\caption{Different orbits of charged particles in the spacetime of an extremal black hole in Kaluza-Klein theory. The blue curves are the orbits, the black dashed circles indicate the position of the horizon and the red dotted circles show the so-called turnaround boundaries.}
\label{pic:orbits}
\end{figure}
\section{Conclusion}
In this article we studied the motion of electrically charged particles and light in the spacetime of a rotating dyonic black hole in Kaluza-Klein theory. We focused on the extremal case with a single degenerate horizon. This case is of particular interest, since the angular velocity of the horizon vanishes, but the solution can still have angular momentum.
In section \ref{sec:EQM} we saw that the Hamilton-Jacobi equation for particles moving around the extremal Rasheed black hole separates in three cases:
\begin{enumerate}
\item Charged particles around a non-rotating Rasheed black hole with $J=0$.
\item Uncharged massless particles with $\delta=0$ and $q=0$.
\item Charged particles in the equatorial plane.
\end{enumerate}
In the present article we studied the third case. For electrically charged particles, the equations of motion are of hyperelliptic type and were solved analytically in terms of the Kleinian $\sigma$ function. For uncharged particles the equations simplify to elliptic type and were solved analytically in terms of the Weierstra{\ss} $\wp$, $\sigma$ and $\zeta$ functions. We analyzed the particle motion with the help of effective potentials and parametric plots and gave a full list of all possible orbit types. Moreover, we calculated the photon sphere and considered the causality in the spacetime. Here we saw that CTCs exist behind the horizon.
Depending on the parameters of the black hole and the test particle, the azimuthal equation of motion vanishes at certain radii. At these so-called turnaround boundaries, the test particles change their direction. A similar behaviour is usually seen in an ergosphere, which does not exist in this spacetime. We found that the turnaround boundary can intersect with the effective potentials, so that a test particle is at rest at this point. If this happens at the local minimum of one of the effective potentials, a static ring occurs, where particles are at rest at all times.
For future work it would be interesting to consider particle motion beyond the equatorial plane, and to study the cases of charged particles moving around a non-rotating Rasheed black hole ($J=0$) and of uncharged massless particles with $\delta=0$ and $q=0$. Furthermore, one could explore the particle motion in the general non-extreme Rasheed metric. However, this will only be possible with numerical techniques.
\section{Acknowledgements}
We would like to thank Jutta Kunz for fruitful discussions. S.G. gratefully acknowledges support by the DFG (Deutsche Forschungsgemeinschaft/ German Research Foundation) within the Research Training Group 1620 ``Models of Gravity.''
\clearpage
\bibliographystyle{unsrt}
\section{Introduction}
Lunar digital elevation models (DEMs) have been constructed with sensory data collected from the Lunar Reconnaissance Orbiter (LRO) \cite{chin2007lunar, smith2010lunar} and provided useful topographic information of the Moon since 2009.
One of the primary roles of LRO is to identify high-resolution elevations of the lunar surface by using data obtained from the Lunar Orbiter Laser Altimeter (LOLA) and the Lunar Reconnaissance Orbiter Camera (LROC) \cite{robinson2010lunar_a, robinson2010lunar_b}.
LOLA provides a global topographic model with a resolution of 1024 pixels per degree ($\sim$30m at the equator) with high accuracy.
On the other hand, LROC collects stereo observations with two narrow-angle cameras (NACs), which enables the generation of higher resolution ($\sim$5m at the equator) DEMs, called NAC DEMs \cite{tran2010generating}.
Unfortunately, raw NAC DEMs contain regions of no-data gaps (voids) since stereo image matching processes often fail to match pixels near shadowed regions, breaking the continuity of the terrain map.
Despite this problem, studies on no-data gap-filling algorithms for lunar NAC DEMs have not been sufficiently reported.
Furthermore, reconstructing no-data gaps in lunar DEMs is not straightforward due to the following challenging issues:
1) NAC DEMs require a high-resolution reconstruction methodology,
2) the reconstruction algorithm must be reliable, since it can affect related lunar studies or exploration missions, and
3) NAC DEMs are large, high-resolution area maps, so a scalable approach should be applied.
Accordingly, the main aim of this paper is to develop a scheme that robustly and efficiently infers elevations in no-data regions while maintaining the high resolution of NAC DEMs.
In particular, this paper adopts the deep-learning approach that has been gaining popularity in aerospace fields as well as artificial intelligence studies \cite{
yang2016neural, chang2017adaptive, bagherzadeh2018nonlinear, furfaro2018deep, roy2019lunar}.
One of the most conventional approaches for gap filling algorithms is to use an interpolation algorithm \cite{reuter2007evaluation, jassim2013image}.
Interpolation based method primarily assumes spatial continuity of the map; the elevation value for a given point is more likely to be similar to the value of the nearby region than to that of the distant region.
As such, interpolation algorithms predict the unseen value as a weighted average of neighboring values, where the weight is inversely proportional to the distance.
Consequently, interpolation approaches struggle to account for regional characteristics and tend to focus too heavily on local information near the gap.
Besides, they often produce blurred results and fail to resolve the first challenge.
For more accurate and precise reconstruction, however, algorithms are needed that adaptively reflect local characteristics and effectively aggregate information even far from the target region.
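The inverse-distance weighting described above can be sketched as follows (an illustrative example only, not a method evaluated in this paper; the function name and `power` parameter are our own choices):

```python
import math

def idw_fill(known, target, power=2.0):
    """Inverse-distance-weighted estimate of the elevation at `target`,
    given `known` as a list of ((x, y), elevation) pairs.  Each
    neighbor's weight decays as 1/distance**power, so the prediction is
    dominated by local information near the gap -- the limitation noted
    above."""
    num = den = 0.0
    for (x, y), z in known:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return z  # target coincides with a known pixel
        w = 1.0 / d**power
        num += w * z
        den += w
    return num / den
```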
Alternatively, one can apply image inpainting algorithms by recasting the void-filling task as an image completion task; elevation maps can be viewed as single-channel images.
Traditional inpainting algorithms \cite{bertalmio2000image, ballester2000filling, efros2001image, barnes2009patchmatch, komodakis2006image}, however, often fail to preserve details of surface information, thus are unsuitable for high-resolution reconstruction on large holes.
On the other hand, data-driven approaches based on deep-learning methods such as a convolutional neural network (CNN) \cite{krizhevsky2012imagenet} and generative adversarial network (GAN) \cite{goodfellow2014generative, arjovsky2017wasserstein} have been recently studied \cite{iizuka2017globally, yeh2017semantic, yang2017high, gavriil2019void}.
Although such approaches are known to generate unseen regions at very high resolution, it is doubtful whether the predicted output of deep neural networks is reliable, because they do not provide any uncertainty analysis.
To resolve the presented challenges, this paper proposes a data-driven probabilistic inference scheme as illustrated in Figure \ref{fig:arch}.
In particular, this paper recasts the no-data gap reconstruction task as a conditional inference problem.
Then, we adopt the attention mechanism \cite{vaswani2017attention} based neural processes (NPs) \cite{garnelo2018neural, garnelo2018conditional} as a stochastic inference process to predict the elevation values in shadowed regions given nearby observation data.
The neural process model is trained in a self-supervised manner by using randomly masked NAC DEM data.
We evaluated the effectiveness of our approach on the Apollo 17 landing site and showed that it fills no-data gaps of large-scale NAC DEMs with high precision and reliability.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{fig/arch.png}
\caption{Flow diagram of the proposed neural process based no-data gap reconstruction method in Lunar NAC DEM.}
\label{fig:arch}
\end{figure}
\section{Preliminaries} \label{sec:bgd}
Our method builds upon the attentive neural processes model \cite{kim2019attentive}, a state-of-the-art stochastic process model that incorporates the attention approach into NPs.
Here we briefly describe preliminary concepts about NPs and attention models.
\subsection{Nomenclature}
In the following sections, subscripts $i$ and $j$ denote the $i^{th}$ and $j^{th}$ data set, respectively.
Scalar, vector and matrix values are represented using lowercase italic, lowercase bold italic, and capital bold italic, respectively.
$\mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \mathbf{\Sigma})$ represents the probability density of vector $\mathbf{x}$ under the normal distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\mathbf{\Sigma}$.
$p(\mathbf{X}|\mathbf{Y};\mathbf{Z})$ denotes the probability of random variable $\mathbf{X}$ conditioning on random variable $\mathbf{Y}$ and parameter $\mathbf{Z}$.
\subsection{Neural Processes} \label{sec:NP}
The neural processes (NP) model is a flexible and powerful multi-dimensional regression model that approximates conditional distributions over functions given a set of observations \cite{garnelo2018conditional}.
Consider a set of observation data consisting of $N$ input $\mathbf{x}_i \in \mathbb{R}^P$ and output $\mathbf{y}_i = f(\mathbf{x}_i) \in \mathbb{R}^Q$ pairs $\mathcal{O} = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{N}$, and another set of $M$ unlabeled points $\mathbf{X}^* = \{\mathbf{x}_i^*\}_{i=1}^{M}$.
The main aim of NP is to approximate the conditional distribution of $\mathbf{Y}^* = \{\mathbf{y}_{i}^* = f(\mathbf{x}_{i}^*)\}_{i=1}^{M}$ for an unknown function $f: \mathbb{R}^P \rightarrow \mathbb{R}^Q$.
Since it is analytically intractable to infer the conditional probability for the nonlinear function $f$, NP introduces an encoding process ($g_{\theta}$) that compresses each observation into a finite-dimensional latent vector $\mathbf{r}_i \in \mathbb{R}^D$ and a decoding process ($Q_{\phi}$) that reconstructs unseen data for a given data point and the computed latent vectors:
\begin{equation}
p \left( \mathbf{Y}^* | \mathcal{O}, \mathbf{X}^* \right) \approx Q_{\phi} \left( \mathbf{Y}^* | \mathbf{X}^*; \mathcal{R} \sim g_{\theta}(\mathcal{O}) \right) .
\end{equation}
where $\mathcal{R} = \{ \mathbf{r}_i \}_{i=1}^{N}$ is a set of latent vectors while $\theta$ and $\phi$ are sets of learnable parameters for the encoding and decoding process respectively.
For the decoding process, the mean-field approximation is used to achieve scalable inference \footnote{Parallel inference of $Q_{\phi}$ is enabled on GPUs by the mean-field factorization.}:
\begin{equation}
Q_{\phi} \left( \mathbf{Y}^* | \mathbf{X}^*; \mathcal{R} \right) = \prod_{\mathbf{x}^* \in \mathbf{X}^*} Q_{\phi} \left( \mathbf{y}^* | \mathbf{x}^*; \mathcal{R} \right) .
\end{equation}
In more detail, $Q_{\phi}(\cdot)$ consists of mean and covariance functions $\mu_{\phi}(\cdot)$ and $\sigma_{\phi}(\cdot)$ parameterized by neural networks:
\begin{align}
Q_{\phi} \left( \mathbf{y}^* | \mathbf{x}^*; \mathcal{R} \right) &= \mathcal{N} \left( \mathbf{y}^* ; \mu_{\phi}(\mathbf{x}^*, \mathbf{r}^*), \sigma_{\phi}(\mathbf{x}^*, \mathbf{r}^*) \right) \label{eq:dec} \\
\mathbf{r}^* &= aggregate \left( \mathcal{R} \right) \label{eq:agg}
\end{align}
where the $aggregate$ function is an arbitrary permutation-invariant function (e.g. sum, mean, max operation, etc.).
The element-wise sum has been used as the $aggregate$ function in previous works \cite{garnelo2018neural, garnelo2018conditional}:
\begin{equation}
\mathbf{r} \equiv aggregate( \mathcal{R} ) = \mathbf{r}_1 \oplus \cdots \oplus \mathbf{r}_N \label{eq:sum_agg}
\end{equation}
Similarly, the encoding process $g_{\theta}(\cdot)$ is also parameterized by a neural network \footnote{The encoding process can also be considered stochastic without loss of generality; this paper focuses on a deterministic process for simplicity of implementation and operation.}:
\begin{equation}
\mathbf{r}_i = g_{\theta} \left( \mathbf{x}_i, \mathbf{y}_i \right) ~~~~~ \forall (\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{O} \label{eq:enc}
\end{equation}
Using neural processes offers several significant advantages.
Primarily, NP is robust.
Since NP is a probabilistic model, uncertainty analysis is available, which makes the predicted values highly reliable.
Secondly, NP is scalable.
Note that Gaussian processes (GPs) \cite{rasmussen2006gaussian}, one of the most widely used traditional stochastic processes, impose an extensive computational cost of $\mathcal{O}(N^3)$, while NP can predict the data at a cost of $\mathcal{O}(N + M)$.
Finally, NP can possess strong representation power by using the latest deep learning algorithms.
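The encode-aggregate-decode pipeline of equations \eqref{eq:dec}--\eqref{eq:enc} can be sketched in a few lines of NumPy. The one-hidden-layer perceptrons, random weights, and dimension sizes below are illustrative assumptions for the sketch, not the trained architecture described later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, D = 2, 1, 16          # input, output, latent dims (illustrative)
N, M = 10, 5                # context and target set sizes

def mlp(w1, w2, x):
    # one-hidden-layer perceptron with ReLU activation
    return np.maximum(x @ w1, 0.0) @ w2

# encoder g_theta: (x_i, y_i) -> r_i
w_enc1 = rng.normal(size=(P + Q, 32)); w_enc2 = rng.normal(size=(32, D))
# decoder head: (x*, r) -> mean and log-variance of y*
w_dec1 = rng.normal(size=(P + D, 32)); w_dec2 = rng.normal(size=(32, 2 * Q))

def np_forward(X_ctx, Y_ctx, X_tgt):
    # encoding: one latent vector per context pair
    R = mlp(w_enc1, w_enc2, np.concatenate([X_ctx, Y_ctx], axis=1))  # (N, D)
    r = R.sum(axis=0)                      # permutation-invariant sum aggregation
    r_rep = np.tile(r, (len(X_tgt), 1))    # same global latent for every target
    out = mlp(w_dec1, w_dec2, np.concatenate([X_tgt, r_rep], axis=1))
    mu, log_var = out[:, :Q], out[:, Q:]
    return mu, np.exp(0.5 * log_var)       # mean and std of the Gaussian predictive

mu, sigma = np_forward(rng.normal(size=(N, P)), rng.normal(size=(N, Q)),
                       rng.normal(size=(M, P)))
```

Note that the targets are decoded independently given the shared latent $\mathbf{r}$, which is exactly the mean-field factorization above.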
\subsection{Attentive Neural Processes}
In the early NP models, the $aggregate$ function in \eqref{eq:agg} takes only $\mathcal{R}$ as input and does not consider the inference point $\mathbf{x}^*$.
Moreover, the encoding process in \eqref{eq:enc} does not take other data points into account.
As such, conventional NPs infer the latent variables $\mathcal{R}$ using merely local information at each data point, and therefore fail to compress the global information of the whole observed data.
As a result, the overall process lacks strong representation power, and NP often suffers from an under-fitting problem.
To resolve this challenge, Kim et al.~\cite{kim2019attentive} suggest an attention-based NP model, called attentive neural processes (ANPs), that incorporates the attention mechanism into the encoding and decoding processes.
\subsubsection{Attention Models}
The attention-based neural network architecture, the Transformer, was first introduced by Vaswani et al.~\cite{vaswani2017attention}.
Since the invention of the Transformer, attention models have been applied in state-of-the-art deep learning algorithms across a wide range of fields (e.g. natural language processing \cite{devlin2018bert}, image processing \cite{yu2018generative}, recommendation \cite{zhou2018deep}), achieving superior performance.
The key idea of the attention mechanism is to \textit{learn} to adaptively determine the associations among data: it automatically figures out which data points (regions) deserve more or less attention.
In a lunar crater classification task, for instance, a vanilla neural process places equal importance on every pixel, whereas attention models automatically learn to focus on pixels in crater regions.
One of the biggest advantages of attention models is that they drastically increase the representation power of the model.
Consider a set of key-value (input-output) pairs $\{ \mathbf{k}_i, \mathbf{v}_i \}_{i=1}^N$ and queries (targets) $\{ \mathbf{q}_i \}_{i=1}^M$.
We can build a key matrix $\mathbf{K} \in \mathbb{R}^{N \times P} $, a value matrix $\mathbf{V} \in \mathbb{R}^{N \times Q}$, and a query matrix $\mathbf{Q} \in \mathbb{R}^{M \times P}$ by stacking the data as rows:
\begin{equation}
\mathbf{K} = \Big[ \mathbf{k}_1 , \cdots, \mathbf{k}_N \Big] , ~~
\mathbf{V} = \Big[ \mathbf{v}_1 , \cdots, \mathbf{v}_N \Big] , ~~
\mathbf{Q} = \Big[ \mathbf{q}_1 , \cdots, \mathbf{q}_M \Big] .
\end{equation}
The scaled dot-product (SDP) attention function is given by:
\begin{equation}
SDP(\mathbf{Q}, \mathbf{K}, \mathbf{V}) \equiv \mbox{softmax} \Big( \frac{\mathbf{Q} \mathbf{K}^T}{\sqrt{P}} \Big) \mathbf{V} .
\label{eq:sdp}
\end{equation}
To be specific, the SDP function predicts the $j$-th output for a given query $\mathbf{q}_j$ as a weighted average of the observed values $\{ \mathbf{v}_i \}_{i=1}^N$, giving a higher weight to the value $\mathbf{v}_i$ whose key $\mathbf{k}_i$ has a larger dot product with $\mathbf{q}_j$.
The vanilla SDP function has no trainable parameters.
To enhance the representation power of the attention network, linear transformation layers are often applied before the SDP function:
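A direct NumPy sketch of Eq.~\eqref{eq:sdp} makes the weighted-average interpretation explicit; the shapes here follow the $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$ conventions above with illustrative sizes:

```python
import numpy as np

def sdp_attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(P)) V
    P = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(P)                    # (M, N) query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted average of values

rng = np.random.default_rng(1)
Qm = rng.normal(size=(4, 3))   # M=4 queries, P=3
Km = rng.normal(size=(6, 3))   # N=6 keys
Vm = rng.normal(size=(6, 2))   # N=6 values, Q=2
out = sdp_attention(Qm, Km, Vm)   # (4, 2): one prediction per query
```

Because each softmax row is a convex combination, every output coordinate lies between the minimum and maximum of the corresponding value column.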
\begin{equation}
Attention \big( \mathbf{Q}, \mathbf{K}, \mathbf{V} \big) \equiv SDP \big( \mathbf{Q} \mathbf{W}^Q, \mathbf{K} \mathbf{W}^K, \mathbf{V} \mathbf{W}^V \big)
\end{equation}
where $\mathbf{W}^Q, \mathbf{W}^K \in \mathbb{R}^{P \times D_q}$ and $\mathbf{W}^V \in \mathbb{R}^{Q \times D_v}$ are the query, key, and value embedding matrices, respectively.
Note that using additional linear layers also allows the attention function to take into account the heterogeneity of query, key and value matrices.
Furthermore, we can extend the attention model to a multi-headed model with $K$ relational processes:
\begin{equation}
MA^K \big( \mathbf{Q}, \mathbf{K}, \mathbf{V} \big) \equiv \mathbf{w}_0 + \sum_{k=1}^{K} Attention^k \big( \mathbf{Q}, \mathbf{K}, \mathbf{V} \big)\mathbf{W}^k
\end{equation}
where $Attention^k$ is the $k$-th attention function, $\mathbf{W}^k$ is the corresponding embedding matrix, and $\mathbf{w}_0$ is a bias vector; $K$ is a user-defined hyperparameter.
\subsubsection{Attentive Neural Processes} \label{sec:anp}
The primary difference between NP and ANP is that ANP adopts multi-headed \textit{self-attention} (attention in which keys and queries are identical) in the encoding process:
\begin{equation}
\mathbf{R}_{attn} = SA \left( \mathbf{R} \right) = MA^{K_e} \left( \mathbf{R}, \mathbf{R}, \mathbf{R} \right)
\end{equation}
where $\mathbf{R} = \left[ \mathbf{r}_1 , \cdots, \mathbf{r}_N \right]$ is a matrix of latent vectors.
ANP additionally adopts multi-headed \textit{cross-attention} in place of the $aggregate$ function in equation \eqref{eq:agg}:
\begin{equation}
\mathbf{r}_i^* = MA^{K_d} \left( \mathbf{x}_i^*, \mathbf{X}, \mathbf{R}_{attn} \right)
\end{equation}
where $\mathbf{X} = \left[ \mathbf{x}_1 , \cdots, \mathbf{x}_N \right]$ is a matrix of input vectors.
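The contrast between the sum aggregation of Eq.~\eqref{eq:sum_agg} and the query-dependent cross-attention aggregation can be seen in a short sketch. A single-head dot-product attention is used here for brevity instead of the multi-head version, and the data are random placeholders:

```python
import numpy as np

def cross_attention_aggregate(x_star, X, R):
    # r* is a weighted average of latent vectors, with weights given by the
    # similarity between the target input x* and the context inputs X
    scores = X @ x_star / np.sqrt(len(x_star))
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ R                       # (D,): target-specific latent vector

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 2))            # context inputs
R = rng.normal(size=(8, 16))           # self-attended latent vectors R_attn
r_sum = R.sum(axis=0)                            # NP: same latent for every target
r_star = cross_attention_aggregate(X[0], X, R)   # ANP: depends on the query x*
```

Unlike the sum, the cross-attention latent changes with the query, which is what lets ANP avoid the under-fitting behavior described above.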
The graphical representation of ANP architecture is illustrated in Figure \ref{fig:anp}.
\begin{figure}[t]
\begin{subfigure}{.44\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/enc.pdf}
\caption{Encoding Process}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.55\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/dec.pdf}
\caption{Decoding Process}
\label{fig:sub-second}
\end{subfigure}
\caption{The architecture of attentive neural process}
\label{fig:anp}
\end{figure}
\section{Probabilistic Reconstruction Framework for No-Data Gap in NAC DEMs} \label{sec:methods}
\subsection{Problem Formulation}
Primarily, we formulate the no-data gap reconstruction problem as a probabilistic inference problem.
Denote by $\mathcal{D} = \{\mathbf{x}_i=(\lambda_i, \varphi_i)\}_{i=1}^{L}$ the $L$ latitude-longitude pairs in the non-shadowed regions of the NAC DEM, and by $\{\mathbf{y}_i\}_{i=1}^{L}$ the elevation at each point.
Similarly, we denote the locations and elevations of the shadowed region by $\{\mathbf{x}^*_i\}_{i=1}^{T}$ and $\{\mathbf{y}^*_i\}_{i=1}^{T}$, respectively.
The goal of no-data gap reconstruction is to compute the probability distribution: $q \big( \mathbf{y}^* \big) = p \big( \mathbf{y}^* | \mathbf{x}^*, \{\mathbf{x}_i, \mathbf{y}_i\}_{i=1}^{L} \big)$.
However, it is computationally expensive to consider every elevation datum of a high-resolution NAC DEM.
Instead, we approximate the distribution as:
\begin{align}
& q \big( \mathbf{y}^* \big) \approx p \big( \mathbf{y}^* | \mathbf{x}^*, \{\mathbf{x}_i, \mathbf{y}_i\}_{i \in \mathcal{W}(\mathbf{x}^*)} \big) \\ \nonumber
& \mathcal{W}\big( \mathbf{x} =(\lambda, \varphi) \big) \\ \nonumber
& ~~~~ = \{ i \mid ~ |\lambda_i - \lambda| \le \frac{1}{2} w_\lambda, ~ |\varphi_i - \varphi| \le \frac{1}{2} w_\varphi \}
\end{align}
where $w_\lambda$ and $w_\varphi$ are user-defined window sizes.
As the inference model, the attentive neural process $Q_\phi(\cdot)$ is used to estimate the posterior distribution of the elevation at given target points.
To normalize the range of input values for effective computation, we use locations relative to the target point, rather than absolute locations, as NP inputs.
Note that since the encoding and decoding processes are both parameterized by the neural network in an amortized manner \cite{kingma2013auto, gershman2014amortized} as illustrated in equation \eqref{eq:dec} and \eqref{eq:enc}, we utilized only a single neural process model that can be applied to an arbitrary target region.
\subsection{Model Architecture}
\subsubsection{Encoder}
We built the encoder network from three neural networks: a location encoder, an elevation encoder, and a latent encoder.
The location encoder ($g_{l}$) and elevation encoder ($g_{e}$) linearly transform the location ($\mathbf{x}_i$) and elevation ($\mathbf{y}_i$) information into a latent space of dimension $D$:
\begin{align}
g_{l}(\mathbf{x}_i) & \equiv \mathbf{x}'_i = \mathbf{W}_l\mathbf{x}_i \\ \nonumber
g_{e}(\mathbf{y}_i) & \equiv \mathbf{y}'_i = \mathbf{W}_e\mathbf{y}_i
\end{align}
where $\mathbf{W}_l \in \mathbb{R}^{D \times 2}$ and $\mathbf{W}_e \in \mathbb{R}^{D \times 1}$ are learnable parameters.
Then the latent encoder ($g_{latent}$), a neural network ($\mbox{mlp}_e$) consisting of two fully connected layers with 1024 hidden units and ReLU activation, aggregates the transformed location and elevation information into a latent vector $\mathbf{r}_i$ of the same dimension $D$:
\begin{align}
g_{latent}(\mathbf{x}'_i, ~ \mathbf{y}'_i) \equiv \mathbf{r}_i = \mbox{mlp}_e(\mathbf{x}'_i + \mathbf{y}'_i)
\end{align}
Finally, as presented in Section \ref{sec:anp}, we aggregate latent vectors by using the 2-headed self-attention layer to compute $\mathbf{R}_{attn} \equiv \{\mathbf{r}'_i \}_{i=1}^N$.
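A minimal sketch of the three encoder components follows; the latent dimension and hidden width are reduced from the 1024-unit architecture above, and the weights are random placeholders rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8                                         # latent dimension (reduced for the sketch)
W_l = rng.normal(size=(D, 2))                 # location embedding W_l in R^{D x 2}
W_e = rng.normal(size=(D, 1))                 # elevation embedding W_e in R^{D x 1}
W1 = rng.normal(size=(D, 32))                 # mlp_e, first fully connected layer
W2 = rng.normal(size=(32, D))                 # mlp_e, second fully connected layer

def encode(x_i, y_i):
    x_emb = W_l @ x_i                         # g_l: embed (latitude, longitude)
    y_emb = W_e @ y_i                         # g_e: embed elevation
    h = np.maximum((x_emb + y_emb) @ W1, 0)   # sum the embeddings, then ReLU MLP
    return h @ W2                             # latent vector r_i in R^D

r = encode(np.array([0.1, -0.2]), np.array([1.5]))
```

The element-wise sum of the two embeddings before $\mbox{mlp}_e$ mirrors the $\mathbf{x}'_i + \mathbf{y}'_i$ term in the definition of $g_{latent}$.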
\subsubsection{Decoder}
The decoder network consists of a 2-headed cross-attention layer and one neural network.
The cross-attention layer estimates the latent vector for the target location, $\mathbf{r}^*$, using $\{\mathbf{x}'_i\}_{i=1}^N$ as keys, $\{\mathbf{r}_i'\}_{i=1}^N$ as values, and the transformed target location $g_l(\mathbf{x}^*)$ as the query.
Finally, a neural network ($\mbox{mlp}_d$) consisting of two fully connected layers with 1024 hidden units and ReLU activation decodes the latent vector into the elevation $\mathbf{y}^*$:
\begin{align}
\mathbf{y}^* = \mbox{mlp}_d(\mathbf{r}^*)
\end{align}
\subsection{Sparse Neural Processes}
Although the computational cost of ANP, $\mathcal{O}(ND^2)$, is much lighter than that of Gaussian processes, the dot-product operation in \eqref{eq:sdp} is still computationally and memory intensive.
As a result, the vanilla ANP algorithm cannot be applied to large window sizes.
Furthermore, using large amounts of information does not always lead to better results, since the model can easily over-fit, which leads to slow convergence.
As such, we propose \emph{sparse attentive neural processes} (SANPs), a self-supervised learning scheme that is not only applicable to large window sizes but also enhances model performance\footnote{In the same way, neural processes are extended to sparse neural processes (SNPs).}.
For each training iteration, we randomly select $B < L$ data points $\mathcal{D}'$ from $\mathcal{D}$.
For each data point $\mathbf{x}_i$ in $\mathcal{D}'$, we collect the DEM data of window size $\mathcal{S}_i = \{(\mathbf{x}_j, \mathbf{y}_j)\}_{j \in \mathcal{W}(\mathbf{x}_i)}$.
After this, we randomly sample $K \ll |\mathcal{S}_i|$ context points $\mathcal{O}_i$ from each $\mathcal{S}_i$ under the probability distribution:
\begin{equation}
p_{sample}(\mathbf{x}; \mathbf{x}_i)=\begin{cases}
\exp(-\frac{1}{\alpha} || \mathbf{x} - \mathbf{x}_i||_2), & \text{if $\mathbf{x} \ne \mathbf{x}_i$}.\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
where $\alpha$ is a user-defined sampling temperature.
Note that the context points are uniformly sampled for $\alpha=\infty$ while the top-$K$ closest points are selected for $\alpha=0$.
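The sampling distribution can be implemented directly by normalizing the exponential weights over the window; the window data below are random placeholders, and sampling without replacement is an implementation choice for this sketch:

```python
import numpy as np

def sample_context(X_window, x_i, K, alpha, rng):
    # p_sample(x; x_i) proportional to exp(-||x - x_i||_2 / alpha), zero at x_i itself
    d = np.linalg.norm(X_window - x_i, axis=1)
    w = np.where(d > 0, np.exp(-d / alpha), 0.0)   # exclude the target point
    w /= w.sum()
    # draw K distinct context indices under the distance-based weights
    return rng.choice(len(X_window), size=K, replace=False, p=w)

rng = np.random.default_rng(4)
X_window = rng.uniform(-0.25, 0.25, size=(500, 2))   # window around the target
idx = sample_context(X_window, X_window[0], K=50, alpha=0.1, rng=rng)
```

A large `alpha` makes the weights nearly uniform over the window, while a small `alpha` concentrates the samples on the closest points, matching the two limiting cases above.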
Finally, the model is trained to maximize the log-likelihood of the elevation values given the sampled context points:
\begin{equation}
J = \frac{1}{B} \sum_{i=1}^{B} \Big[ \log Q_\phi(\mathbf{y}_i \mid \mathbf{x}_i, g_\theta(\mathcal{O}_i)) \Big]
\end{equation}
We use the Adam \cite{kingma2014adam} algorithm as the gradient-descent optimizer.
The overall algorithm flow is described in Algorithm \ref{alg:dem}.
\subsection{Data Augmentation}
Data augmentation techniques have been widely applied in various machine learning models, especially in computer vision, where they effectively enhance model performance.
Data augmentation reduces generalization error, particularly when data are scarce, as in the DEM problem.
Therefore, we apply two data augmentation techniques, rotation and scaling, as described in Algorithm \ref{alg:da}.
\begin{algorithm}[t]
\SetAlgoLined
Initialize dataset $\mathcal{D}$ \\
Initialize ANP parameters $\theta, \phi$ \\
Initialize learning rate $\eta$ \\
\While{not converged}{
$J$ = 0 \\
Sample $B$ data points $\mathcal{D}'$ from $\mathcal{D}$ \\
\For{$\{ \mathbf{x}_i, \mathbf{y}_i \}$ in $\mathcal{D}'$}{
Gather DEM data $\mathcal{S}_i$ near $\mathbf{x}_i$ \\
Sample context points $\mathcal{O}_i$\\
Apply data augmentation to $\mathcal{O}_i$ \\
Collate training triplet $\{ \mathcal{O}_i, \mathbf{x}_i^*, \mathbf{y}_i\}$ \\
Compute the log-likelihood of the elevation at target regions: $p = Q_\phi(\mathbf{y}_i | \mathbf{x}_i, g_\theta(\mathcal{O}_i))$ \\
$J = J + \frac{1}{B} p$
}
Train $\theta$ and $\phi$ with gradient-descent algorithm
}
\caption{Sparse Attentive Neural Processes}
\label{alg:dem}
\end{algorithm}
\begin{algorithm}[t]
\SetAlgoLined
\textbf{Input}: context points $\mathcal{O}_i$ \\
Sample rotation angle $\theta_i$ in $\left[ 0, 2\pi \right]$ \\
Sample scaling factor $s_i$ in $\left[ 0.5, 1.5 \right]$ \\
\For{$\{\mathbf{x}_j=(\lambda_j, \varphi_j), \mathbf{y}_j \}$ in $\mathcal{O}_i$}{
$\lambda'_j \leftarrow \lambda_j \cos (\theta_i) - \varphi_j \sin (\theta_i) $ \\
$\varphi'_j \leftarrow \lambda_j \sin (\theta_i) + \varphi_j \cos (\theta_i) $ \\
$(\lambda_j, \varphi_j) \leftarrow (\lambda'_j, \varphi'_j)$ \\
$\mathbf{y}_j \leftarrow \mathbf{y}_j \times s_i$ \\
}
\caption{Data Augmentation}
\label{alg:da}
\end{algorithm}
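The augmentation of Algorithm \ref{alg:da} can be vectorized over all context points; note that both rotated coordinates must be computed from the pre-rotation values, which is why the sketch applies a rotation matrix rather than updating $\lambda_j$ in place:

```python
import numpy as np

def augment(X, Y, rng):
    # sample one rotation angle in [0, 2*pi) and one scaling factor in [0.5, 1.5]
    theta = rng.uniform(0, 2 * np.pi)
    s = rng.uniform(0.5, 1.5)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # rotate all (lambda, phi) locations at once, scale all elevations
    return X @ R.T, Y * s

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 2))   # context locations relative to the target
Y = rng.normal(size=(10, 1))   # context elevations
X_aug, Y_aug = augment(X, Y, rng)
```

Since the locations are relative to the target point, rotating about the origin preserves every distance to the target, so the sampling window is unchanged by the augmentation.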
\section{Experiments}
To demonstrate the efficiency of the proposed method, we compared the reconstruction performance with baseline methods on NAC DEM data at the Apollo 17 landing site (20.0$^{\circ}$N and 30.4$^{\circ}$E)
\footnote{The data is collected from \hyperlink{http://wms.lroc.asu.edu/lroc}{http://wms.lroc.asu.edu/lroc}}.
The data consist of 4.5M points (pixels) of size 0.005 $km$ $\times$ 0.005 $km$.
We held out 10k points each for validation and test before training the models.
Naturally, the validation and test data are not used in the training procedure.
Our model is implemented in \emph{PyTorch} \cite{NEURIPS2019_9015} on a single Nvidia V100 GPU, while the baseline algorithms are implemented in \emph{SciPy} \cite{jones2001scipy} and \emph{OpenCV-Python} \cite{opencv_library}.
The code will be available after publication.
\subsection{Model Specification}
For sparse neural process models, the window sizes $w_\lambda$ and $w_\varphi$ are set to 0.5 $km$ ($\sim$ 100 pixels) so that they adequately cover the size of the no-data gaps.
The hyperparameters (sampling size $K$, sampling temperature $\alpha$, and latent dimension $D$) are set to 100, 0.4 $km$, and 512, respectively, based on the ablation studies in Section \ref{sec:abl}. For the vanilla neural process models, the window sizes are adjusted to 0.05 $km$ ($\sim$ 10 pixels) to match the number of context points, and the latent dimension is equally set to 512.
Furthermore, we compare our model with the following baselines as well as neural process models:
\begin{itemize}
\item \textbf{Linear} is an extension of linear interpolation to 2-dimensional data, using the closest $2 \times 2$ pixels.
\item \textbf{Cubic} is an extension of cubic interpolation to 2-dimensional data, using the closest $4 \times 4$ pixels.
\item \textbf{Nearest} fills with the value of the nearest point.
\item \textbf{Navier Stokes} \cite{bertalmio2001navier} is one of the most well-known image inpainting algorithms based on fluid dynamics utilizing partial differential equations.
\item \textbf{Telea} \cite{telea2004image} is another image inpainting algorithm, based on the fast marching method.
\end{itemize}
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\hline
& NLL & MAE (m) & RMSE (m) \\ \hline
\bf SANP & \bf -0.5538 & \bf 0.3096 & \bf 0.4439 \\ \hline
SNP & -0.2147 & 0.4171 & 0.5793 \\ \hline
ANP & -0.3775 & 0.3418 & 0.4865 \\ \hline
NP & 0.8475 & 1.039 & 1.485 \\ \hline
Linear & - & 0.3586 & 0.5082 \\ \hline
Cubic & - & 0.4034 & 0.6026 \\ \hline
Nearest & - & 1.693 & 2.186\\ \hline
Navier Stokes & - & 2.306 & 33.26\\ \hline
Telea& - & 9.131 & 44.06 \\ \hline
\end{tabular}
\caption{Reconstruction Results}
\label{tab:recon}
\end{table}
\subsection{Reconstruction Results}
Reconstruction errors of the SANP are compared against the baseline models in terms of three evaluation metrics: negative log-likelihood (NLL), mean absolute error (MAE), and root mean square error (RMSE).
While MAE and RMSE only evaluate the accuracy of the reconstruction results, NLL further indicates how close the approximated \emph{probability distribution} is to the ground truth.
Thus, even if the evaluated MAE or RMSE is small, a large NLL indicates that the model is overconfident when predicting at uncertain locations.
As such, NLL represents the robustness of the model.
Note that NLL is only reported for neural process models since others are not probabilistic models.
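For a Gaussian predictive distribution, the three metrics can be computed as follows; this sketch assumes per-point predicted means and standard deviations, as produced by the decoder in Eq.~\eqref{eq:dec}:

```python
import numpy as np

def metrics(y_true, mu, sigma):
    # NLL of y_true under N(mu, sigma^2), averaged over points, plus MAE and RMSE
    nll = np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                  + 0.5 * ((y_true - mu) / sigma)**2)
    mae = np.mean(np.abs(y_true - mu))
    rmse = np.sqrt(np.mean((y_true - mu)**2))
    return nll, mae, rmse

# perfect predictions with std 0.5: zero MAE/RMSE, NLL set by the entropy term
nll, mae, rmse = metrics(np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                         np.array([0.5, 0.5]))
```

The entropy term $\frac{1}{2}\log(2\pi\sigma^2)$ is what penalizes overconfidence: shrinking $\sigma$ at a point where the error is not small inflates the squared-error term faster than the entropy term decreases.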
The results are shown in Table \ref{tab:recon}: SANP outperforms the baseline algorithms on every metric.
From this result, we find that the sparsification method helps the model to be more robust; this is consistent with previous findings that Dropout \cite{srivastava2014dropout} reduces the generalization error in many machine learning algorithms.
Moreover, sparsification considerably improves the performance of NP: the sum aggregation in \eqref{eq:sum_agg} generally does not perform well at large window sizes, because the latent information becomes vague as the number of data points increases, but the sampling technique resolves this over-smoothing problem and enhances the accuracy of the information.
To illustrate our model in more detail, we first show several sample reconstruction results on artificially generated images with no-data gaps, as shown in Figure \ref{fig:recon}.
As shown in the results, our model can predict the no-data gap successfully regardless of the size or shape of the voids.
Furthermore, we provide the reconstruction result on the NAC DEM at the Apollo 17 landing site, which contains no-data gaps, together with the uncertainty map estimated by the SANP model.
We found that the SANP model predicts relatively high uncertainty where the reconstruction appears questionable to the naked eye, indicating that the model is aware of its own prediction accuracy; this demonstrates the robustness of the model.
Results are illustrated in Figure \ref{fig:nodata}.
\begin{figure}[H]
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/r1.png}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/r2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/r3.png}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/r4.png}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/r5.png}
\caption{}
\end{subfigure}
\caption{Sample reconstruction results of the SANP for various shapes and sizes of no-data gap. Color represents the elevation value (unit: $m$). (Left) input context, (middle) ground-truth, (right) reconstruction.}
\label{fig:recon}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.65\textwidth}
\includegraphics[width=\linewidth]{fig/t.png}
\caption{Reconstructed map}
\end{subfigure} \hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{fig/ts.png}
\caption{Uncertainty map}
\end{subfigure}
\caption{Reconstruction results of the SANP on no-data gaps in NAC DEM data at the Apollo 17 landing site (20.0$^{\circ}$N and 30.4$^{\circ}$E). (a) Left image is NAC DEM, and right image is reconstructed map. Color represents the elevation value (unit: $m$). (b) Uncertainty map derived based on the standard deviation estimated by SANP. Color represents the standard deviation value (unit: $m$).}
\label{fig:nodata}
\end{figure}
\subsection{Ablation Studies} \label{sec:abl}
To determine the effect of hyperparameters on model performance, ablation studies of the SANP model are conducted for the three most important factors: sampling size, sampling temperature, and latent dimension.
For the sampling size and latent dimension, relative inference time is also compared, as it is directly related to the scalability of the method.
Primarily, the best sampling size is found to be 100, as shown in Table \ref{tab:ss}.
A small number of context points cannot provide enough information for reconstruction, while too large a number raises an over-smoothing problem and dramatically reduces reconstruction quality.
Empirically, we found that a sample size over 1000 leads to an out-of-memory (OOM) error on a single GPU with batch size 1024. Figure \ref{fig:ss} shows that, as expected, the inference time increases linearly with the sample size.
Secondly, experiments on the effect of varying the sampling temperature are conducted.
Intuitively, elevation values at points closer to the target point are expected to be more important.
As such, a model with too high a temperature fails to produce accurate reconstruction results.
When the temperature is too low, meanwhile, the model becomes too myopic.
As a result, the best sampling temperature is found to be 0.4, as shown in Table \ref{tab:st}.
Finally, the performance and inference time for different latent dimensions are analyzed as well.
As the latent dimension grows, the number of learnable parameters in the neural networks increases; therefore, a model with a larger dimension generally has more representation power.
However, a model with too large a dimension may suffer from over-fitting, and the inference time increases quadratically with the latent dimension.
In our experiments, 512 is found to be the most suitable dimension size as shown in Table \ref{tab:ed} and Figure \ref{fig:ed}.
\begin{table}[t]
\centering
\begin{tabular}{l||ccc}
\hline
$K$ & NLL & MAE & RMSE \\ \hline
50 & -0.3697 & 0.3613 & 0.5131\\ \hline
\bf 100 & \bf -0.5538 & \bf 0.3096 & \bf 0.4439 \\ \hline
200 & -0.1664 & 0.3534 & 0.5018 \\ \hline
500 & 0.5761 & 0.8871 & 1.301 \\ \hline
1000 & \multicolumn{3}{c}{\it{Out of Memory}} \\ \hline
\end{tabular}
\caption{Reconstruction Performance by Sampling Size}
\label{tab:ss}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{l||ccc}
\hline
$\alpha$ & NLL & MAE & RMSE \\ \hline
$\infty$ & 668.8 & 60.90 & 69.11 \\ \hline
8 & 0.7698 & 0.9701 & 1.329 \\ \hline
0.8 & -0.4188 & 0.3379 & 0.4859 \\ \hline
\bf 0.4 & \bf -0.5538 & \bf 0.3096 & \bf 0.4434 \\ \hline
0.16 & -0.4081 & 0.3391 & 0.4867 \\ \hline
0.08 & -0.3961 & 0.3362 & 0.4804 \\ \hline
0 & -0.3134 & 0.3702 & 0.5252 \\ \hline
\end{tabular}
\caption{Reconstruction Performance by Sampling Temperature}
\label{tab:st}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{l||ccc}
\hline
$D$ & NLL & MAE & RMSE \\ \hline
128 &-0.4328 & 0.3371 & 0.4810\\ \hline
256 &-0.4388 & 0.3295 & 0.4720 \\ \hline
\bf 512 &\bf -0.5538 & \bf 0.3096 & \bf 0.4439 \\ \hline
768 &-0.5033 & 0.3170 & 0.4550 \\ \hline
1024 &-0.3334 & 0.3512 & 0.4973 \\ \hline
\end{tabular}
\caption{Reconstruction Performance by Latent Dimension}
\label{tab:ed}
\end{table}
\begin{figure}[t]
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/ss.pdf}
\caption{}
\label{fig:ss}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/ed.pdf}
\caption{}
\label{fig:ed}
\end{subfigure}
\caption{The relative inference time by (a) sampling size and (b) latent dimension.}
\end{figure}
\section{Conclusion}
In this paper, we have proposed a probabilistic reconstruction framework for no-data gaps in lunar digital elevation maps.
Our framework is built upon attentive neural processes, a state-of-the-art stochastic process model, which enables the model not only to perform uncertainty analysis but also to be robust.
To account for the scalability issue, we have extended the neural process to the sparse attentive neural process (SANP).
SANP reduces the computational complexity of NP models and enhances reconstruction performance by alleviating over-fitting and over-smoothing problems.
We have evaluated our model on lunar NAC DEMs at the Apollo 17 landing site and shown that it outperforms competitive void-filling methods in terms of negative log-likelihood, mean absolute error, and root mean square error.
Furthermore, we have conducted extensive ablation studies to quantify the effect of critical hyperparameters on model performance.
In the future, the proposed method can be extended by combining NPs with convolutional neural networks (CNNs), powerful deep learning models used in various computer vision tasks.
\section*{References}
\section{Introduction}
Motivated by applications in Quantum Mechanics, Bauer and Bernard investigated in the recent contribution \cite{BB17} scaling limits $\lambda\rightarrow \infty$ and $\varepsilon\rightarrow 0$ for classes of stochastic differential equations of the form
\begin{align} \label{e:mainSDE}
dX_t=\frac{\lambda^2}{2}(\varepsilon \cdot b_1(X_t)-b_2(X_t))\,dt+\lambda \cdot \sigma(X_t)\,dB_t.
\end{align}
More precisely, in the case of constant $b_1>0$ and linear $b_2$ and $\sigma$, i.e. for stochastic differential equations of the form
\begin{align} \label{e:BB-SDE}
dX_t=\frac{\lambda^2}{2}(\varepsilon - b X_t)\,dt+\lambda \cdot X_t \,dB_t
\end{align}
Bauer and Bernard rigorously study the non-trivial scaling limit of the process $(X_t)_{t\geq 0}$ in the regime $\lambda \rightarrow \infty$ and $\varepsilon \rightarrow 0$ such that $\lambda^2\varepsilon^{b+1}$ is constant and conjecture
the validity of similar assertions for a larger class of stochastic differential equations of the type \eqref{e:mainSDE}. In this scaling limit the first hitting time of a level $z$ for the diffusion \eqref{e:BB-SDE} started at $x<z$ converges in distribution to a mixture of a point mass at zero and an exponentially distributed random variable. Related questions for a slightly different model have previously been physically motivated and then analyzed by Bauer, Bernard and Tilloy in \cite{BBT15} and \cite{BBT16}. Observe that the diffusion given by \eqref{e:BB-SDE} is scale invariant, a fact which allows specific arguments and simplifies several calculations. Bauer and Bernard in particular proved that in the scaling limit $\lambda \rightarrow \infty$ and $\varepsilon \rightarrow 0$ with $\lambda^2\varepsilon^{b+1} = J$ constant, the first hitting time of a level $z$ with start from $x<z$ converges in distribution to a convex combination of an exponentially distributed random variable and the trivial random variable which is identically zero. Using this result the authors also deduce a Poisson approximation for the number of hits above the level $z$. The analytic approach of Bauer and Bernard makes it possible to cover also certain types of stochastic differential equations which are different from \eqref{e:BB-SDE} but still share the property of scale invariance. Using non-rigorous arguments the authors of \cite{BB17} arrive at the conjecture that the results will carry over to a larger, rather general class of stochastic differential equations, and they provide certain natural but not always precisely formulated conditions under which the results are expected to hold. Our main aim is to provide a different, rather elementary approach to the results of Bauer and Bernard, which allows us to prove analogous results for general classes of stochastic differential equations that do not necessarily satisfy a form of scale invariance. 
In particular we can extend the results to a 'linearized version' of the stochastic differential equation describing the homodyne detection of Rabi oscillations. The resulting stochastic differential equation has a clear quantum mechanical background, which is described in more detail in \cite{BB17}. In fact we will mainly rely on classical methods from probability theory, such as Poisson approximation and some further, mainly basic, properties of diffusion processes. This is in contrast to the tools used by Bauer and Bernard, which are analytic, i.e. based on the analysis of differential equations and basic It\^o theory for diffusions. Apart from extending the validity of the results to a larger class of stochastic differential equations, we believe that our approach helps to put the results in a clear probabilistic perspective.
Let us stress that the results are related to known assertions about hitting times of large levels for diffusion processes, see e.g. \cite{M68} and \cite{BK98}. There the authors consider the behavior of the hitting time of a high level and deduce that in an appropriate scaling limit this hitting time is exponentially distributed. We want to stress that in the case of a non-scale-invariant diffusion it does not seem possible to directly use known theorems concerning the extreme value behavior of hitting times of large sets as given e.g. in \cite{M68} and \cite{BK98}. In the case of equation~\eqref{e:mainSDE} it is possible to translate the hitting of a fixed level $z$ when started from $\varepsilon$ into the question of hitting $z/\varepsilon$ with start in $1$. For this situation one can make direct use of the results in \cite{BK98} and of paragraph 2, section \Romannum{5} in \cite{M68}. For a start in a fixed point $x$ and for more general non-scale-invariant equations this does not seem possible. In any case, due to the connections to the theory of quantum systems under continuous measurement, we believe that our results and methods - which might not be that well known in the physics community - are of sufficiently broad interest and are useful for deriving results in the most interesting higher-dimensional situation.
The structure of the paper is the following: in Section 2 we introduce some essential notation and formulate the abstract version of our results, which are rigorously proved in Section 3. This abstract result is based on a cycle decomposition of the diffusion and the usual renewal analysis of the associated renewal process. In Section 4 we work through two classes of examples. The first class of stochastic differential equations we deal with are, in some sense, perturbations of equation~\eqref{e:BB-SDE} which are still not covered by the results of Bauer and Bernard. The second fundamental example is deduced from the mathematical description of a 'linearized' version of the homodyne detection of Rabi oscillations.
\section{Scaling limits of hitting times}
Let us give some basic definitions and notations. For $x>0$, we denote as $\P_x$ the probability measure for the diffusion process conditioned to start at $x$ and write $\mathbb{E}_x$ for the corresponding expectation. The hitting time for the process $(X_t)_{t\ge 0}$ of some level $z>0$ will be denoted as
\[T_z = \inf\lbrace t\ge 0\mid X_t = z\rbrace.\]
We require the existence of two functions $0<\alpha(\varepsilon) < \beta(\varepsilon)$ for small $\varepsilon>0$ which are differentiable in $0$ with $\lim_{\varepsilon\downarrow 0} \beta(\varepsilon) = \lim_{\varepsilon\downarrow 0} \alpha(\varepsilon)=0$. Let $(X_t^1)_{t\ge 0}$ denote the process $(X_t)_{t\geq 0}$ with $\lambda=1$ and $(\widetilde{X}^1_t)_{t\ge 0}$ the process obtained from $(X_t^1)_{t\ge 0}$ by conditioning on $\lbrace T_{\alpha(\varepsilon)} < T_z \rbrace$ via a $h$\=/transform in the sense of Doob (see e.g. \cite{P95}, chapter 4, section 1). We introduce the following quantities:
\begin{align*}
\widetilde{\sigma}_0=0,\,\widetilde{\tau}_1=\inf\lbrace t\geq 0\mid X_t^1=\beta(\varepsilon)\rbrace,\,\widetilde{\sigma}_1=\inf\lbrace t\geq \widetilde{\tau}_1\mid \widetilde{X}_t^1=\alpha(\varepsilon)\rbrace;
\end{align*} furthermore, for $i\ge 2$:
\begin{align*}
\widetilde{\tau}_i=\inf\lbrace t\geq \widetilde{\sigma}_{i-1}\mid X_t^1=\beta(\varepsilon)\rbrace,\,\widetilde{\sigma}_i=\inf\lbrace t\geq \widetilde{\tau}_{i}\mid \widetilde{X}_t^1=\alpha(\varepsilon)\rbrace.
\end{align*}
\begin{figure}
\includegraphics{TIKZ4}
\caption{Illustration of the cycle decomposition given by the stopping times $\widetilde{\tau}_i$ and $\widetilde{\sigma}_i$}
\end{figure}
Starting at $\alpha(\varepsilon)$ we run the process $(X_t^1)_{t\ge 0}$ until it hits $\beta(\varepsilon)$ (observe that the conditioning event $\lbrace T_{\alpha(\varepsilon)} < T_z \rbrace$ has full probability as we start in $\alpha(\varepsilon)$); then we run the conditioned process $(\widetilde{X}_t^1)_{t\geq 0}$ starting in $\beta(\varepsilon)$ until we hit $\alpha(\varepsilon)$.
When started at $\beta(\varepsilon)$, the probability to hit $z$ before $\alpha(\varepsilon)$ will be denoted as
\[p_{\varepsilon,z} := \P_{\beta(\varepsilon)}(T_z < T_{\alpha(\varepsilon)}).\]
If a cycle means a piece of the diffusion path starting at $\alpha(\varepsilon)$, moving to $\beta(\varepsilon)$ and then returning to $\alpha(\varepsilon)$, then $p_{\varepsilon,z}$ describes the probability that level $z$ is hit during the cycle.
By (generalized) scaling limit we will mean the limiting process as $\lambda\to\infty$ and $\varepsilon\downarrow 0$ along the curve $\lambda^2 p_{\varepsilon,z} = J >0$. In particular, for the generalized scaling limit to be well defined it is required that $\varepsilon\downarrow 0$ implies $p_{\varepsilon,z}\to 0$. Let us now impose the following standing assumptions on the considered stochastic differential equation~(\ref{e:mainSDE}): \begin{enumerate}[label=(A\theenumi)]
\item There exists a (weak) solution to the SDE~(\ref{e:mainSDE}) in the sense of Definition~25.1 in \cite{B11}, which is unique in law.
\item The expected cycle length converges to some positive real number independent of $z$: \[\mathbb{E}_{\alpha(\varepsilon)}[\widetilde{\sigma}_1] \xrightarrow[\varepsilon\downarrow 0]{} \kappa^{-1} \in (0,\infty). \]
\item For small $\varepsilon>0$, the cycles have finite second moment uniformly in $\varepsilon$: \[\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\alpha(\varepsilon)} [\widetilde{\sigma}_1^2] < \infty. \]
\end{enumerate}
\begin{remark}
Technically, (A3) may be weakened to $\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\alpha(\varepsilon)}[\widetilde\sigma_1^{1+\rho}] < \infty$ for some $\rho>0$; that is, only the $(1+\rho)$-th moment is actually needed. The conditions (A1), (A2) and (A3) are rather natural and not too restrictive.
\end{remark}
With the help of a regeneration structure based on cycle decompositions we will show
\begin{proposition} \label{p:mainProp}
Assume that (A1), (A2) and (A3) are satisfied, then in the scaling limit $\lambda \rightarrow \infty$, $\varepsilon \rightarrow 0$ with $\lambda^2p_{\varepsilon,z}=J \in (0,\infty)$ we have
\begin{displaymath}
\lim_{scaling}\P_{\alpha(\varepsilon)}\bigl(T_z>T\bigr)= e^{-\kappa J T}.
\end{displaymath}
\end{proposition}
This result gives the almost exponential behavior of the hitting of a fixed level $z$, when started very close to zero. In order to deduce the result when started from a fixed level $0<x<z$ we assume the following conditions:
\begin{enumerate}[label=(B\theenumi)]
\item In the generalized scaling limit for any $z>0$ and $0<x<z$, under $\P_x$, \[T_{\alpha(\varepsilon)} \wedge T_z \xrightarrow[scaling]{\mathcal{D}} 0, \]
and the law of $T_{\alpha(\varepsilon)}$ under $\P_x(\cdot \mid T_{\alpha(\varepsilon)} < T_z)$ converges to the point mass in zero for any $z>0$.
\item Furthermore, the limit \[\P_x (T_{\alpha(\varepsilon)} < T_z) \xrightarrow[\varepsilon\to 0]{} \alpha_{x,z} \in (0,1) \]
exists for all $z>0$, $0<x<z$.
\end{enumerate}
\begin{remark}
Our assumptions (A1) -- (A3) and (B1) -- (B2) are natural and related, but not fully comparable, to the conditions formulated by Bauer and Bernard. We point out some similarities. Condition \romannum{9}) in \cite{BB17} essentially corresponds to (A3) and the assumption $p_{\varepsilon,z}\to 0$ is related to \romannum{2}). \romannum{1}) is encoded in the example below as (E2) and (E3).
\end{remark}
We are now ready to state our main result.
\begin{theorem}\label{t:mainThm}
Assume that the conditions (A1) to (A3), (B1) and (B2) are satisfied, then in the scaling limit $\lambda \rightarrow \infty$, $\varepsilon \rightarrow 0$ with $\lambda^2p_{\varepsilon,z}=J \in (0,\infty)$ the law of the hitting time $T_z$ when started at $0<x<z$ equals
\[
(1-\alpha_{x,z}) \,\delta_0 +
\alpha_{x,z} \Exp_{J\kappa}.
\]
\end{theorem}
The choice of the level one in $p_{\varepsilon,1}$ and in the definition of $q(z)$ below is of course rather arbitrary.
\begin{remark}
In the special case of equation~\eqref{e:BB-SDE} this result corresponds to Corollary~3 in \cite{BB17}.
\end{remark}
Theorem~\ref{t:mainThm} can be interpreted in the following way, which has also been observed in \cite{BB17}. If the diffusion process starts at the point $x$ and wants to reach level $z$ then there are two options: Either the process reaches level $z$ without coming close to zero and in the scaling limit this takes no time or it first reaches a neighborhood of zero. Once it has reached the neighborhood of $0$ it needs many trials to get up to level $z$ and each trial has low success probability (see e.g. \cite{B90}). The latter follows from the form of the stochastic differential equation; the drift is weak near zero and the diffusion is slowed down near zero. The proof of this result will exactly follow this picture and we will make this rigorous in the following section. \\
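This ``many trials, each with a small success probability'' picture admits a quick toy computation (a sketch with illustratively chosen parameter values, not the diffusion itself): if each cycle of the unscaled process takes on average $1/\kappa$ time units and crosses level $z$ with probability $p = J/\lambda^2$, then no crossing occurs up to rescaled time $T$ with probability roughly $(1-p)^{\kappa\lambda^2 T}$, which converges to $e^{-\kappa J T}$ as $\lambda\to\infty$.

```python
import math

# Toy computation behind the Poisson heuristic (illustrative values only):
# kappa - inverse of the mean cycle length,
# J     - the constant on the scaling curve lambda^2 * p = J,
# lam   - the time-scale factor lambda.
def survival_toy(T, kappa, J, lam):
    """P(no crossing up to rescaled time T) in the toy model."""
    p = J / lam**2                    # crossing probability per cycle
    n_cycles = kappa * lam**2 * T     # cycles completed up to rescaled time T
    return (1.0 - p) ** n_cycles

kappa, J, T = 2.0, 0.5, 1.5
limit = math.exp(-kappa * J * T)
for lam in (10.0, 100.0, 1000.0):
    print(lam, survival_toy(T, kappa, J, lam), limit)
```

As $\lambda$ grows, the toy survival probability approaches $e^{-\kappa J T}$, mirroring Proposition~\ref{p:mainProp}.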
The asymptotics of $p_{\varepsilon,z} \to 0$ as $\varepsilon\downarrow 0$ may depend on the value of $z$. We want to consider a $z$-free scaling limit. For $z\in(0,\infty)$ define \[
q(z) := \begin{cases}
1-\alpha_{z,1}, & z\le 1,\\
(1-\alpha_{1,z})^{-1} ,& z>1.
\end{cases}
\]
\begin{corollary}
Assume that all conditions (A1) to (A3), (B1) and (B2) are satisfied, then in the scaling limit $\lambda \rightarrow \infty$, $\varepsilon \rightarrow 0$ with $\lambda^2p_{\varepsilon,1}=J \in (0,\infty)$ the law of the hitting time $T_z$ when started at $0<x<z$ equals
\[
(1-\alpha_{x,z}) \,\delta_0 +
\alpha_{x,z} \Exp_{\kappa J/q(z)}.
\]
\end{corollary}
\begin{proof}
\begin{align*}
\lim\limits_{scaling} \P_x(T_z>T) &= \lim\limits_{scaling} \P_x(T_z(X^1) > \lambda^2T) \\
&= \lim\limits_{\varepsilon\downarrow 0} \P_x\left(T_z(X^1)\frac{p_{\varepsilon,z}}{JT} > q^{-1}(z) + \left(\frac{p_{\varepsilon,z}}{p_{\varepsilon,1}}-q^{-1}(z)\right) \right)\\
&= \lim\limits_{\substack{\lambda\to\infty\\\varepsilon\to 0\\ \lambda^2p_{\varepsilon,z} = J}} \P_x\left(\frac{T_z}{T}> q^{-1}(z) + \left(\frac{p_{\varepsilon,z}}{p_{\varepsilon,1}}-q^{-1}(z)\right) \right).
\end{align*}
Below we will show that the convergence
\begin{align}\label{e:claimLim}
\lim\limits_{\varepsilon\downarrow 0}\frac{p_{\varepsilon,z}}{p_{\varepsilon,1}} = \frac{1}{q(z)}
\end{align}
holds true. Using Theorem~\ref{t:mainThm} this shows
\[
\limsup_{scaling} \P_x(T_z>T) \le \alpha_{x,z} e^{-\kappa J T (q^{-1}(z)-\delta)}
\]
and
\[
\liminf_{scaling} \P_x(T_z>T) \ge \alpha_{x,z} e^{-\kappa J T (q^{-1}(z)+\delta)}
\]
for $\delta>0$ arbitrary, hence implying the assertion.
It remains to prove the claim~\eqref{e:claimLim}. By the strong Markov property
\[
p_{\varepsilon,1\vee z} = p_{\varepsilon, 1\wedge z} \cdot \P_{1\wedge z}(T_{1\vee z} < T_{\alpha(\varepsilon)}).
\]
It follows from condition (B2)
\[
\frac{p_{\varepsilon,z}}{p_{\varepsilon,1}} =
\begin{cases}
(1-\P_z(T_{\alpha(\varepsilon)} < T_1))^{-1}, & z\le 1,\\
1-\P_1(T_{\alpha(\varepsilon)} < T_z) ,& z>1
\end{cases}
\xrightarrow[\varepsilon\downarrow 0]{} q^{-1}(z).
\]
This gives the required assertion.
\end{proof}
\begin{remark}
In both examples worked out below the scaling limit relation $\lambda^2 p_{\varepsilon,1} = const$ is essentially (meaning up to some arbitrary positive multiplicative constant) equivalent to choosing the curve $\lambda^2 Z_{\varepsilon}=const$ which is used in \cite{BB17} in order to formulate the general conjecture. There, \[Z_\varepsilon := \int_0^\infty \frac{1}{x^4}\exp\left( -\frac{\varepsilon}{3} \frac{1}{x^3} + \frac{b}{2}\frac{1}{x^2} \right)\, dx\] denotes the total mass of some invariant measure, cf. condition \romannum{7}) in section 6.1 (main conjectures) in \cite{BB17}. Also, $q(z)$ in that article is the same as our $q(z)$ here if the limit in (B2) has the form as in the examples. Note that our main result corresponds to Conjecture B (\romannum{1}) and (\romannum{2}).
\end{remark}
\section{An embedded approximate Poisson process}
In \cite{BB17}, the distribution of the first hitting time $T_z$ is deduced by calculating the Laplace transform of $T_z$, i.e. the expectation
\begin{displaymath}
\mathbb{E}_x\bigl[e^{-s T_z}\bigr],\quad s >0,\,0<x<z
\end{displaymath}
making use of the fact that these Laplace transforms solve certain ordinary differential equations. Our approach has a somewhat different, more probabilistic flavor. We use the following rather classical strategy:
\begin{itemize}
\item Starting the diffusion near zero, we introduce stopping times, which decompose the path up to an arbitrary time $T$ into cycles.
\item During every cycle, the diffusion reaches with a small probability the level $z$.
\item Counting only the hits of level $z$ now up to a time $\lambda^2T$ results in an approximate Poisson process.
\end{itemize}
As mentioned above, we call a path from ${\alpha(\varepsilon)}$ to ${\beta(\varepsilon)}$ and back to ${\alpha(\varepsilon)}$, with $\lambda$ set equal to $1$, a cycle. If we now speed up the time scale, which is done by introducing the large time scale factor $\lambda$, we have many cycles in a time interval $[0,T]$, and in each cycle we hit the level $z$ with small probability. This is the standard situation where the Poisson heuristic should apply.
\subsection{A Thinned Renewal Process}
In order to motivate our approach we define the counting variable
\begin{displaymath}
{N}(T)=\max \lbrace i \in \mathbb{N}_0 \mid {\sigma}_i \leq T\rbrace,
\end{displaymath} where \begin{equation*}
{\sigma}_0=0,\,{\tau}_1=\inf\lbrace t\geq 0\mid X_t^1=\beta(\varepsilon)\rbrace,\,{\sigma}_1=\inf\lbrace t\geq {\tau}_1\mid X_t^1={\alpha(\varepsilon)}\rbrace,
\end{equation*}
\begin{equation*}
{\tau}_i=\inf\lbrace t\geq {\sigma}_{i-1}\mid X_t^1=\beta(\varepsilon)\rbrace,\,{\sigma}_i=\inf\lbrace t\geq {\tau}_{i}\mid X_t^1={\alpha(\varepsilon)}\rbrace.
\end{equation*}
The quantity ${N}(T)$ encodes the number of cycles completed up to time $T$ and we will use known results from renewal theory (see e.g. \cite{D05}, \cite{L76} and \cite{Y91}). For given $z>0$ we are actually not interested in the number of completed cycles up to time $T$ but in the number of cycles up to time $T$ which do cross the level $z$. Thus we have to delete those cycles which do not cross the level $z$, and we observe that this happens with probability $1-p_{\varepsilon,z}$.

As a first motivation we consider the thinned-rescaled point process obtained by retaining every point of ${N}$ with probability $p_{\varepsilon,z}$ independently of the other points and of the point process ${N}(T)$, and then replacing the retained point at time instant $t_i$ by a point at $\lambda^{-2}\cdot t_i$. Let us denote this counting process by
\begin{displaymath}
{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}(T),\quad T\geq 0.
\end{displaymath}
We observe that
\begin{equation*}
\begin{split}
{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}(T) = \xi_1+\dots +\xi_{{N}(\lambda^2T)},
\end{split}
\end{equation*}
where the random variables $\xi_1,\xi_2,\dots$ are independent and identically distributed with $\mathrm{P}(\xi_i=1)=p_{\varepsilon,z},\,\mathrm{P}(\xi_i=0)=1-p_{\varepsilon,z}$ and independent of the process $({N}(t))_{t\geq 0}$. This thinned counting process in fact converges to a Poisson process as can be deduced using standard results in the literature. Obviously, the independent thinning does not precisely describe what we are really interested in.
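The convergence of this thinned process can be checked in a quick Monte Carlo sketch (with an arbitrarily chosen cycle-length distribution and illustrative parameter values): we run i.i.d.\ cycles of mean length $1/\kappa=1$ up to the horizon $\lambda^2 T$, retain each completed cycle independently with probability $p=J/\lambda^2$, and estimate the probability that no point survives the thinning.

```python
import math
import random

random.seed(1)
kappa, J, T, lam = 1.0, 1.0, 1.5, 20.0  # Uniform(0.5,1.5) cycles have mean 1/kappa = 1
p = J / lam**2                           # retention probability per cycle
horizon = lam**2 * T                     # horizon on the original time scale

def no_retained_point():
    """One trial: True iff no completed cycle is retained before the horizon."""
    t = 0.0
    while True:
        t += random.uniform(0.5, 1.5)    # next cycle length
        if t > horizon:
            return True                  # horizon reached, nothing retained
        if random.random() < p:
            return False                 # this cycle is retained

trials = 5000
survival = sum(no_retained_point() for _ in range(trials)) / trials
target = math.exp(-kappa * J * T)
print(survival, target)                  # estimate is close to exp(-kappa*J*T)
```

The empirical probability of seeing no retained point up to rescaled time $T$ is close to $e^{-\kappa J T}$, the void probability of the limiting Poisson process.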
\subsection{Poisson Limits in the high noise regime}
We thus consider the probability that none of the cycles of our original process up to time $\lambda^2T$ has reached level $z$, i.e. we investigate
\begin{align*}
&\sum_{k=0}^{\infty}\P_{\alpha(\varepsilon)}\left({N}(\lambda^2T)=k,\forall 1\leq i\leq k:\sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z \right)\\
=&\sum_{k=0}^{\infty} \Bigl[ \P_{\beta(\varepsilon)}\bigl(T_{\alpha(\varepsilon)}<T_z\bigr)^k \\
&\qquad \times \P_{\alpha(\varepsilon)}\Bigl({N}(\lambda^2T)=k, \forall 1\leq i\leq k:\sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z\Bigr) \, \P_{\beta(\varepsilon)}\bigl(T_{\alpha(\varepsilon)}<T_z\bigr)^{-k} \Bigr].
\end{align*}
Let us observe that using results on the relation between conditioning and $h$\=/transforms we have for $k\ge 1$
\begin{align}\begin{split}\label{e:NandNTilde}
&\P_{\alpha(\varepsilon)}\biggl({N}(\lambda^2T)=k,\forall 1\leq i\leq k: \sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z\biggr) \, \P_{\beta(\varepsilon)}\bigl(T_{\alpha(\varepsilon)}<T_z\bigr)^{-k}\\
&=\P_{\alpha(\varepsilon)}\bigl(\widetilde{N}(\lambda^2T)=k\bigr),
\end{split}\end{align}
where the process $(\widetilde{N}(T))_{T\geq 0}$ is the counting process $\widetilde{N}(T)=\max\lbrace n\mid \widetilde{\sigma}_n<T\rbrace$. We need to stress that the involved quantities depend on $\varepsilon$ even though the notation does not make this explicit.
\begin{proposition} \label{p:LimTillCycle}
Assuming (A1) to (A3) hold, we have
\[
\lim_{scaling}\sum_{k=0}^{\infty}\P_{\alpha(\varepsilon)}\biggl({N}(\lambda^2T)=k,\forall 1\leq i\leq k : \sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z\biggr)= e^{-\kappa JT}.
\]
\end{proposition}
\begin{proof}
Defining \[\widetilde{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}(T):=\widetilde{\xi}_1+\dots +\widetilde{\xi}_{\widetilde{N}(\lambda^2T)}\] with $(\widetilde{\xi}_i)_{i\ge 1}$ being an independent family of Bernoulli distributed random variables with $\mathrm{P}(\widetilde{\xi}_1=1)= p_{\varepsilon,z}$ and independent of the counting process $\widetilde{N}(T)$ we notice using equation~(\ref{e:NandNTilde})
\begin{align*}
&\sum_{k=0}^{\infty}\P_{\alpha(\varepsilon)}\biggl({N}(\lambda^2T)=k,\forall 1\leq i\leq k : \sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z\biggr) \\
&= \sum_{k=0}^{\infty} \P_{\beta(\varepsilon)}(T_{\alpha(\varepsilon)} < T_z)^k \, \P_{\alpha(\varepsilon)}(\widetilde{N}(\lambda^2T)=k) = \mathrm{P}(\widetilde{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}(T) = 0).
\end{align*}
We observe that by standard results on Poisson approximation (see e.g. equation~(23) in \cite{Y91}) for every $T>0$
\begin{equation*}
\begin{split}
d_{TV}(\widetilde{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}(T),\Poi_{{\kappa} JT})\leq \frac{p_{\varepsilon,z}}{2\sqrt{1-p_{\varepsilon,z}}}+\mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| p_{\varepsilon,z}\widetilde{N}(\lambda^2T)-{\kappa}JT\bigr|\bigr].
\end{split}
\end{equation*}
Therefore it is sufficient to show the convergence
\begin{equation*}
\lim\limits_{scaling} \mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| p_{\varepsilon,z}\widetilde{N}(\lambda^2T)-{\kappa}JT\bigr|\bigr] =0.
\end{equation*}
With $\kappa_\varepsilon := 1/\mathbb{E}_{\alpha(\varepsilon)}[\widetilde{\sigma}_1]$ we obtain
\begin{align*}
&\mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| p_{\varepsilon,z}\widetilde{N}(\lambda^2T)-{\kappa}JT\bigr|\bigr]
= J\,\mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| \lambda ^{-2} \widetilde{N}(\lambda^2T) -\kappa T \bigr|\bigr] \\
&\le J\,\mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| \lambda ^{-2} \widetilde{N}(\lambda^2T) -\kappa_\varepsilon T \bigr|\bigr] + J\,|\kappa-\kappa_\varepsilon|\cdot T.
\end{align*}
The convergence $|\kappa -\kappa_\varepsilon|\to 0$ is a reformulation of (A2), and due to (A2) together with (A3) we can apply a suitable version of the uniform renewal theorem, such as Theorem~10 in \cite{L76}, in order to conclude \[
\lim_{scaling} \mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| \lambda ^{-2} \widetilde{N}(T\lambda^2) -\kappa_\varepsilon T \bigr|\bigr] \le \lim_{\lambda\to\infty} \sup_{\frac{z}{4}>\varepsilon>0} \mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| \lambda ^{-2} \widetilde{N}(T\lambda^2) -\kappa_\varepsilon T \bigr|\bigr]=0.
\] This finishes the proof.
\end{proof}
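For orientation, the Poisson-approximation bound used in the preceding proof can be tested numerically in the degenerate case of a deterministic cycle count $\widetilde N(\lambda^2 T)=n$: taking $\kappa JT = np$, the second term vanishes and the bound reduces to $d_{TV}(\mathrm{Bin}(n,p),\Poi_{np})\le p/(2\sqrt{1-p})$. The following sketch (illustrative parameter values) evaluates both sides:

```python
import math

def dtv_binomial_poisson(n, p):
    """Total variation distance between Bin(n, p) and Poi(n*p),
    computing both pmfs iteratively to avoid huge factorials."""
    lam = n * p
    poi = math.exp(-lam)        # P(Poi = 0)
    binom = (1.0 - p) ** n      # P(Bin = 0)
    total = abs(binom - poi)
    for k in range(1, n + 1):
        poi *= lam / k                            # Poi pmf recursion
        binom *= (n - k + 1) / k * p / (1.0 - p)  # Bin pmf recursion
        total += abs(binom - poi)
    return total / 2            # Poisson mass above n is negligible here

n, p = 200, 0.01
bound = p / (2 * math.sqrt(1 - p))
print(dtv_binomial_poisson(n, p), bound)
```

The computed distance stays below the bound, as it should.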
One last step remains, since we have not yet reached exactly what we want. In order to describe the event $\lbrace T_z>T\rbrace$ we need to consider the event
\begin{displaymath}
\lbrace {N}(\lambda^2T)=k,\forall {\sigma}_k\leq t\leq T:X_t^1<z\rbrace,
\end{displaymath}
this means we also have to make sure that the level $z$ has not been hit during the cycle started before time $T$ but not completed before this time.
\begin{proof}[Proof of Proposition~\ref{p:mainProp}]
From the fact that $\lambda^2 T \in [\sigma_{N(\lambda^2T)},\sigma_{N(\lambda^2T)+1})$ we see \[
\P_{\alpha(\varepsilon)}(T_z>T) \in \left[\P_{\alpha(\varepsilon)}\bigl(T_z(X^1)>\sigma_{N(\lambda^2T)+1}\bigr),\P_{\alpha(\varepsilon)}\bigl(T_z(X^1)>\sigma_{N(\lambda^2T)}\bigr)\right]
\]
and by the previous Proposition~\ref{p:LimTillCycle} the upper bound has the asserted scaling limit. For the lower bound, we may define \[\widetilde{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}^+(T):=\widetilde{\xi}_1^++\dots +\widetilde{\xi}_{\widetilde{N}(\lambda^2T)+1}^+\] with $(\widetilde{\xi}_i^+)_{i\ge 1}$ being an independent family of Bernoulli distributed random variables with $\mathrm{P}(\widetilde{\xi}_1^+=1)= p_{\varepsilon,z}$, independent of the counting process $\widetilde{N}(T)$, and repeat the argument from the proof of Proposition~\ref{p:LimTillCycle}:
\begin{align*}
&\sum_{k=0}^{\infty}\P_{\alpha(\varepsilon)}\biggl({N}(\lambda^2T)=k,\forall 1\leq i\leq k+1 : \sup_{\tau_{i}\leq t\leq \sigma_i}X_t^1 < z\biggr) \\
&= \sum_{k=0}^{\infty} \P_{\beta(\varepsilon)}(T_{\alpha(\varepsilon)} < T_z)^{k+1} \, \P_{\alpha(\varepsilon)}(\widetilde{N}(\lambda^2T)=k) = \mathrm{P}(\widetilde{\mathfrak{N}}_{p_{\varepsilon,z},\lambda}^+(T) = 0).
\end{align*} Then, in the scaling limit \[
\lim\limits_{scaling} \mathbb{E}_{\alpha(\varepsilon)}\bigl[\bigl| p_{\varepsilon,z}\bigl(\widetilde{N}(\lambda^2T)+1\bigr)-{\kappa}JT\bigr|\bigr] =0
\] still holds and the assertion is shown.
\end{proof}
Let us now start from a point $x>0$ and derive the law of $T_z$ with respect to $\P_x$. Starting at $x$ there are two cases to consider:
\begin{itemize}
\item The diffusion reaches ${\alpha(\varepsilon)}$ before hitting $z$.
\item The process hits $z$ before visiting ${\alpha(\varepsilon)}$.
\end{itemize}
\begin{proof}[Proof of Theorem~\ref{t:mainThm}]
\[
\P_x(T_z>T) = \P_x(T_{\alpha(\varepsilon)}\wedge T_z > T) + \P_x(T_{\alpha(\varepsilon)} \le T < T_z).
\]
From (B1) it follows, that the first summand vanishes in the scaling limit and writing
\[
\P_x(T_{\alpha(\varepsilon)} \le T < T_z) = \P_x(T_{\alpha(\varepsilon)}< T_z)\cdot \P_x (T_{\alpha(\varepsilon)} \le T < T_z \mid T_{\alpha(\varepsilon)} < T_z)
\]
we see by (B2) that the first factor of that product has the scaling limit
\[
\lim_{scaling}\P_x(T_{\alpha(\varepsilon)}< T_z) =\alpha_{x,z};
\]
in the examples below this limit takes the explicit form
\[
\alpha_{x,z}=\frac{\int_x^z \exp \left(-\int_y^z \frac{b_2(l)}{\sigma^2(l)} \,dl\right) \,dy}{\int_0^z \exp \left(-\int_y^z \frac{ b_2(l)}{\sigma^2(l)} \,dl\right) \,dy}.
\]
For the second factor, with notation $\widetilde{T}_z := T_z(\widetilde{X})$ an application of the strong Markov property at time $\widetilde{T}_{\alpha(\varepsilon)}$ leads to
\begin{align*}
&\P_x (T_{\alpha(\varepsilon)} \le T < T_z \mid T_{\alpha(\varepsilon)} < T_z) =\P_x(\widetilde{T}_{\alpha(\varepsilon)} \le T < \widetilde{T}_z) \\
&=\int\limits_{\lbrace \widetilde{T}_{\alpha(\varepsilon)} \le T\rbrace} \P_{\alpha(\varepsilon)} (T_z > T - \widetilde{T}_{\alpha(\varepsilon)}(\omega)) \,\P_x(d\omega)\\
& =\int\limits_{\lbrace \widetilde{T}_{\alpha(\varepsilon)} \le T\rbrace} \left(1-\P_{{\alpha(\varepsilon)}} (T_z \le T - \widetilde{T}_{\alpha(\varepsilon)}(\omega)) \right) \,\P_x(d\omega)\\
&= \P_x (\widetilde{T}_{\alpha(\varepsilon)} \le T)-\int\P_{\alpha(\varepsilon)} (\widetilde{T}_{\alpha(\varepsilon)}(\omega) + T_z \le T ) \,\P_x(d\omega).
\end{align*}
The first summand has scaling limit $1$ and the integral may be seen as a probability of the convolution \begin{align*}
&\int\P_{\alpha(\varepsilon)} (\widetilde{T}_{\alpha(\varepsilon)}(\omega) + T_z \le T ) \,\P_x(d\omega) = \left[ \left(\P_x \circ(\widetilde{T}_{\alpha(\varepsilon)})^{-1} \right) \ast \left(\P_{\alpha(\varepsilon)} \circ (T_z)^{-1}\right) \right] ([0,T]).
\end{align*} Due to the independence, the characteristic function (as a mapping of $s$) is the product \[\mathbb{E}_x [e^{is\widetilde{T}_{\alpha(\varepsilon)}}] \cdot \mathbb{E}_{\alpha(\varepsilon)}[e^{isT_z}],\] and since the first factor has scaling limit $1$ by (B1), we finish the proof by recalling Proposition~\ref{p:mainProp}.
\end{proof}
\section{Examples}
In this section we present two important classes of examples which illustrate our approach. The second example is motivated by a specific quantum mechanical situation.
\begin{remark}
The formal generator associated to our SDE is given by
\begin{equation*}
L:= \frac{\lambda^2}{2}\sigma^2(x)\frac{d^2}{dx^2}+\frac{\lambda^2}{2}\bigl(\varepsilon b_1(x)-b_2(x)\bigr)\frac{d}{dx}.
\end{equation*}
The scale function $s$ (up to multiplicative constants), defined by the relation
\[
\P_x(T_R < T_r) = \frac{s(x)-s(r)}{s(R)-s(r)}
\]
for $0<r<x<R$ is given by
\[
s(x) = \int_c^x \exp \left(-\int_c^y \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} \,dl\right) \,dy = \int_c^x 1/p_c(y) \,dy.
\]
The speed measure is
\begin{align*}
m(dx) = \frac{2}{\lambda^2 \sigma^2(x)s'(x)} \,dx = \frac{2p(x)}{\lambda^2 \sigma^2(x)} \, dx = 2r(x) \, dx.
\end{align*}
The generator can be written in divergence form as
\begin{equation}\label{e:div-form}
Lu(x) = \frac{1}{2r(x)} \frac{d}{dx} \left(p(x) \frac{du}{dx} (x)\right).
\end{equation}
For more details we refer to standard books on stochastic processes such as e.g. \cite{B11}.
\end{remark}
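As a quick sanity check, the equality of the two forms of $L$ can be verified numerically (a sketch with illustratively chosen coefficients $b_1\equiv 1$, $b_2(x)=b x$, $\sigma(x)=x$, base point $c=1$, $\lambda=1$, and test function $u(x)=\sin x$; for these coefficients $p(x)=\exp(\varepsilon(1-1/x))\,x^{-b}$):

```python
import math

# Check that L u computed directly agrees with the divergence form
# (1/(2r)) d/dx (p u'), for the assumed model coefficients above.
eps, b = 0.3, 1.2

def p(x):  # p(x) = exp( int_1^x (eps*b1 - b2)/sigma^2 dl )
    return math.exp(eps * (1.0 - 1.0 / x)) * x**(-b)

def Lu_direct(x):
    u1, u2 = math.cos(x), -math.sin(x)
    return 0.5 * x**2 * u2 + 0.5 * (eps - b * x) * u1

def Lu_divergence(x, h=1e-5):
    flux = lambda y: p(y) * math.cos(y)       # p * u'
    d_flux = (flux(x + h) - flux(x - h)) / (2 * h)  # central difference
    r = p(x) / x**2                           # r = p / (lambda^2 sigma^2)
    return d_flux / (2 * r)

x = 0.7
print(Lu_direct(x), Lu_divergence(x))
```

Both evaluations agree up to the finite-difference error, reflecting the identity $\frac{1}{2r}(pu')' = \frac{\sigma^2}{2}u'' + \frac{1}{2}(\varepsilon b_1 - b_2)u'$.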
\subsection{Asymptotic linear stochastic differential equations}
\begin{enumerate}[label=(E\theenumi)]
\item Let $b_1$ be a positive continuously differentiable mapping from $[0,\infty)$ to $(0,\infty)$, uniformly bounded away from $0$ and from above, i.e. \[0< a_- := \inf b_1,\quad \sup b_1 =: a_+ < \infty;\] in particular, $a:= b_1(0) >0$.
\item Let $b_2$ be a nonnegative twice continuously differentiable function on $[0,\infty)$ with $b_2(0)=0$ and $b:= b_2'(0) > 0$.
\item Let $\sigma: [0,\infty) \to [0,\infty)$ be twice continuously differentiable with \begin{enumerate}[leftmargin=1.0cm]
\item $\sigma(x) = 0 \Leftrightarrow x=0$,
\item $\sigma := \sigma'(0) > 0$.
\end{enumerate}
\end{enumerate}
This example class can be viewed as generalization of the specification
\begin{equation}\label{e:scaleinv}
b_1(x) := 1,\qquad b_2(x) := b\cdot x, \qquad \sigma(x) := x
\end{equation}
in the sense that at the origin the coefficients exhibit the same behavior. Note, that in the situation of \eqref{e:scaleinv} a strong form of scale invariance holds, i.e. $Y_t := X_t/\varepsilon$ fulfills the SDE
\[
dY_t = \frac{\lambda^2}{2}\left(1-b\cdot Y_t\right) \, dt + \lambda \cdot Y_t \, dB_t
\]
making it plausible to choose $\alpha(\varepsilon) $ and $\beta(\varepsilon) $ of linear order. Our next goal is to perform the needed calculations for showing (A2), (A3), (B1) and (B2) where we set $\alpha(\varepsilon):=\varepsilon$ and $\beta(\varepsilon) := 2\varepsilon$.
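For completeness, the asserted equation for $Y_t = X_t/\varepsilon$ follows directly from \eqref{e:mainSDE} with the coefficients from \eqref{e:scaleinv}:
\begin{align*}
dY_t = \frac{1}{\varepsilon}\,dX_t
= \frac{\lambda^2}{2\varepsilon}\bigl(\varepsilon\cdot 1 - b X_t\bigr)\,dt + \frac{\lambda}{\varepsilon}\, X_t\,dB_t
= \frac{\lambda^2}{2}\bigl(1 - b\, Y_t\bigr)\,dt + \lambda\, Y_t\,dB_t .
\end{align*}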
\begin{remark} \label{r:taylor}
By Taylor's theorem, there are $M>0$ and $\delta_0$ with $0<\delta_0<(a\wedge b \wedge \sigma^2)/(2M)$ such that
\begin{align*}
|b_1(x)-a| \le M x,\, |b_2(x)-bx| \le M x^2,\, |\sigma^2(x)-\sigma^2x^2| \le M x^3
\end{align*}
for all $0\le x \le \delta_0$.
\end{remark}
To verify $p_{\varepsilon,z}\xrightarrow{\varepsilon\downarrow 0} 0$ we choose $\delta_0>0$ so that the inequalities in Remark~\ref{r:taylor} above hold for $x\le \delta_0$, set $\delta:= \delta_0 \wedge z/2$ and write
\begin{align}\label{e:p_ez}
p_{\varepsilon,z}= \P_{2\varepsilon}(T_z<T_\varepsilon) = \frac{\int_\varepsilon^{2\varepsilon} 1/p_\delta(y) \, dy}{\int_\varepsilon^{z} 1/p_\delta(y) \, dy}.
\end{align}
Then the numerator tends to $0$ as
\begin{align} \label{e:p_ezNum}
\int_\varepsilon^{2\varepsilon} 1/p_\delta(y) \, dy \le \int_\varepsilon^{2\varepsilon} \exp\left(3\,\varepsilon\, a/\sigma^2 \cdot (1/y-1/\delta)\right) (y/\delta)^{b/(3\sigma^2)} \, dy \to 0,
\end{align}
whereas the denominator does not vanish:
\begin{align} \label{e:p_ezDen}
\int_\varepsilon^{z} 1/p_\delta(y) \, dy
&\ge \int_\delta^{z} \exp\left(-\delta \int_\delta^y \frac{b_1(l)}{\sigma^2(l)}\, dl\right) \exp\left(\int_\delta^y \frac{b_2(l)}{\sigma^2(l)} \,dl\right)\, dy >0.
\end{align}
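This decay can also be observed numerically. The following sketch assumes the model coefficients $b_1\equiv 1$, $b_2(x)=bx$, $\sigma(x)=x$, for which $1/p_\delta(y) = \exp(\varepsilon(1/y-1/\delta))\,(y/\delta)^{b}$, and evaluates the ratio \eqref{e:p_ez} by a simple midpoint rule:

```python
import math

# Model coefficients (assumed for illustration): b1 = 1, b2(x) = b*x,
# sigma(x) = x, so that 1/p_delta(y) = exp(eps*(1/y - 1/delta))*(y/delta)**b.
b, z = 1.0, 1.0
delta = z / 2            # delta = delta_0 wedge z/2 in the text

def inv_p(y, eps):
    return math.exp(eps * (1.0 / y - 1.0 / delta)) * (y / delta) ** b

def midpoint(f, lo, hi, n=2000):
    h = (hi - lo) / n    # composite midpoint rule
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

def p_eps_z(eps):
    # ratio of scale-function integrals from the displayed formula
    num = midpoint(lambda y: inv_p(y, eps), eps, 2 * eps)
    den = midpoint(lambda y: inv_p(y, eps), eps, z)
    return num / den

for eps in (0.1, 0.01, 0.001):
    print(eps, p_eps_z(eps))   # decreases towards 0, roughly like eps**(b+1)
```

The numerator shrinks like $\varepsilon^{b+1}$ while the denominator stays bounded away from zero, in line with \eqref{e:p_ezNum} and \eqref{e:p_ezDen}.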
In order to prove the validity of (A2) we investigate $\mathbb{E}_\varepsilon[\widetilde{\sigma}_1]$ for small $\varepsilon>0$. As preparation, and for later use, we collect some explicit estimates.
\begin{lemma} \label{l:intEst}
The following assertions are true:
\begin{itemize}
\item[a)] For $0<y\le w< \delta_0/\varepsilon$ we have the estimates
\begin{align*}
\varepsilon^2\frac{r(y\varepsilon)}{p(w\varepsilon)} \, \begin{matrix}
\ge\\
\le
\end{matrix} \, & \frac{1}{\sigma^2 \cdot y^2 \pm My^3\varepsilon}\left( \frac{\sigma^2 \mp Mw\varepsilon}{\sigma^2 \mp My\varepsilon} \cdot \frac{y}{w}\right)^{ \pm \varepsilon M(a+\sigma^2)/\sigma^4} \\
&\exp\left( \frac{a}{\sigma^2} (1/w - 1/y) \right)
\left(\frac{w}{y}\right)^{b/\sigma^2} \left(\frac{ \sigma^2 \pm M w\varepsilon}{\sigma^2 \pm My\varepsilon}\right) ^{\mp (b/\sigma^2+1)}
\end{align*}
\item[b)] For $1 < w\le y < \delta/\varepsilon $ with $\delta:= \delta_0 \wedge z/2$ the same estimates hold with $\pm$ and $\mp$ interchanged, except for the first $\pm$ in the denominator.
\item [c)] For $0<y\le w<\delta_0/\varepsilon$ and for $1 < w\le y < \delta/\varepsilon $ we have
\begin{displaymath}
\lim_{\varepsilon \rightarrow 0}\varepsilon^2\frac{r(y\varepsilon)}{p(w\varepsilon)}=\frac{1}{\sigma^2}e^{\frac{a}{\sigma^2}(1/w-1/y)}\frac{w^{b/\sigma^2}}{y^{b/\sigma^2+2}}.
\end{displaymath}
\end{itemize}
\end{lemma}
\begin{proof}
In order to prove assertion a) we use Remark~\ref{r:taylor} in the case $y\le w$ and conclude
\begin{align*}
\frac{r(y\varepsilon)}{p(w\varepsilon)} &= \frac{1}{\sigma^2(y\varepsilon)}\exp\left(\varepsilon \int_{w\varepsilon}^{y\varepsilon} \frac{b_1(l)}{\sigma^2(l)} \,dl \right) \exp\left( \int_{y\varepsilon}^{w\varepsilon} \frac{b_2(l)}{\sigma^2(l)} \,dl \right) \\
& \, \begin{matrix}
\ge\\
\le
\end{matrix} \, \frac{1}{\sigma^2 \cdot (y\varepsilon)^2 \pm M(y\varepsilon)^3}\exp\left(\varepsilon \int_{w\varepsilon}^{y\varepsilon} \frac{a\pm Ml}{\sigma^2 l^2 \mp Ml^3} \,dl \right) \exp\left( \int_{y\varepsilon}^{w\varepsilon} \frac{b\mp Ml}{\sigma^2 l \pm Ml^2} \,dl \right).
\end{align*}
An application of partial fraction decomposition allows us to evaluate the integrals explicitly and gives the estimates stated in assertion a).
The proof of b) is completely analogous and assertion c) follows immediately from a) and b).
\end{proof}
Since in (A2) we consider the process $X_t = X_t^1$ with parameter $\lambda=1$, we now write $T_z$ for $T_z(X^1)$. By the strong Markov property, \[\mathbb{E}_{\alpha(\varepsilon)}[\widetilde{\sigma}_1] = \mathbb{E}_{\alpha(\varepsilon)}[T_{\beta(\varepsilon)}] + \mathbb{E}_{\beta(\varepsilon)}[T_{\alpha(\varepsilon)} \mid T_{\alpha(\varepsilon)} < T_z] = \mathbb{E}_{\alpha(\varepsilon)}[T_{\beta(\varepsilon)}] + \mathbb{E}_{\beta(\varepsilon)}[\widetilde{T}_{\alpha(\varepsilon)}] \] allowing us to handle both summands separately.
\begin{proposition}[implying (A2)] Let $0<\alpha<\beta$ be arbitrary and (by an abuse of notation) set $\alpha(\varepsilon)=\alpha\varepsilon$ and $\beta(\varepsilon)=\beta\varepsilon$. The expected time of going from $\alpha\varepsilon$ to $\beta\varepsilon$ and back again without hitting $z$ is well behaved in the sense that
\begin{align*}
&\lim\limits_{\varepsilon\downarrow 0} \Bigl(\mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}] + \mathbb{E}_{\beta\varepsilon}[\widetilde{T}_{\alpha\varepsilon}]\Bigr) = \lim\limits_{\varepsilon\downarrow 0} \Bigl(\mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}] + \mathbb{E}_{\beta\varepsilon}[T_{\alpha\varepsilon}]\Bigr) \\
&= \frac{2}{\sigma^2} \bigg[ \int_0^\infty \int_\alpha^\beta \exp\left( \frac{a}{\sigma^2} (1/w - 1/y) \right) \frac{w^{b/\sigma^2}}{y^{b/\sigma^2+2}} \,dw \,dy \bigg] \in (0,\infty).\end{align*}
\end{proposition}
\begin{proof}
We observe that for non-negative bounded and continuous functions $f$ we will have \begin{align}\label{e:exAsGreen}\mathbb{E}_{\alpha\varepsilon}\biggl[\int_0^{T_{\beta\varepsilon}} f(X_s) \, ds \biggr] = \int_0^{\beta\varepsilon} g(\alpha\varepsilon,y) f(y) r(y) \,dy ,\end{align} where $g$ denotes the Green kernel of $L$. In order to determine the Green kernel we calculate two solutions $u$ and $v$ of $Lw=0$:
\begin{itemize}
\item First, the constant function $u\equiv1$ is a solution; note that $u$ belongs to $L^2(r(x)\,dx)$.
\item $v(x):=\int_x^{\beta\varepsilon}\frac{1}{p(w)}\,dw$ solves $Lv=0$ with the additional property that ${v(\beta\varepsilon)=0}$.
\end{itemize}
Therefore we conclude that the Green kernel is given by
\begin{align} \label{e:green}
g(x,y)=\begin{cases}
\frac{2}{W(v,u)}v(x)u(y)\quad &\text{if $x\geq y$}\\
\frac{2}{W(v,u)}u(x)v(y)\quad &\text{if $x < y$},
\end{cases}
\end{align}
where $W(v,u)=v\cdot pu'- u\cdot pv'= 1$ is the Wronskian determinant (cf. Theorem~13.21 in \cite{W03}).
Inserting $f\equiv 1$ in equation~(\ref{e:exAsGreen}) we conclude \begin{align*}
\mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}] & = 2 \varepsilon^2 \bigg[ \int_0^\alpha \int_\alpha^{\beta}\frac{r(y\varepsilon)}{p(w\varepsilon)}\,dw \,dy + \int_\alpha^{\beta} \int_y^{\beta} \frac{r(y\varepsilon)}{p(w\varepsilon)}\,dw \,dy \bigg].
\end{align*}
For $\varepsilon<\delta_0/\beta$, Lemma~\ref{l:intEst} part a) shows, using $ \sigma^2-M\beta\varepsilon >\sigma^2 - M\delta_0 > \sigma^2/2 $, that
\begin{align} \label{e:maj}
\frac{1}{y^2 \sigma^2/2 } \left( \frac{w}{y}\right)^{(a+\sigma^2)/(2\beta\sigma^2)} \exp\left( \frac{a}{\sigma^2} (1/w - 1/y) \right) \left(\frac{w}{y}\right)^{b/\sigma^2}
\end{align}
is majorizing the integrand. The majorant given in \eqref{e:maj} is integrable on the domain $D= (\alpha,\beta)\times (0,\alpha) \cup \lbrace(w,y) \mid \alpha < y < \beta,\, y \le w < \beta\rbrace$. Using dominated convergence and Lemma~\ref{l:intEst} part c) we conclude
\[
0 < \lim\limits_{\varepsilon\downarrow 0} \mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}] = \frac{2}{\sigma^2} \bigg[ \int_D \exp\left( \frac{a}{\sigma^2} (1/w - 1/y) \right) \frac{w^{b/\sigma^2}}{y^{b/\sigma^2+2}} \,d(w,y) \bigg] < \infty.
\]
We now turn to $\mathbb{E}_{\beta\varepsilon}[\widetilde{T}_{\alpha\varepsilon}]$. The generator of the diffusion process conditioned not to hit $z$ before hitting $\alpha\varepsilon$ can be calculated as an $h$-transform of $L$:
\[
L^h f = \frac{1}{2}\sigma^2(x) f'' + \left(\frac{1}{2}\left(\varepsilon b_1(x)-b_2(x)\right) + \sigma^2(x) \frac{h'(x)}{h(x)}\right) f',
\]
where
\begin{equation}\label{e:harm-hitting}
h(x) := \P_{x}(T_{\alpha\varepsilon} < T_z) = \int_x^z 1/{p}(y) \,dy \left/ \int_{\alpha\varepsilon}^z 1/{p}(y) \,dy .\right.
\end{equation}
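As a brief sanity check (not needed below), $h$ is indeed harmonic for $L$: differentiating \eqref{e:harm-hitting} gives
\[
h'(x) = -\frac{1}{p(x)} \left/ \int_{\alpha\varepsilon}^z 1/{p}(y) \,dy, \right.
\]
so that $p\,h'$ is constant; writing $L$ in divergence form as $Lf = \frac{1}{2r}\,(p f')'$ (cf.\ \eqref{e:div-form}) therefore yields $Lh = 0$.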
$L^h$ may be rewritten in divergence form as in \eqref{e:div-form} by letting \[p^h(x) = \exp \left(\int_c^x \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} + 2\frac{h'(l)}{h(l)} \,dl\right)\qquad \text{and} \qquad r^h(x) = \frac{p^h(x)}{ \sigma^2(x)}.
\]
Again using the corresponding Green's function we find
\begin{align}
\mathbb{E}_{\beta\varepsilon}[\widetilde{T}_{\alpha\varepsilon}] \label{e:eDownCross}&=2\varepsilon^2\bigg[\int_\alpha^{\beta}\int_\alpha^y\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)}\,dw \,dy + \int_{\beta}^{z/\varepsilon}\int_\alpha^{\beta}\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)}\,dw \,dy
\bigg].
\end{align}
Here we have
\[\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} = \frac{r(y\varepsilon)}{p(w\varepsilon)} \left(\frac{h(y\varepsilon)}{h(w\varepsilon)}\right)^2.\]
Since on the integration domains the relation $w\le y$ holds and since by \eqref{e:harm-hitting} the harmonic function $h$ is non-increasing, we get \begin{equation}\label{e:h-quotient}
\left(\frac{h(y\varepsilon)}{h(w\varepsilon)}\right)^2 \leq 1.
\end{equation}
Similarly to (\ref{e:p_ezNum}) and (\ref{e:p_ezDen}) one sees that \[
\frac{h(y\varepsilon)}{h(w\varepsilon)} \ge \frac{h(y\varepsilon)}{h(\alpha\varepsilon)} = \frac{\int_{y\varepsilon}^z 1/{p}(l) \,dl}{\int_{\alpha\varepsilon}^z 1/{p}(l) \,dl} = 1- \frac{\int_{\alpha\varepsilon}^{y\varepsilon} 1/{p}(l) \,dl}{\int_{\alpha\varepsilon}^z 1/{p}(l) \,dl} \xrightarrow[\varepsilon\downarrow 0]{} 1.
\]
Applying Lemma~\ref{l:intEst} parts b) and c) in order to find an integrable majorant as well as the pointwise limit, we derive by Lebesgue's theorem
\[
\lim\limits_{\varepsilon\downarrow 0} 2\varepsilon^2\int_\alpha^{\beta}\int_\alpha^y\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)}dw \,dy = \frac{2}{\sigma^2} \bigg[ \int_\alpha^\beta \int_\alpha^y \exp\left( \frac{a}{\sigma^2} (1/w - 1/y) \right) \frac{w^{b/\sigma^2}}{y^{b/\sigma^2+2}} \,dw \, dy \bigg].
\]
We decompose the second integral in (\ref{e:eDownCross}) into two parts
\[
2\int_{\beta\varepsilon}^z \int _{\alpha\varepsilon}^{\beta\varepsilon} \frac{r(y)}{p(w)} \left(\frac{h(y)}{h(w)}\right)^2 \,dw\,dy = 2(I_1 + I_2),
\]
where
\begin{align*}
I_1 &:= \int_{\beta\varepsilon}^\delta \int _{\alpha\varepsilon}^{\beta\varepsilon}\frac{r(y)}{p(w)} \left(\frac{h(y)}{h(w)}\right)^2 \,dw\,dy,\\
I_2 &:= \int_{\delta}^z \int _{\alpha\varepsilon}^{\beta\varepsilon} \frac{1}{\sigma^2(y)}\exp\left( \int_{w}^{\delta} \frac{\varepsilon b_1(l)-b_2(l)}{\sigma^2(l)} \,dl \right)\\
&\qquad \qquad\quad\times \exp\left( \int_{\delta}^{y} \frac{\varepsilon b_1(l)-b_2(l)}{\sigma^2(l)} \,dl \right) \left(\frac{h(y)}{h(w)}\right)^2 \,dw\,dy
\end{align*}
with $\delta := \delta_0 \wedge z/2$. We can apply the argument leading to \eqref{e:h-quotient} and Lemma~\ref{l:intEst} b) to $I_1$, since for $\varepsilon < \delta/\beta \wedge \left( \sigma^4/[M(a+\sigma^2)] \right)$ we obtain the majorant
\[
\frac{1}{y^2 \sigma^2/2} \cdot \frac{2y}{w} \exp\left( \frac{a}{\sigma^2} \left(\frac{1}{w} - \frac{1}{y}\right)\right) \left(\frac{w}{y}\right)^{b/\sigma^2} \left(\frac{3}{2}\right)^{b/\sigma^2+1}.
\]
By part c) of Lemma~\ref{l:intEst} this results in
\[
I_1\xrightarrow{\varepsilon\downarrow 0} \int_{\beta}^\infty \int _\alpha^{\beta} \frac{1}{y^2\sigma^2 } \exp\left( \frac{a}{\sigma^2} \left(\frac{1}{w} - \frac{1}{y}\right)\right) \left(\frac{w}{y}\right)^{b/\sigma^2} \,dw\,dy.
\]
The statement $I_2\to 0$ can be deduced by bounding \[ I_2 \le \int_{\delta}^z \frac{1}{\sigma^2(y)}\exp\left( \int_{\delta}^{y} \frac{\varepsilon b_1(l)-b_2(l)}{\sigma^2(l)} \,dl \right) \,dy \cdot \int_{\alpha\varepsilon}^{\beta\varepsilon}\exp\left( \int_{w}^{\delta} \frac{\varepsilon b_1(l)-b_2(l)}{\sigma^2(l)} \,dl \right) \,dw \] as a product in which the first factor is monotone in $\varepsilon$ and bounded for $\varepsilon \le 1$ (e.g.\ by its value at $\varepsilon=1$), while the second one vanishes: \begin{align*}
&\int_{\alpha\varepsilon}^{\beta\varepsilon}\exp\left( \int_{w}^{\delta} \frac{\varepsilon b_1(l)-b_2(l)}{\sigma^2(l)} \,dl \right) \,dw \le\int_{\alpha\varepsilon}^{\beta\varepsilon}\exp\left( \varepsilon \int_{\alpha\varepsilon}^{\delta} \frac{a+Ml}{\sigma^2 l^2 - Ml^3} \,dl \right) \,dw \\
&= (\beta-\alpha) \varepsilon \left( \frac{\sigma^2-M\alpha\varepsilon}{\sigma^2-M\delta} \cdot \frac{\delta}{\alpha\varepsilon}\right)^{\varepsilon \frac{M(a+\sigma^2)}{\sigma^4}} \exp\left( \frac{a}{\sigma^2}\left(1/\alpha - \varepsilon/\delta \right) \right) \to 0,
\end{align*}
where we have again used \eqref{r:taylor} in the inequality.
\end{proof}
This gives the required property of the first moment of the cycle length. We will now establish the uniform boundedness of the second moment.
\begin{proposition}[A3] \label{p:e1A3}
The cycle lengths have a finite second moment, uniformly in $\varepsilon>0$:
\[
\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\alpha\varepsilon}[\widetilde{\sigma}_1^2] <\infty.
\]
\end{proposition}
\begin{proof}
We show the finiteness of both $\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\alpha\varepsilon} [(T_{\beta\varepsilon})^2]$ and $\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\beta\varepsilon}[\bigl(\widetilde{T}_{\alpha\varepsilon}\bigr)^2]$.\\
With the already calculated Green's kernel (\ref{e:green}), we use a generalized version of Kac's moments formula as stated in section 4 of \cite{LLL11} (see also \cite{FP99} for a general extensive analysis) to infer
\begin{align*}
\mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}^2] = 2\int_{0}^{\beta\varepsilon} g(\alpha\varepsilon,y) \, \mathbb{E}_y[T_{\beta\varepsilon}] \, r(y) \, dy.
\end{align*}
Together with \begin{align*}
\mathbb{E}_y[T_{\beta\varepsilon}] \le 2 \varepsilon^2 \int_{0}^{\beta} \int_{\widehat{y}}^{\beta} \frac{r(\widehat{y}\varepsilon)}{p(\widehat{w}\varepsilon)} \, d\widehat{w} \, d\widehat{y}
\end{align*}
this yields
\begin{align*}
\mathbb{E}_{\alpha\varepsilon}[T_{\beta\varepsilon}^2] &\le 4\varepsilon^2 \int_{0}^{\beta} \int_{y}^{\beta} \frac{r(y \varepsilon)}{p(w\varepsilon)} \mathbb{E}_{y\varepsilon}[T_{\beta\varepsilon}] \,dw\,dy \le 2 \biggl[ 2\varepsilon^2 \int_{0}^{\beta} \int_{y}^{\beta} \frac{r(y \varepsilon)}{p(w\varepsilon)} \,dw\,dy \biggr]^2 \\
&\xrightarrow[\varepsilon\downarrow 0]{} 2 \biggl[ \frac{2}{\sigma^2} \int_{0}^{\beta} \int_{y}^{\beta} \exp\left(\frac{a}{\sigma^2} \left(1/w-1/y\right)\right) \frac{w^{b/\sigma^2}}{y^{b/\sigma^2+2}} \,dw\,dy \biggr]^2,
\end{align*}
where we used Lemma~\ref{l:intEst} c) with majorant~(\ref{e:maj}) in the last step. To see the integrability of the majorant (\ref{e:maj}) on the extended integration domain, notice that by L'H\^opital's rule
\begin{align}\label{e:integrability}
\lim_{y\downarrow 0} \frac{\int_y^\beta \exp\left(\frac{a}{\sigma^2} \frac{1}{w}\right) w^{\frac{b}{\sigma^2}+\frac{a+\sigma^2}{2\beta \sigma^2}} \,dw}{ \exp\left(\frac{a}{\sigma^2} \frac{1}{y}\right) y^{\frac{b}{\sigma^2}+2+\frac{a+\sigma^2}{2\beta \sigma^2}}} = \lim_{y\downarrow 0} \frac{-\exp\left(\frac{a}{\sigma^2}\frac{1}{y}\right) y^{\frac{b}{\sigma^2} + \frac{a+\sigma^2}{2\beta \sigma^2}}}{\exp\left(\frac{a}{\sigma^2}\frac{1}{y}\right) y^{\frac{b}{\sigma^2} + \frac{a+\sigma^2}{2\beta \sigma^2}}(-\frac{a}{\sigma^2} + (\frac{b}{\sigma^2}+2+\frac{a+\sigma^2}{2\beta \sigma^2})y)}
\end{align}
and therefore, since the common factor $\exp\left(\frac{a}{\sigma^2}\frac{1}{y}\right) y^{\frac{b}{\sigma^2}+\frac{a+\sigma^2}{2\beta\sigma^2}}$ cancels,
\begin{displaymath}
\lim_{y\downarrow 0} \frac{\int_y^\beta \exp\left(\frac{a}{\sigma^2} \frac{1}{w}\right) w^{\frac{b}{\sigma^2}+\frac{a+\sigma^2}{2\beta \sigma^2}} \,dw}{ \exp\left(\frac{a}{\sigma^2} \frac{1}{y}\right) y^{\frac{b}{\sigma^2}+2+\frac{a+\sigma^2}{2\beta \sigma^2}}} = \frac{\sigma^2}{a}.
\end{displaymath}
The required integrability now follows because the integral on the left hand side of \eqref{e:integrability} is bounded near zero.
Estimating the quotients of $h$-functions by $1$, the second moment of the second cycle phase is bounded by
\begin{align}\label{e:secondmom-down-2}
\mathbb{E}_{\beta\varepsilon}[(\widetilde{T}_{\alpha\varepsilon})^2] \le 4 \biggl[ \int_{\alpha\varepsilon}^{\beta\varepsilon} \int_{\alpha\varepsilon}^{y} \frac{r(y)}{p(w)} \mathbb{E}_y[T_{\alpha\varepsilon}] \,dw\,dy + \int_{\beta\varepsilon}^{z} \int_{\alpha\varepsilon}^{\beta\varepsilon} \frac{r(y)}{p(w)} \mathbb{E}_y[T_{\alpha\varepsilon}] \,dw\,dy\biggr].
\end{align}
The first integral is readily seen to be finite uniformly in $\varepsilon>0$ by noting $\mathbb{E}_y[T_{\alpha\varepsilon}] \le \mathbb{E}_{\beta\varepsilon}[T_{\alpha\varepsilon}]$. The latter implies that
\begin{equation*}
\int_{\alpha\varepsilon}^{\beta\varepsilon} \int_{\alpha\varepsilon}^{y} \frac{r(y)}{p(w)} \mathbb{E}_y[T_{\alpha\varepsilon}] \,dw\,dy \leq 2 \mathbb{E}_{\beta\varepsilon}[T_{\alpha\varepsilon}]^2.
\end{equation*}
We have seen before that the last expectation remains bounded. To analyze the second integral on the right hand side of \eqref{e:secondmom-down-2}, we estimate on the integration domain $\alpha\varepsilon \le w \le y \le z$:
\begin{align}\label{e:estimatebyf}
\frac{r(y)}{p(w)} \le \sigma^{-2}(y) \exp \left( \varepsilon \int_{\alpha\varepsilon}^y \frac{b_1(l)}{\sigma^2(l)} \,dl \right) \le f(y) :=\begin{cases}
c_1 y^{-2} & \text{ for } y \le \delta:= \delta_0 \wedge z/2, \\
c_2 & \text{ for } y \in [\delta,z],
\end{cases}
\end{align}
with positive constants
\begin{align*}
c_1 &:= \frac{2}{\sigma^2} \exp\left(\frac{3a}{\sigma^2\alpha} \right), \\
c_2 &:= \sup_{u\in[\delta,z]} \sigma^{-2}(u) \exp\left(\frac{3a}{\sigma^2\alpha} \right) \exp\left(\int_\delta^z \frac{b_1(l)}{\sigma^2(l)} \, dl \right).
\end{align*}
This gives
\begin{equation*}
\begin{split}
\int_{\beta\varepsilon}^{z} \int_{\alpha\varepsilon}^{\beta\varepsilon} &\frac{r(y)}{p(w)} \mathbb{E}_y[T_{\alpha\varepsilon}] \,dw\,dy \leq \int_{\beta\varepsilon}^{z} \int_{\alpha\varepsilon}^{\beta\varepsilon} f(y)\mathbb{E}_y[T_{\alpha\varepsilon}] \,dw\,dy \\
&= \varepsilon^2 \int_{\beta}^{z / \varepsilon} \int_{\alpha}^{\beta}f(\varepsilon y)\mathbb{E}_{y\varepsilon}[T_{\alpha\varepsilon}] \,dw\,dy = (\beta-\alpha)\varepsilon^2 \int_{\beta}^{z/\varepsilon} f(\varepsilon y)\mathbb{E}_{y\varepsilon}[T_{\alpha\varepsilon}] \,dy.
\end{split}
\end{equation*}
Estimating the quotient $r/p$ by $f$ as above in our formula for the expectation $\mathbb{E}_{y\varepsilon}[T_{\alpha\varepsilon}]$, one sees that it suffices to show
\begin{align}\label{e:last-estimate-goal}
\limsup_{\varepsilon\downarrow 0} \varepsilon^4 \int_{\beta}^{z/\varepsilon} f(y\varepsilon) \biggl[ \int_{\alpha}^{y} \widehat{y} \, f(\widehat{y}\varepsilon) \,d\widehat{y} + \int_{y}^{z/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \biggr] \,dy <\infty.
\end{align}
For $y \ge \delta/\varepsilon$ we bound the $y$-integrand, using \eqref{e:estimatebyf}, by
\begin{align*}
f(y\varepsilon) \biggl[ \int_{\alpha}^{y} \widehat{y} \, f(\widehat{y}\varepsilon) \,d\widehat{y} + \int_{y}^{z/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \biggr] &\le c_2 \biggl[ \int_{\alpha}^{\delta/\varepsilon} \widehat{y} \, c_1 \frac{1}{\widehat{y}^2\varepsilon^2} \,d\widehat{y} + \int_{\delta/\varepsilon}^{z/\varepsilon} y \, c_2 \, d\widehat{y} \biggr] \\
&\le \frac{c_2c_1}{\varepsilon^2} \ln \left(\frac{\delta}{\alpha\varepsilon}\right) + c_2^2 z \frac{y}{\varepsilon}.
\end{align*}
It follows
\begin{equation}\label{e:last-estimate-I}
\begin{split}
&\varepsilon^4 \int_{\delta/\varepsilon}^{z/\varepsilon} f(y\varepsilon) \biggl[ \int_{\alpha}^{y} \widehat{y} \, f(\widehat{y}\varepsilon) \,d\widehat{y} + \int_{y}^{z/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \biggr] \,dy \\
& \le c_1 c_2 z \, \varepsilon \ln \left(\frac{\delta}{\alpha\varepsilon}\right) + \frac{1}{2} c_2^2 z^3\, \varepsilon \xrightarrow[\varepsilon\to 0]{} 0.
\end{split}
\end{equation}
Analogously
\begin{align*}
\varepsilon^4 \int_{\beta}^{\delta/\varepsilon} f(y\varepsilon) \int_{\delta/\varepsilon}^{z/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \,dy \le \varepsilon^4 \int_{\beta}^{\delta/\varepsilon} \frac{c_1}{y^2\varepsilon^2} c_2 y \frac{z}{\varepsilon} \, dy = c_1c_2 z \, \varepsilon \, \ln\left(\frac{\delta}{\beta\varepsilon}\right) \to 0.
\end{align*}
On the other hand we have
\begin{align*}
\varepsilon^4 \int_{\beta}^{\delta/\varepsilon} f(y\varepsilon) \biggl[ \int_{\alpha}^{y} \widehat{y} \, f(\widehat{y}\varepsilon) \,d\widehat{y} + \int_{y}^{\delta/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \biggr] \,dy \le c_1^2 \int_{\beta}^{\infty} \frac{1}{y^2} \biggl[ \ln\left(\frac{y}{\alpha}\right) +1 \biggr] \,dy,
\end{align*} which is a finite bound independent of $\varepsilon$. Therefore summing the last two estimates shows that
\begin{equation}\label{e:last-estimate-II}
\limsup_{\varepsilon \rightarrow 0} \varepsilon^4 \int_{\beta}^{\delta/\varepsilon} f(y\varepsilon) \biggl[ \int_{\alpha}^{y} \widehat{y} \, f(\widehat{y}\varepsilon) \,d\widehat{y} + \int_{y}^{z/\varepsilon} y \, f(\widehat{y}\varepsilon) \, d\widehat{y} \biggr] \,dy<\infty.
\end{equation}
As \eqref{e:last-estimate-I} and \eqref{e:last-estimate-II} imply \eqref{e:last-estimate-goal} this finishes the proof.
\end{proof}
It remains to consider an arbitrary starting point $x>0$, in other words to prove (B1) and (B2). We first make the following preparation:
\begin{lemma}
In the scaling limit $\lambda \rightarrow \infty$, $\varepsilon \rightarrow 0$ with $\lambda^2p_{\varepsilon,z}=J \in (0,\infty)$ we have \[ p_{\varepsilon,z} \in O(\varepsilon^{b/\sigma^2 +1}) . \] In particular, $\lim\limits_{scaling}\frac{\ln \varepsilon }{\lambda ^2} = 0$.
\end{lemma}
\begin{proof}
By inequality~(\ref{e:p_ezDen}) the denominator in expression~(\ref{e:p_ez}) is bounded away from $0$. For the numerator, assuming $\varepsilon < \delta_0/2$, we are entirely in the regime where the approximations of the coefficient functions given in Remark~\ref{r:taylor} hold. It follows \begin{align*}
&\int_\varepsilon^{2\varepsilon} \exp\left(-\varepsilon \int_{\delta_0}^y \frac{b_1(l)}{\sigma^2(l)}\, dl\right) \exp\left(\int_{\delta_0}^y \frac{b_2(l)}{\sigma^2(l)} \,dl\right)\, dy \\
&\le \varepsilon \int_1^{2} \left(\frac{\sigma^2-My\varepsilon}{\sigma^2 -M\delta_0}\right)^{\varepsilon M (a+\sigma^2)/\sigma^4} \left(\frac{\delta_0}{y\varepsilon}\right)^{\varepsilon M(a+\sigma^2)/\sigma^4} \\
&\qquad\qquad \times \exp\left(\frac{a}{\sigma^2}\left(\frac{1}{y}-\frac{\varepsilon}{\delta_0}\right)\right) \left(\frac{y\varepsilon}{\delta_0}\right)^{b/\sigma^2} \left(\frac{\sigma^2+M\delta_0}{\sigma^2+My\varepsilon}\right)^{b/\sigma^2+1} \, dy.
\end{align*} An application of the dominated convergence theorem finishes the proof.
\end{proof}
\begin{proposition}[First part of (B1)] \label{p:B1p1Proof}
\[ \lim_{scaling}\mathbb{E}_x[T_\varepsilon \wedge T_z] = 0 \text{ for } 0 < x <z. \]
\end{proposition}
\begin{proof}
Making use of Green's kernel $g(x,y) = \frac{2}{K\lambda^2} u(x\wedge y) v(x\vee y)$, where $K := \int_{\varepsilon}^{z}\frac{1}{p(w)}\,dw$, $u(x) := \int_{\varepsilon}^{x} 1/p(w)\,dw$ and $v(x):= \int_{x}^{z} 1/p(w) \,dw$ we write \begin{align*}
\mathbb{E}_x[T_\varepsilon \wedge T_z] &= \frac{2}{K \lambda^2} \left[ v(x)\int_{\varepsilon}^{x}\int_{\varepsilon}^{y} \frac{r(y)}{p(w)} \,dw \, dy + u(x)\int_{x}^{z}\int_{y}^{z} \frac{r(y)}{p(w)} \,dw \, dy \right]\\
&\le \frac{2}{\lambda^2} \left[ \int_{\varepsilon}^{x}\int_{\varepsilon}^{y} \frac{r(y)}{p(w)} \,dw \, dy + \int_{x}^{z}\int_{y}^{z} \frac{r(y)}{p(w)} \,dw \, dy \right]
\end{align*} with integrand \begin{align*}
\frac{r(y)}{p(w)} = \frac{1}{\sigma^2(y)} \exp\left(\varepsilon \int_w^y \frac{b_1(l)}{\sigma^2(l)}\, dl\right) \exp\left(\int_y^w \frac{b_2(l)}{\sigma^2(l)}\,dl\right).
\end{align*} Since $w\ge y$ holds on the integration domain of the second integral, the exponential containing the $\varepsilon$ term is bounded by $1$, so the second integral is bounded and hence, after division by $\lambda^2$, vanishes in the scaling limit. The first integral may be decomposed as \begin{align} \label{e:int0Decomp}
\int_{\varepsilon}^{x}\int_{\varepsilon}^{y} \frac{r(y)}{p(w)} \,dw \, dy = \int_{\varepsilon}^{\delta}\int_{\varepsilon}^{y} \frac{r(y)}{p(w)} \,dw \, dy + \int_{\delta}^{x}\int_{\varepsilon}^{\delta} \frac{r(y)}{p(w)} \,dw \, dy + \int_{\delta}^{x}\int_{\delta}^{y} \frac{r(y)}{p(w)} \,dw \, dy
\end{align} with $\delta:= \delta_0 \wedge x/2$. The last one is bounded, since the $\varepsilon$-exponential is monotonically decreasing. For the second one we note \begin{align*}
&\int_{\delta}^{x}\int_{\varepsilon}^{\delta} \frac{r(y)}{p(w)} \,dw \, dy \\
&\le \exp\left(\varepsilon \int_{\delta}^x \frac{b_1(l)}{\sigma^2(l)}\, dl\right) \sup_{z\in[\delta,x]} \sigma^{-2}(z) \int_{\delta}^{x}\int_{\varepsilon}^{\delta} \exp\left(\varepsilon \int_w^{\delta} \frac{b_1(l)}{\sigma^2(l)}\, dl\right) \,dw \, dy;
\end{align*} the remaining integrand being bounded by \[
\left(\frac{\sigma^2-Mw}{\sigma^2-M\delta} \frac{\delta}{w}\right)^{\varepsilon M(a+\sigma^2)/\sigma^4} \exp\left(\frac{a}{\sigma^2}\varepsilon(w^{-1}-\delta^{-1})\right) \le \left( \frac{2\delta}{\varepsilon}\right)^{\varepsilon M(a+\sigma^2)/\sigma^4} \exp\left(\frac{a}{\sigma^2}\right) .
\] For the first integral in expression~(\ref{e:int0Decomp}) we attain the estimate
\begin{align*}
&\int_{\varepsilon}^{\delta}\int_{\varepsilon}^{y} \frac{r(y)}{p(w)} \,dw \, dy \le 2e^{a/\sigma^2} \int_{1}^{\delta/\varepsilon}\int_{1}^{y} \frac{1}{y^2} \left(\frac{2\delta}{\varepsilon}\right)^{\varepsilon M(a+\sigma^2)/\sigma^4} \,dw \, dy.
\end{align*} For $\varepsilon$ sufficiently small, this is \begin{align*}
&\le 4e^{a/\sigma^2} \int_{1}^{\delta/\varepsilon}\int_{1}^{y} \frac{1}{y^2} \,dw \, dy \le 4e^{a/\sigma^2} \int_{1}^{\delta/\varepsilon} \frac{1}{y} \, dy = 4e^{a/\sigma^2} \ln \delta + 4e^{a/\sigma^2} \ln \frac{1}{\varepsilon}
\end{align*} which, combined with the previously established scaling limit $\frac{\ln \varepsilon}{\lambda^2}\to 0$, proves the assertion.
\end{proof}
In order to finish the proof of (B1) we first show (B2).
\begin{proposition} [B2] \label{p:B2proof}
\[\P_x (T_\varepsilon < T_z) \xrightarrow[\varepsilon\to 0]{} \frac{
\int_x^z \exp \left(-\int_y^z \frac{b_2(l)}{\sigma^2(l)} \,dl\right) \,dy}{\int_0^z \exp \left(-\int_y^z \frac{ b_2(l)}{\sigma^2(l)} \,dl\right) \,dy} \qquad \text{ for } 0< x < z.\]
\end{proposition}
\begin{proof} We first recall that
\begin{align*} \P_x (T_\varepsilon < T_z) = \frac{
\int_x^z \exp \left(\int_y^z \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} \,dl\right) \,dy}{\int_\varepsilon^z \exp \left(\int_y^z \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} \,dl\right) \,dy}.
\end{align*}
By a direct application of dominated convergence \begin{align}\label{e:b2Num}
\lim_{\varepsilon\downarrow 0} \int_x^z \exp \left(\int_y^z \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} \,dl\right) \,dy = \int_x^z \exp \left(-\int_y^z \frac{b_2(l)}{\sigma^2(l)} \,dl\right) \,dy.
\end{align} Since \begin{align*}
&\int_{\varepsilon}^{\delta_0} \exp \left(\varepsilon \int_y^z \frac{ b_1(l) }{\sigma^2(l)} \,dl\right) \,dy \\
&\le \exp \left(\varepsilon \int_{\delta_0}^{z}\frac{b_1(l)}{\sigma^2(l)} \,dl \right) \int_{\varepsilon}^{\delta_0} \left(\frac{2\delta_0}{\varepsilon}\right)^{\varepsilon M (a+\sigma^2)/\sigma^4} \exp(a/\sigma^2) \, dy
\end{align*} we obtain \begin{align}\label{e:b2Den}
\lim_{\varepsilon \rightarrow 0} \int_\varepsilon^z \exp \left(\int_y^z \frac{\varepsilon b_1(l) -b_2(l)}{\sigma^2(l)} \,dl\right) \,dy = \int_0^z \exp \left(-\int_y^z \frac{ b_2(l)}{\sigma^2(l)} \,dl\right) \,dy.
\end{align} Both assertions \eqref{e:b2Num} and \eqref{e:b2Den} together imply the Proposition.
\end{proof}
We now complete the discussion of the example with
\begin{proposition}[Finishing (B1)]
\[ \lim_{scaling}\mathbb{E}_x[\widetilde{T}_\varepsilon ] = 0 \text{ for } x >0. \]
\end{proposition}
\begin{proof}
By Propositions~\ref{p:B1p1Proof} and \ref{p:B2proof}
\begin{align*}
\mathbb{E}_x[\widetilde{T}_{\varepsilon}] = \frac{\mathbb{E}_x[T_\varepsilon \mathds {1}_{\lbrace T_\varepsilon < T_z \rbrace} ]}{\P_x(T_\varepsilon < T_z)} \le \frac{\mathbb{E}_{x}[T_\varepsilon \wedge T_z]}{\P_x(T_\varepsilon < T_z)} \xrightarrow[\varepsilon\downarrow 0]{} 0.
\end{align*}
\end{proof}
\subsection{Homodyne detection of Rabi oscillation}
As is carefully described in \cite{BB17}, an analysis of homodyne detection of Rabi oscillations leads to the following stochastic differential equation on the state space $\Theta=(0,2 \pi)$:
\begin{equation}\label{e:RabiSDE}
d\theta_t=-\lambda^2 \sin\theta_t\bigl(1-\cos\theta_t\bigr)\,dt+\lambda\bigl(1-\cos\theta_t\bigr)\,dB_t.
\end{equation}
Following a suggestion of \cite[sections 2.3, 6.2]{BB17} we investigate a `linearized' version of \eqref{e:RabiSDE}, i.e.\ the case where in \eqref{e:mainSDE}
\[
b_1(x) = 1, \qquad b_2(x) = b\cdot x, \qquad \sigma(x) = x^2,
\]
with $b>0$ a positive real number. Note that in this model $\sigma^2(x)=x^4$.
\begin{remark}[Heuristics for the choice of $\alpha$ and $\beta$]
One way to guess the form of the functions $\alpha$ and $\beta$ appearing in the cycle decomposition is the following. First, it is of course natural to assume that the point where the drift changes sign plays a specific role. Therefore, let us define $\alpha(\varepsilon):=\varepsilon/b$. In order to get an idea of how to choose $\beta(\varepsilon)$, one can first transform the stochastic differential equation using a transformation going back at least to Feller \cite{F52}. We replace $X_t$ by $Y_t := F(X_t)$, where $F(x) := \int_{\infty}^{x} \frac{1}{\sigma(u)} \, du = -1/x$. According to It\^o's lemma (using $F'(x)=x^{-2}$, $F''(x)=-2x^{-3}$ and $\sigma^2(x)=x^4$) the SDE then becomes
\[
dY_t = \left( \frac{1}{2} \left( \varepsilon Y_t^2 + b Y_t \right) + \frac{1}{Y_t} \right) \, dt + dB_t.
\]
Thus we end up with a diffusion process with unit diffusion coefficient. For the diffusion $X$ started from $\alpha(\varepsilon)$ to complete a cycle, it has to get from $\alpha(\varepsilon)$ to $\beta(\varepsilon)$ and back. During a downcrossing from $F(\beta(\varepsilon))$ to $F(\alpha(\varepsilon))$ one makes use of the fact that the drift always points towards $F(\alpha(\varepsilon))$, and it turns out that the deterministic part is strong enough to yield a finite expectation for this part of the cycle.
The diffusion $Y$ also makes an upcrossing from $F(\alpha(\varepsilon))$ to $F(\beta(\varepsilon))$ during a cycle of $X$. During such an upcrossing the drift in the equation for $Y$ is of order $\varepsilon$ near $\alpha(\varepsilon)$, and therefore the Brownian part has to be essential to complete this part of the cycle sufficiently fast. It therefore seems reasonable to take $\beta(\varepsilon)=\alpha(\varepsilon)+\varepsilon^2$, as this gives
\begin{displaymath}
F(\beta(\varepsilon))-F(\alpha(\varepsilon))=\frac{b^2}{1+\varepsilon\,b}.
\end{displaymath}
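Indeed, with $\alpha(\varepsilon)=\varepsilon/b$ and $\beta(\varepsilon)=\varepsilon/b+\varepsilon^2$, a direct computation gives
\begin{displaymath}
F(\beta(\varepsilon))-F(\alpha(\varepsilon)) = \frac{1}{\alpha(\varepsilon)}-\frac{1}{\beta(\varepsilon)} = \frac{b}{\varepsilon}-\frac{b}{\varepsilon(1+b\varepsilon)} = \frac{b}{\varepsilon}\cdot\frac{b\varepsilon}{1+b\varepsilon} = \frac{b^2}{1+\varepsilon b}.
\end{displaymath}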
The exit times of Brownian motion from bounded sets have moments of all orders, so this seems a reasonable first guess. Working with $\beta(\varepsilon)=\alpha(\varepsilon)+\varepsilon$, in contrast, leads to a distance $F(\beta(\varepsilon))-F(\alpha(\varepsilon))$ of order $\varepsilon^{-1}$, and therefore the expected time to complete this part of the cycle can be expected to diverge as $\varepsilon \rightarrow 0$.
\end{remark}
We now show that Theorem~\ref{t:mainThm} applies to this situation, which means that we need to prove $(A2), (A3), (B1)$ and $(B2)$ for
\[
\alpha(\varepsilon) := \varepsilon/b, \qquad \beta(\varepsilon) := \varepsilon/b + \varepsilon^2.
\]
By Taylor's theorem, for $x \ge 1/b$
\begin{align} \label{e:taylor_le}
\frac{1}{3x^3} - \frac{b}{2x^2} \le -\frac{b^3}{6} + \frac{b^5}{2} \left(x-\frac{1}{b}\right)^2
\end{align}
and
\begin{align} \label{e:taylor_ge}
\frac{1}{3x^3} - \frac{b}{2x^2} \ge -\frac{b^3}{6} + \frac{b^5}{2} \left(x-\frac{1}{b}\right)^2 - \frac{4b^6}{3} \left(x-\frac{1}{b}\right)^3.
\end{align}
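Both estimates come from expanding $\psi(x) := \frac{1}{3x^3} - \frac{b}{2x^2}$ around $x=1/b$: one computes
\begin{displaymath}
\psi\left(\tfrac{1}{b}\right) = -\frac{b^3}{6}, \qquad \psi'\left(\tfrac{1}{b}\right) = 0, \qquad \psi''\left(\tfrac{1}{b}\right) = b^5, \qquad \psi'''\left(\tfrac{1}{b}\right) = -8b^6,
\end{displaymath}
which produces the quadratic term $\frac{b^5}{2}(x-\frac{1}{b})^2$ and the cubic term $-\frac{8b^6}{6}(x-\frac{1}{b})^3 = -\frac{4b^6}{3}(x-\frac{1}{b})^3$. That the resulting inequalities are valid globally on $[1/b,\infty)$, and not merely near $x=1/b$, can be verified by elementary monotonicity arguments.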
As preparation for the following proofs we start with
\begin{lemma} \label{l:hitProb}
For $l\ge 0$ \[
\P_{\varepsilon/b + l\varepsilon^2} (T_z < T_{\varepsilon/b}) \sim \varepsilon^2 \exp\left(-\frac{1}{\varepsilon^2} \frac{b^3}{6}\right)\frac{\int_{0}^{l} \exp\left(\frac{b^5}{2}x^2 \right) \,dx}{\int_{0}^{z} \exp\left(- \frac{b}{2x^2}\right) \,dx} \qquad \text{as }\; \varepsilon \downarrow 0 .
\]
\end{lemma}
\begin{proof}
\begin{align*}
\P_{\varepsilon/b + l\varepsilon^2} (T_z < T_{\varepsilon/b}) = \frac{\int_{\varepsilon/b}^{\varepsilon/b+l\varepsilon^2}1/p(x)\, dx}{\int_{\varepsilon/b}^z 1/p(x)\, dx},
\end{align*} where \begin{align*}
\frac{1}{p(x)} &= \exp\left(\varepsilon \int_{x}^{c} \frac{1}{t^4}\, dt\right) \exp\left(b\int_{c}^x \frac{1}{t^3} \, dt\right) \\
& = \exp\left(\frac{\varepsilon}{3} \left(\frac{1}{x^3} - \frac{1}{c^3}\right)\right) \exp\left(\frac{b}{2}\left(\frac{1}{c^2} - \frac{1}{x^2}\right)\right).
\end{align*} Plugging in and reducing the fraction yields \begin{align*}
\frac{\int_{\varepsilon/b}^{\varepsilon/b+l\varepsilon^2}1/p(x)\, dx}{\int_{\varepsilon/b}^z 1/p(x)\, dx} &= \frac{\int_{\varepsilon/b}^{\varepsilon/b+l\varepsilon^2} \exp\left(\frac{\varepsilon}{3x^3} - \frac{b}{2x^2} \right) \, dx}{\int_{\varepsilon/b}^z \exp\left(\frac{\varepsilon}{3x^3} - \frac{b}{2x^2} \right) \, dx}.
\end{align*}
For the numerator, after the substitution $x \mapsto \varepsilon x$ (which contributes a factor $\varepsilon$), we use the estimate~(\ref{e:taylor_le}) and obtain
\begin{align*}
&\int_{1/b}^{1/b+l\varepsilon} \exp\left( \frac{1}{\varepsilon^2} \left(\frac{1}{3x^3} - \frac{b}{2x^2} \right) \right) \,dx \le \exp\left( -\frac{1}{\varepsilon^2} \frac{b^3}{6}\right) \varepsilon \int_{0}^{l}\exp\left( \frac{b^5}{2} x^2 \right) \,dx.
\end{align*}
Estimate~(\ref{e:taylor_ge}) gives
\begin{align*}
&\int_{1/b}^{1/b+l\varepsilon} \exp\left( \frac{1}{\varepsilon^2} \left(\frac{1}{3x^3} - \frac{b}{2x^2} \right) \right) \,dx \\
&\ge \int_{1/b}^{1/b+l\varepsilon} \exp\left( \frac{1}{\varepsilon^2} \left( -\frac{b^3}{6} + \frac{1}{2}b^5 \left(x-\frac{1}{b}\right)^2 - \frac{4}{3}b^6 \left(x-\frac{1}{b}\right)^3 \right) \right) \,dx \\
& \sim \exp\left( -\frac{1}{\varepsilon^2} \frac{b^3}{6} \right) \varepsilon \int_{0}^{l} \exp \left( \frac{1}{2}b^5 x^2 \right) \,dx.
\end{align*}
Summarizing, we arrive at
\begin{equation}\label{e:intEpsEst}
\int_{1/b}^{1/b+l\varepsilon} \exp\left( \frac{1}{\varepsilon^2} \left(\frac{1}{3x^3} - \frac{b}{2x^2} \right) \right) \,dx \sim \exp\left( -\frac{1}{\varepsilon^2} \frac{b^3}{6} \right) \varepsilon \int_{0}^{l} \exp \left( \frac{1}{2}b^5 x^2 \right) \,dx.
\end{equation}
For the denominator \begin{align*}
\int_{\varepsilon/b}^z \exp\left(\frac{\varepsilon}{3x^3} - \frac{b}{2x^2} \right) \, dx = \int_{0}^z \mathds {1}_{\lbrace x>\varepsilon/b \rbrace}\exp\left(\frac{\varepsilon}{3x^3} - \frac{b}{2x^2} \right) \, dx
\end{align*} Since $x\mapsto \varepsilon/(3x^3) - b/(2x^2)$ is non-positive for $x \ge 2\varepsilon/(3b)$, the integrand is bounded between $0$ and $1$, allowing us to pass to the limit $\varepsilon\to 0$ under the integral, which results in \[
\lim_{\varepsilon\to 0} \int_{0}^z \mathds {1}_{\lbrace x>\varepsilon/b \rbrace}\exp\left(\frac{\varepsilon}{3x^3} - \frac{b}{2x^2} \right) \, dx = \int_0^z \exp\left(-\frac{b}{2x^2}\right)\,dx.
\]
Thus the assertion follows by combining these calculations.
\end{proof}
\begin{proposition}[A2] \label{p:rabiA2}
\begin{align*}
\lim\limits_{\varepsilon\downarrow 0} \mathbb{E}_{\varepsilon/b}[\widetilde{\sigma}_1] = \lim\limits_{\varepsilon\downarrow 0} \mathbb{E}_{\varepsilon/b}[\sigma_1] = 4b^4 \int_{0}^{\infty} \int_{0}^{1} \exp\left(\frac{b^5}{2}\left(w^2-y^2\right)\right)\,dw\,dy. \end{align*}
\end{proposition}
\begin{proof}
Using again the appropriate Green kernel we arrive at
\begin{align}\label{e:cycLength}
\mathbb{E}_{\varepsilon/b}[T_{\varepsilon/b+\varepsilon^2}] &= 2\varepsilon^2 \left[ \int_{0}^{1/b} \int_{1/b}^{1/b+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \,dw \, dy + \int_{1/b}^{1/b+\varepsilon} \int_{y}^{1/b+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \,dw \, dy \right]
\end{align}
where we can explicitly write
\begin{align} \label{e:intFrac}
\frac{r(y\varepsilon)}{p(w\varepsilon)} &= \frac{1}{(y\varepsilon)^4} \exp\left( \frac{1}{\varepsilon^2} \left(\left(\frac{1}{3w^3} - \frac{b}{2w^2} \right) - \left( \frac{1}{3y^3} - \frac{b}{2y^2} \right)\right) \right).
\end{align}
Observe that the right hand side of expression~\eqref{e:intFrac} factorizes in a function of $w$ and a function of $y$.
To calculate the limit $\varepsilon\to 0$ of the first term in \eqref{e:cycLength}, we consider the asymptotic behavior of both factors given
by the integral with respect to $y$ and $w$, respectively. We have
\begin{align*}
&\int_{0}^{\frac{1}{b}} \frac{1}{y^4} \exp\left( \frac{1}{\varepsilon^2} \left( - \frac{1}{3y^3} + \frac{b}{2y^2} \right) \right) \,dy
= \exp\left(\frac{b^3}{6\varepsilon^2}\right) \varepsilon \int_0^\infty (y\varepsilon+b)^2 \exp \left(-\frac{b}{2}y^2 - \frac{\varepsilon}{3} y^3\right) \,dy.
\end{align*}
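For the reader's convenience we note that the last equality is obtained via the substitution $y = (\varepsilon u + b)^{-1}$, writing $u$ for the new integration variable: the exponent transforms according to
\begin{displaymath}
\frac{1}{\varepsilon^2}\left(-\frac{1}{3y^3}+\frac{b}{2y^2}\right) = \frac{1}{\varepsilon^2}\left(-\frac{(\varepsilon u+b)^3}{3}+\frac{b(\varepsilon u+b)^2}{2}\right) = \frac{b^3}{6\varepsilon^2}-\frac{b}{2}u^2-\frac{\varepsilon}{3}u^3,
\end{displaymath}
while $y^{-4}\,dy$ transforms into $-\varepsilon(\varepsilon u+b)^{2}\,du$, the sign being absorbed by reversing the limits of integration.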
Making use of (\ref{e:intEpsEst}) in order to find the asymptotic behavior of the integral with respect to $w$ and multiplying both together shows \begin{align*}
&2\varepsilon^2 \int_{0}^{1/b} \int_{1/b}^{1/b+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \,dw \, dy \xrightarrow[\varepsilon\downarrow 0]{} 2b^4 \int_{0}^{\infty} \int_{0}^{1}\exp\left( \frac{b^5}{2} \left(w^2-y^2\right) \right) \,dw\,dy .
\end{align*}
The second summand in expression~(\ref{e:cycLength}) is treated by means of (\ref{e:taylor_ge}) and (\ref{e:taylor_le}). By a very similar analysis we obtain
\begin{align}\begin{split}\label{e:expcycleI}
\mathbb{E}_{\varepsilon/b}[T_{\varepsilon/b+\varepsilon^2}]\xrightarrow[\varepsilon\downarrow 0]{}
&2b^4 \int_{0}^{\infty} \int_{0}^{1}\exp\left( \frac{b^5}{2} \left(w^2-y^2\right) \right) \,dw \,dy \\
& + 2b^4 \int_{0}^{1} \int_{y}^{1} \exp\left(\frac{b^5}{2} \left(w^2-y^2\right)\right) \, dw \, dy < \infty.
\end{split}\end{align}
To infer the expected cycle length of the second phase, where the process starts from $\beta(\varepsilon)=\varepsilon/b +\varepsilon^2$ and is conditioned to hit $\alpha(\varepsilon)=\varepsilon/b$ prior to some arbitrary level $z>\beta(\varepsilon)$, we will again use an $h$-transform in the sense of Doob in order to find the dynamics of the conditioned process.
We find
\begin{align} \label{e:rabi1Mom}
\mathbb{E}_{\frac{\varepsilon}{b}+\varepsilon^2}[T_{\varepsilon/b} \mid T_{\varepsilon/b} < T_z] = 2\varepsilon^2 \left[ \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{\frac{1}{b}}^{y} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy + \int_{\frac{1}{b}+\varepsilon}^{\frac{z}{\varepsilon}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy \right],
\end{align}
where the integrand is given by
\[
\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} = \left(\frac{h(y\varepsilon)}{h(w\varepsilon)}\right)^2 \cdot \frac{r(y\varepsilon)}{p(w\varepsilon)}.
\]
We recall that the harmonic function under consideration is $h(s) := \P_s(T_{\varepsilon/b} < T_z)$.
Let us start with the first summand. Because $w\le y$ holds on the integration domain, the estimate
\[
\frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \le \frac{r(y\varepsilon)}{p(w\varepsilon)}
\]
allows us to use a strategy very similar to that for the first cycle phase. In particular, we have
\begin{align*}
&\limsup_{\varepsilon\to 0} 2\varepsilon^2 \int_{1/b}^{1/b+\varepsilon} \int_{1/b}^{y} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy \le 2b^4 \int_{0}^{1} \int_{0}^{y} \exp\left(\frac{b^5}{2} \left(w^2-y^2\right)\right) \, dw \, dy.
\end{align*}
In order to derive a matching result for the limit inferior we use our standard estimates to find
\begin{align*}
&2\varepsilon^2 \int_{1/b}^{1/b+\varepsilon} \int_{1/b}^{y} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy \\
&\ge 2 \int_{0}^{1} \frac{1}{(y\varepsilon+1/b)^4 } \exp \left( - \frac{1}{2}b^5y^2 \right) \int_{0}^{y} \left(\frac{h((y\varepsilon+1/b)\varepsilon)}{h((w\varepsilon+1/b)\varepsilon)}\right)^2 \exp \left(\frac{1}{2} b^5 w^2 - \varepsilon \frac{4}{3}b^6y^3 \right) \,dw \,dy.
\end{align*}
By the bounded convergence theorem we can interchange the limit and the integrals and use Lemma~\ref{l:hitProb} to conclude
\begin{align} \label{e:intLim}
\begin{split}
\lim\limits_{\varepsilon\downarrow 0} 2\varepsilon^2 \int_{1/b}^{1/b+\varepsilon} \int_{1/b}^{y} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy &= \lim\limits_{\varepsilon\downarrow 0} 2\varepsilon^2 \int_{1/b}^{1/b+\varepsilon} \int_{1/b}^{y} \frac{r(y\varepsilon)}{p(w\varepsilon)} \,dw \, dy \\
&= 2b^4 \int_{0}^{1} \int_{0}^{y} \exp\left(\frac{b^5}{2} \left(w^2-y^2\right)\right) \, dw \, dy.
\end{split}
\end{align}
Here we have used $h((l\varepsilon+1/b)\varepsilon)\to 1$ for $l\in [0,1]$.
We now consider the second term in equation~(\ref{e:rabi1Mom}). We rewrite this term as
\begin{equation}\label{e:last-term-1Mom}
\begin{split}
&2\varepsilon^2 \int_{1/b+\varepsilon}^{z/\varepsilon} \int_{1/b}^{1/b+\varepsilon} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy = 2 \varepsilon^4 \int_{1}^{z/\varepsilon^2-1/(b\varepsilon)} \int_{0}^{1} \frac{r^h((y\varepsilon+1/b)\varepsilon)}{p^h((w\varepsilon+1/b)\varepsilon)} \,dw \, dy\\
&=2 \int_{1}^{\infty} \int_{0}^{1} \mathds {1}_{\lbrace y < z/\varepsilon^2-1/(b\varepsilon) \rbrace} \frac{h((y\varepsilon+1/b)\varepsilon)^2}{h((w\varepsilon+1/b)\varepsilon)^2} \frac{1}{(y\varepsilon+1/b)^4} \\
&\quad \exp\left(\frac{1}{\varepsilon^2} \left( \frac{1}{3(w\varepsilon+1/b)^3} -\frac{b}{2(w\varepsilon+1/b)^2} - \frac{1}{3(y\varepsilon+1/b)^3} + \frac{b}{2(y\varepsilon+1/b)^2} \right) \right) \,dw \, dy.
\end{split}
\end{equation}
Elementary algebra gives
\begin{equation}\label{e:estimateexponent}
\begin{split}
&\frac{1}{\varepsilon^2} \left( \frac{1}{3(w\varepsilon+1/b)^3} -\frac{b}{2(w\varepsilon+1/b)^2} - \frac{1}{3(y\varepsilon+1/b)^3} + \frac{b}{2(y\varepsilon+1/b)^2} \right) \\
&= \frac{b^5(3b^2w^3\varepsilon^2y-3b^2w\varepsilon^2y^3+bw^3\varepsilon + 9bw^2\varepsilon y-9bw \varepsilon y^2 - b\varepsilon y^3 + 3w^2-3y^2)}{6(bw\varepsilon+1)^3(b\varepsilon y+1)^3}.
\end{split}
\end{equation}
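As an illustrative numerical sanity check (not part of the argument), the elementary-algebra identity in \eqref{e:estimateexponent} can be verified at random parameter values; the following Python sketch evaluates both sides with the substitutions $A=w\varepsilon+1/b$ and $B=y\varepsilon+1/b$:

```python
import random

# Numerical check of the identity (e:estimateexponent):
# (1/eps^2) * (1/(3A^3) - b/(2A^2) - 1/(3B^3) + b/(2B^2))
#   = b^5 * P(b, eps, w, y) / (6 (b w eps + 1)^3 (b eps y + 1)^3),
# where A = w*eps + 1/b and B = y*eps + 1/b.

def lhs(b, eps, w, y):
    A = w * eps + 1.0 / b
    B = y * eps + 1.0 / b
    return (1.0 / eps**2) * (1.0 / (3 * A**3) - b / (2 * A**2)
                             - 1.0 / (3 * B**3) + b / (2 * B**2))

def rhs(b, eps, w, y):
    num = b**5 * (3 * b**2 * w**3 * eps**2 * y - 3 * b**2 * w * eps**2 * y**3
                  + b * w**3 * eps + 9 * b * w**2 * eps * y
                  - 9 * b * w * eps * y**2 - b * eps * y**3
                  + 3 * w**2 - 3 * y**2)
    den = 6 * (b * w * eps + 1)**3 * (b * eps * y + 1)**3
    return num / den

random.seed(0)
for _ in range(1000):
    b = random.uniform(0.5, 3.0)
    eps = random.uniform(0.05, 1.0)
    w = random.uniform(0.0, 2.0)
    y = random.uniform(w, 3.0)   # the domain of integration has 0 <= w <= y
    l, r = lhs(b, eps, w, y), rhs(b, eps, w, y)
    assert abs(l - r) <= 1e-6 * max(1.0, abs(l)), (b, eps, w, y, l, r)
```

For instance, at $b=\varepsilon=1$, $w=0$, $y=1$ both sides equal $-1/12$.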
We observe that on the domain of integration in \eqref{e:last-term-1Mom} we always have $0\leq w \leq y$ and that therefore
\begin{displaymath}
3b^2w^3\varepsilon^2y\leq 3b^2w\varepsilon^2y^3,\,bw^3\varepsilon \leq b\varepsilon y^3,\,9bw^2\varepsilon y \leq 9bw \varepsilon y^2.
\end{displaymath}
Estimating the denominator $6(bw\varepsilon+1)^3(b\varepsilon y+1)^3$ in \eqref{e:estimateexponent} by $6$ we conclude that on the domain of integration in \eqref{e:last-term-1Mom}
\begin{displaymath}
\exp\left(\frac{1}{\varepsilon^2} \left( \frac{1}{3(w\varepsilon+1/b)^3} -\frac{b}{2(w\varepsilon+1/b)^2} - \frac{1}{3(y\varepsilon+1/b)^3} + \frac{b}{2(y\varepsilon+1/b)^2} \right) \right) \leq e^{\frac{b^5}{2}(w^2-y^2)}.
\end{displaymath}
Using \eqref{e:estimateexponent} and Lebesgue's dominated convergence theorem then implies
\[
\lim\limits_{\varepsilon \downarrow 0} 2\varepsilon^2 \int_{1/b+\varepsilon}^{z/\varepsilon} \int_{1/b}^{1/b+\varepsilon} \frac{r^h(y\varepsilon)}{p^h(w\varepsilon)} \,dw \, dy = 2b^4 \int_1^\infty \int_0^1 \exp\left(\frac{b^5}{2} \left(w^2-y^2\right)\right) \,dw \,dy.
\]
This gives together with \eqref{e:intLim} the required limit for the cycle phase and adding \eqref{e:expcycleI} therefore finishes the proof.
\end{proof}
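As a numerical plausibility check (not part of the proof), the two limiting integrals obtained above for the second cycle phase can be evaluated with a composite midpoint rule; for $b=1$ the triangle integral from \eqref{e:intLim} and the strip integral $2b^4\int_1^\infty\int_0^1 \exp(\frac{b^5}{2}(w^2-y^2))\,dw\,dy$ are both finite, as asserted. The grid sizes and the tail cutoff $Y$ below are illustrative choices:

```python
import math

def triangle_integral(b, n=400):
    # 2 b^4 * int_0^1 int_0^y exp(b^5 (w^2 - y^2) / 2) dw dy, midpoint rule
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        m = max(1, int(n * y))          # inner grid adapted to the upper limit y
        hw = y / m
        inner = sum(math.exp(0.5 * b**5 * (((j + 0.5) * hw) ** 2 - y * y))
                    for j in range(m)) * hw
        total += inner * h
    return 2.0 * b**4 * total

def strip_integral(b, Y=8.0, n=400):
    # 2 b^4 * int_1^Y int_0^1 exp(b^5 (w^2 - y^2) / 2) dw dy;
    # the integrand factorizes, and the tail beyond Y is negligible for b = 1
    hw = 1.0 / n
    inner = sum(math.exp(0.5 * b**5 * ((j + 0.5) * hw) ** 2)
                for j in range(n)) * hw
    hy = (Y - 1.0) / n
    outer = sum(math.exp(-0.5 * b**5 * (1.0 + (i + 0.5) * hy) ** 2)
                for i in range(n)) * hy
    return 2.0 * b**4 * inner * outer

I1 = triangle_integral(1.0)   # approx 0.85 for b = 1
I2 = strip_integral(1.0)      # approx 0.95 for b = 1
assert 0.0 < I1 < 1.0         # integrand <= 1 on the triangle of area 1/2
assert 0.0 < I2
```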
\begin{proposition}[A3]
\[\limsup_{\varepsilon\downarrow 0} \mathbb{E}_{\varepsilon/b}[\widetilde{\sigma}_1^2] <\infty.\]
\end{proposition}
\begin{proof}
Analogously to the proof of Proposition~\ref{p:e1A3} we use Kac's moment formula and start with showing $\limsup_{\varepsilon\downarrow 0}\mathbb{E}_{\varepsilon/b}[(T_{\varepsilon/b+\varepsilon^2})^2] <\infty$.\\
On the second double integral in \begin{align*}&\mathbb{E}_{\varepsilon/b}[T_{\varepsilon/b+\varepsilon^2}^2] = \text{\Romannum{1}}_\varepsilon + \text{\Romannum{2}}_\varepsilon\\
&=4\varepsilon^2 \left[ \int_0^{\frac{1}{b}}\int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \mathbb{E}_{y\varepsilon}[T_{\frac{\varepsilon}{b}+\varepsilon^2}] \,dw\,dy + \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon}\int_y^{\frac{1}{b}+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \mathbb{E}_{y\varepsilon}[T_{\frac{\varepsilon}{b}+\varepsilon^2}] \,dw\,dy \right]\end{align*} we may estimate $\mathbb{E}_{y\varepsilon}[T_{\frac{\varepsilon}{b}+\varepsilon^2}] \le \mathbb{E}_{\frac{\varepsilon}{b}}[T_{\frac{\varepsilon}{b}+\varepsilon^2}]$ and therefore finiteness follows by the convergence of the first moment shown in Proposition~\ref{p:rabiA2} as by using \eqref{e:cycLength}
\begin{align} \label{e:greenNeglect}
\limsup_{\varepsilon \downarrow 0}\text{\Romannum{2}}_\varepsilon \le \limsup_{\varepsilon \downarrow 0} 2\left(\mathbb{E}_{\varepsilon/b}[T_{\varepsilon/b+\varepsilon^2}]\right)^2 <\infty.
\end{align}
For the first integral we need to show \begin{align} \label{e:upClaim1}
\limsup_{\varepsilon\downarrow 0} \varepsilon^4 \int_{0}^{\frac{1}{b}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{0}^{y} \int_{y}^{\frac{1}{b}+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \frac{r(\widetilde{y}\varepsilon)}{p(\widetilde{w}\varepsilon)} \, d\widetilde{w} \, d\widetilde{y} \, dw \,dy < \infty
\end{align} and \begin{align} \label{e:upClaim2}
\limsup_{\varepsilon\downarrow 0} \varepsilon^4 \int_{0}^{\frac{1}{b}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{y}^{\frac{1}{b}+\varepsilon} \int_{\widetilde{y}}^{\frac{1}{b}+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \frac{r(\widetilde{y}\varepsilon)}{p(\widetilde{w}\varepsilon)} \, d\widetilde{w} \, d\widetilde{y} \, dw \,dy < \infty.
\end{align}
Using (\ref{e:intFrac}) and (\ref{e:intEpsEst}) and writing $f(x) := \frac{1}{3x^3} - \frac{b}{2x^2}$ we conclude \begin{align*}&\varepsilon^4 \int_{0}^{\frac{1}{b}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{0}^{y} \int_{y}^{\frac{1}{b}+\varepsilon} \frac{r(y\varepsilon)}{p(w\varepsilon)} \frac{r(\widetilde{y}\varepsilon)}{p(\widetilde{w}\varepsilon)} \, d\widetilde{w} \, d\widetilde{y} \, dw \,dy \sim \int_{0}^{1} \exp\left(\frac{b^5}{2} w^2 \right) \, dw \times \\
&\times \frac{1}{\varepsilon^3} \int_{0}^{\frac{1}{b}} \int_{0}^{y} \int_{y}^{\frac{1}{b}+\varepsilon} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2} \left(-\frac{b^3}{6} - f(y) +f(\widetilde{w}) -f(\widetilde{y}) \right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy.
\end{align*} Passing to the reciprocals, translating by $b$ and enlarging the integration domain yields \begin{align*}
&\frac{1}{\varepsilon^3} \int_{0}^{\frac{1}{b}} \int_{0}^{y} \int_{y}^{\frac{1}{b}+\varepsilon} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2} \left(-\frac{b^3}{6} - f(y) +f(\widetilde{w}) -f(\widetilde{y}) \right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy\\
&\le \frac{1}{\varepsilon^3} \int_{0}^{\infty} \int_{y}^{\infty} \int_{-b^2\varepsilon}^{y} \left(\frac{(y+b)(\widetilde{y}+b)}{\widetilde{w}+b}\right)^2 \\ &\qquad \qquad \qquad \times \exp\left(\frac{1}{\varepsilon^2} \left(-\frac{b^3}{6} - f\left(\frac{1}{y+b}\right) +f\left(\frac{1}{\widetilde{w}+b}\right) -f\left(\frac{1}{\widetilde{y}+b}\right) \right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy.\end{align*}
Since $-b^2\varepsilon \le \widetilde{w} \le y$, \begin{align*}
&-\frac{b^3}{6} - f\left(\frac{1}{y+b}\right) +f\left(\frac{1}{\widetilde{w}+b}\right) -f\left(\frac{1}{\widetilde{y}+b}\right) \le -\frac{b}{2}\widetilde{y}^2 + \frac{b^5}{2}\varepsilon^2;
\end{align*} we continue our estimate by extending the integration domain and using Fubini's theorem, deducing by dominated convergence
\begin{align*}
&\exp\left(\frac{b^5}{2}\right)\int_{0}^{\infty} \int_{y}^{\infty} \int_{-b^2}^{y} \left(\frac{(y\varepsilon+b)(\widetilde{y}\varepsilon+b)}{\widetilde{w}\varepsilon+b}\right)^2 \exp\left(-\frac{b}{2}\widetilde{y}^2\right) \, d\widetilde{w} \, d\widetilde{y} \,dy\\
&=\exp\left(\frac{b^5}{2}\right)\int_{0}^{\infty} \int_{0}^{\infty} \int_{-b^2}^{y} \left(\frac{(y\varepsilon+b)(\widetilde{y}\varepsilon+y\varepsilon+b)}{\widetilde{w}\varepsilon+b}\right)^2 \exp\left(-\frac{b}{2}\left(\widetilde{y}+y\right)^2\right) \, d\widetilde{w} \, d\widetilde{y} \,dy\\
&\le\exp\left(\frac{b^5}{2}\right)\int_{-b^2}^{\infty} \int_{0}^{\infty} \int_{\widetilde{w}}^{\infty} \left(\frac{(y\varepsilon+b)(\widetilde{y}\varepsilon+y\varepsilon+b)}{\widetilde{w}\varepsilon+b}\right)^2 \exp\left(-\frac{b}{2}\left(\widetilde{y}+y\right)^2\right) dy \, d\widetilde{y} \, d\widetilde{w}\\
&=\exp\left(\frac{b^5}{2}\right)\int_{-b^2}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \left(\frac{(y\varepsilon+\widetilde{w}\varepsilon+b)(\widetilde{y}\varepsilon+y\varepsilon+\widetilde{w}\varepsilon+b)}{\widetilde{w}\varepsilon+b}\right)^2 \exp\left(-\frac{b}{2}\left(\widetilde{y}+y+\widetilde{w}\right)^2\right) dy \, d\widetilde{y} \, d\widetilde{w}\\
&\xrightarrow[]{\varepsilon\to 0}\exp\left(\frac{b^5}{2}\right)b^2\int_{-b^2}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \exp\left(-\frac{b}{2}\left(\widetilde{y}+y+\widetilde{w}\right)^2\right) dy \, d\widetilde{y} \, d\widetilde{w} < \infty
\end{align*}
which shows (\ref{e:upClaim1}). The proof of (\ref{e:upClaim2}) is very similar to that of (\ref{e:upClaim1}). Reusing the transformation $x\mapsto \frac{1}{x+b}$ yields the bound
\begin{align*}
&\int_{0}^{\infty} \int_{-b^2}^{y} \int_{-b^2}^{\widetilde{y}} \left(\frac{(y\varepsilon+b)(\widetilde{y}\varepsilon+b)}{\widetilde{w}\varepsilon+b}\right)^2 \exp\left(\frac{b}{2}\left(\widetilde{w}^2-\widetilde{y}^2-y^2\right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy.
\end{align*} Noting $\widetilde{w}^2-\widetilde{y}^2 \le b^4$ and again using Fubini's theorem on the extended integration domain, we end up with the same expression with $y$ and $\widetilde{y}$ interchanged, hence the same quantity.\\
We now move on to the second cycle phase, i.e. proving \[\limsup_{\varepsilon\downarrow 0}\mathbb{E}_{\varepsilon/b+\varepsilon^2}[T_{\varepsilon/b}^2 \mid T_{\varepsilon/b} < T_z] <\infty.\]
In the spirit of \eqref{e:greenNeglect} it suffices to consider one summand, and the analogues of \eqref{e:upClaim1} and \eqref{e:upClaim2} are \begin{align*}
\limsup_{\varepsilon\downarrow 0} \varepsilon^4 \int_{\frac{1}{b}+\varepsilon}^{\frac{z}{\varepsilon}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{\frac{1}{b}}^{y} \int_{\frac{1}{b}}^{\widetilde{y}} \frac{r(y\varepsilon)}{p(w\varepsilon)} \frac{r(\widetilde{y}\varepsilon)}{p(\widetilde{w}\varepsilon)} \, d\widetilde{w} \, d\widetilde{y} \, dw \,dy < \infty
\end{align*} and \begin{align*}
\limsup_{\varepsilon\downarrow 0} \varepsilon^4 \int_{\frac{1}{b}+\varepsilon}^{\frac{z}{\varepsilon}} \int_{\frac{1}{b}}^{\frac{1}{b}+\varepsilon} \int_{y}^{\frac{z}{\varepsilon}} \int_{\frac{1}{b}}^{y} \frac{r(y\varepsilon)}{p(w\varepsilon)} \frac{r(\widetilde{y}\varepsilon)}{p(\widetilde{w}\varepsilon)} \, d\widetilde{w} \, d\widetilde{y} \, dw \,dy < \infty.
\end{align*}
Again using (\ref{e:intEpsEst}) and enlarging the domain of integration, it is sufficient to show, with the familiar abbreviation $f(x):= \frac{1}{3x^3} - \frac{b}{2x^2}$,
\begin{align} \label{e:downClaim1}
\limsup_{\varepsilon\downarrow 0} \frac{1}{\varepsilon^3} \int_{\frac{1}{b}}^{\infty} \int_{\frac{1}{b}}^{y} \int_{\frac{1}{b}}^{\widetilde{y}} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2}\left(-\frac{b^3}{6}-f(y)+f(\widetilde{w})-f(\widetilde{y})\right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy < \infty
\end{align} and \begin{align} \label{e:downClaim2}
\limsup_{\varepsilon\downarrow 0} \frac{1}{\varepsilon^3} \int_{\frac{1}{b}}^{\infty} \int_{y}^\infty \int_{\frac{1}{b}}^{y} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2}\left(-\frac{b^3}{6}-f(y)+f(\widetilde{w})-f(\widetilde{y})\right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy < \infty.
\end{align}
We use similar techniques resulting in
\begin{align*}
&\frac{1}{\varepsilon^3} \int_{\frac{1}{b}}^{\infty} \int_{\frac{1}{b}}^{y} \int_{\frac{1}{b}}^{\widetilde{y}} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2}\left(-\frac{b^3}{6}-f(y)+f(\widetilde{w})-f(\widetilde{y})\right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy \\
&= \frac{1}{\varepsilon^3} \int_{0}^{b} \int_{0}^{y} \int_{0}^{\widetilde{y}} \left(\frac{(b-y)(b-\widetilde{y})}{b-\widetilde{w}}\right)^2 \exp\left(\frac{1}{\varepsilon^2} \left(f\left(\frac{1}{y}\right) -f\left(\frac{1}{\widetilde{w}}\right) +f\left(\frac{1}{\widetilde{y}}\right) \right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy.\end{align*}
Since $f$ is monotonically increasing on $[1/b,\infty)$, the difference $f(1/\widetilde{y})-f(1/\widetilde{w})$ is non-positive, which combined with the fact that $f(1/y)=y^3/3-by^2/2 \le -by^2/6$ yields the estimate \begin{align*}
&\int_{0}^{b/\varepsilon} \int_{0}^{y} \int_{0}^{\widetilde{y}} \left(\frac{(b-y\varepsilon)(b-\widetilde{y}\varepsilon)}{b-\widetilde{w}\varepsilon}\right)^2 \exp\left( -\frac{b}{6}y^2 \right) \, d\widetilde{w} \, d\widetilde{y} \,dy \le \frac{b^2}{2}\int_{0}^{\infty} y^2 \exp\left( -\frac{b}{6}y^2 \right) \,dy
\end{align*}
showing (\ref{e:downClaim1}).\\
Analogously for \eqref{e:downClaim2} \begin{align*}
&\frac{1}{\varepsilon^3} \int_{\frac{1}{b}}^{\infty} \int_{y}^\infty \int_{\frac{1}{b}}^{y} \frac{1}{(y\widetilde{y})^4} \exp\left(\frac{1}{\varepsilon^2}\left(-\frac{b^3}{6}-f(y)+f(\widetilde{w})-f(\widetilde{y})\right)\right) \, d\widetilde{w} \, d\widetilde{y} \,dy\\
&\le \int_{0}^{b/\varepsilon} \int_{y}^{b/\varepsilon} \int_{0}^{y} \left(\frac{(b-y\varepsilon)(b-\widetilde{y}\varepsilon)}{b-\widetilde{w}\varepsilon}\right)^2 \exp\left( -\frac{b}{6}\widetilde{y}^2 \right) \, d\widetilde{w} \, d\widetilde{y} \,dy\\
&\le b^2 \int_{0}^{\infty} \int_{y}^{\infty} y \cdot \exp\left( -\frac{b}{6}\widetilde{y}^2 \right) \, d\widetilde{y} \,dy = \frac{3\sqrt{6 \pi b}}{4}.
\end{align*}
\end{proof}
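The closed form $\frac{3\sqrt{6 \pi b}}{4}$ obtained at the end of the proof can be double-checked numerically: by Fubini, $b^2\int_0^\infty\int_y^\infty y\,e^{-b\widetilde y^2/6}\,d\widetilde y\,dy = b^2\int_0^\infty \frac{\widetilde y^2}{2}\,e^{-b\widetilde y^2/6}\,d\widetilde y$, which a midpoint rule reproduces. The following Python sketch (the cutoff $Y$ and grid size are illustrative choices) confirms this for two values of $b$:

```python
import math

def gaussian_moment_check(b, Y=25.0, n=4000):
    # b^2 * int_0^Y (t^2 / 2) * exp(-b t^2 / 6) dt via the composite
    # midpoint rule; the tail beyond Y is negligible for moderate b
    h = Y / n
    total = sum(0.5 * ((i + 0.5) * h) ** 2
                * math.exp(-b * ((i + 0.5) * h) ** 2 / 6.0)
                for i in range(n)) * h
    return b * b * total

for b in (1.0, 2.0):
    numeric = gaussian_moment_check(b)
    exact = 3.0 * math.sqrt(6.0 * math.pi * b) / 4.0
    assert abs(numeric - exact) <= 1e-3 * exact, (b, numeric, exact)
```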
\begin{proposition}[First part of assertion (B1)]
\[ \lim_{scaling}\mathbb{E}_x[T_{\varepsilon/b} \wedge T_z] = 0 \text{ for } 0 < x <z. \]
\end{proposition}
\begin{proof}
As in the first example class we are in the situation \[\mathbb{E}_x[T_{\varepsilon/b} \wedge T_z] \le \frac{2}{\lambda^2} \left[ \int_{\varepsilon/b}^{x}\int_{\varepsilon/b}^{y} \frac{r(y)}{p(w)} \,dw \, dy + \int_{x}^{z}\int_{y}^{z} \frac{r(y)}{p(w)} \,dw \, dy \right]\] with the second integral being bounded. Using Lemma~\ref{l:hitProb} and equation~(\ref{e:intFrac}) we infer \[ \frac{2}{\lambda^2} \int_{\varepsilon/b}^{x}\int_{\varepsilon/b}^{y} \frac{r(y)}{p(w)} \,dw \, dy = 2\int_{1/b}^{x/\varepsilon}\int_{1/b}^{y} \frac{1}{y^4} \exp\left(\frac{1}{\varepsilon^2} \left(-\frac{b^3}{6} + f(w) - f(y)\right)\right) \,dw \, dy. \] By observing that \[f:[1/b,\infty)\to\R,\quad f(x):= \frac{1}{3x^3} - \frac{b}{2x^2}\] is monotonically increasing, the claimed convergence is readily seen.
\end{proof}
\begin{proposition}[B2]
\[\P_x (T_{\varepsilon/b} < T_z) \xrightarrow[\varepsilon\to 0]{} \frac{
\int_x^z \exp \left(-\frac{b}{2y^2}\right) \,dy}{\int_0^z \exp \left(-\frac{b}{2y^2}\right) \,dy} \qquad \text{ for } 0< x < z.\]
\end{proposition}
\begin{proof}
The scale function approach leads to \[\P_x (T_{\varepsilon/b} < T_z) = \frac{
\int_x^z \exp \left(\frac{\varepsilon}{3y^3}-\frac{b}{2y^2}\right) \,dy}{\int_{\varepsilon/b}^z \exp \left(\frac{\varepsilon}{3y^3}-\frac{b}{2y^2}\right) \,dy};\]
the dominated convergence theorem may be applied to numerator and denominator separately, finishing the proof. For the denominator observe that $\frac{\varepsilon}{3y^3}-\frac{b}{2y^2} \le -b/(6y^2)$ holds on the domain of integration.
\end{proof}
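To illustrate the dominated-convergence step, the scale-function ratio can be evaluated numerically for a concrete choice of parameters (here $b=1$, $x=0.5$, $z=1$; an illustrative Python sketch, not part of the proof) and compared with the claimed limit:

```python
import math

def midpoint(f, a, c, n=4000):
    # composite midpoint rule on [a, c]
    h = (c - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def hitting_prob(b, x, z, eps):
    # P_x(T_{eps/b} < T_z) via the scale-function formula
    g = lambda y: math.exp(eps / (3.0 * y**3) - b / (2.0 * y * y))
    return midpoint(g, x, z) / midpoint(g, eps / b, z)

def limit_prob(b, x, z):
    # claimed limit: int_x^z e^{-b/(2y^2)} dy / int_0^z e^{-b/(2y^2)} dy
    g0 = lambda y: math.exp(-b / (2.0 * y * y))
    return midpoint(g0, x, z) / midpoint(g0, 0.0, z)

b, x, z = 1.0, 0.5, 1.0
L = limit_prob(b, x, z)
assert 0.0 < L < 1.0
assert abs(hitting_prob(b, x, z, 0.01) - L) < 0.01
```

Note that on $y \ge \varepsilon/b$ the combined exponent is bounded above by $-b/(6y^2)$, so no overflow can occur in the evaluation.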
\begin{proposition}[Finishing (B1)]
\[ \lim_{scaling}\mathbb{E}_x[\widetilde{T}_{\varepsilon/b} ] = 0 \text{ for } x >0. \]
\end{proposition}
\begin{proof}
As in the first example class.
\end{proof}
\section{Conclusion}
This work was mainly motivated by \cite{BB17} of M. Bauer and D. Bernard. Using a clear probabilistic heuristic we prove a version of Conjecture B under general abstract conditions and demonstrate their usability in the example sections. We believe that the approach presented above is flexible enough to cover most one-dimensional examples of interest. As already discussed in \cite{BB17}, the natural question of extending the results to multi-dimensional situations remains unanswered, even though numerical simulations seem very promising in the sense that a point process could be obtained in an appropriate scaling regime. The tools and key concepts used throughout our approach appear relatively general, and it would clearly be interesting to see whether the approach of this work can be extended to higher-dimensional situations. We leave this for future investigation.
\section{Acknowledgment}
The authors thank M. Bauer, D. Bernard, A. Klump, A. Tilloy and P. Trykacz for useful discussions about the topic of this work.
Distingué synthwave composer Color Theory's new song premiered
Tuesday, March 03, 2020 news, Release
One-man synthwave act Color Theory premiered 'This Whole Nothing' via Atwood Magazine on the 27th of February 2020. The release is due on 13 March 2020 via 11th Records. The artist has had support from acclaimed YouTube channels such as NewRetroWave, The '80s Guy, LuigiDonatello, and SoulSearchAndDestroy. His work has been streamed close to 3 million times to date. Brian Hazard (Color Theory) won first prize at the John Lennon Songwriting Contest, which led to three of his songs being featured in the Just Dance and Rock Band video game series. MTV's The Real World has featured his music in multiple song placements.
Brian was raised in California, where he played piano in his high school jazz band. He later earned a Bachelor of Music degree in Piano Performance. His early work includes a track titled 'Ponytail Girl', which was mistakenly labeled as a Depeche Mode b-side. Color Theory is a fan of Depeche Mode and incorporates that influence into his music. The artist was amused by the mistake, saying, "It's pretty neat to be mistaken for your favorite band". Inspired by the mix-up, he produced an album titled Color Theory presents Depeche Mode to pay homage to the band.
'This Whole Nothing' grew out of filling spaces in his music with adorned electronics and nostalgic synths without overwhelming the listener. Color Theory elaborates further on the song: "'This Whole Nothing' is one of those songs that people will read into as they will. The lyrics are just vague enough to be universal. My inspiration was the spacious emptiness one experiences when the voice of the mind quiets and the world can be seen as it truly is."
Follow Color Theory:
Website - Facebook - Twitter - Soundcloud - Instagram - Youtube - Spotify
Distingué synthwave composer Color Theory's new song premiered Reviewed by J Tuck on Tuesday, March 03, 2020 Rating: 5
Big Little Lies Season 2 Eyes Spring 2018 Production Start
Michael Baculinao 5 years ago
Last September, TVOvermind reported that a second season of the hit HBO limited series Big Little Lies was a strong possibility. Now a production window has been targeted, according to a report by TVLine: HBO is eyeing a Spring 2018 production start for the second season of Big Little Lies. The show's writer and executive producer, David E. Kelley, recently implied in an interview that the show is nearing its scheduling stage.
The cast has not yet been locked in to return, but all are expected to come back except for Alexander Skarsgard, whose character died in the finale (unless he appears in a flashback). Jean-Marc Vallee, who helmed the first season, is not expected to return to direct the second season due to scheduling conflicts; the producers are currently searching for a female director to succeed him. Vallee was previously against the plans for a second season but reversed his stance after the show swept the Emmys.
To clear her schedule for the production of the second season, Reese Witherspoon has stepped back from the Fox Searchlight sci-fi pic Pale Blue Dot, from Noah Hawley, which she had been set to produce and star in. The project is based on a spec by Brian C. Brown and Elliott DiGuiseppi and tells the story of a female astronaut who, upon returning to Earth from a mission in space, begins to slowly unravel and lose touch with reality. It is unknown whether this will also affect the production of Witherspoon's morning show drama with Jennifer Aniston that recently landed at Apple. There is a possibility that production of that show will start after Witherspoon finishes her work on Big Little Lies season 2.
During the 69th Primetime Emmy Awards, Big Little Lies won eight awards including Outstanding Limited Series and Outstanding Lead Actress in a Limited Series or TV Movie for Nicole Kidman. We'll keep you updated for any news regarding this.
Jeremiah "Jerry" P. Thomas (October 30, 1830 – December 15, 1885) was an American bartender who owned and operated saloons in New York City. Because of his pioneering work in popularizing cocktails across the United States, he is considered "the father of American mixology". In addition to writing the seminal work on cocktails, Bar-Tender's Guide, Thomas displayed creativity and showmanship while preparing drinks and established the image of the bartender as a creative professional. As such, he was often nicknamed "Professor" Jerry Thomas.
Early life, education and work
Thomas was born about 1830 in Sackets Harbor, New York. (His 1885 obituary in the New York Times said 1832.) As a young man, he learned bartending in New Haven, Connecticut before sailing for California during its mid-19th century Gold Rush. While in California, Thomas worked as a bartender, gold prospector and minstrel show manager. According to his 1885 obituary, he was left some money by his father, which helped in these travels.
Saloon keeper and bartender
Thomas moved back to the East Coast in 1851, settling in New York City. He opened a saloon below Barnum's American Museum; it would be the first of four saloons he would run in New York City over his lifetime. After running this first bar, Thomas went on the road for several years, working as the head bartender at hotels and saloons in St. Louis, Missouri; Chicago, Illinois; San Francisco, California; Charleston, South Carolina; and New Orleans, Louisiana. At one point he toured Europe, carrying along a set of solid-silver bar tools. He was well known for his showmanship as a bartender: he developed elaborate and flashy techniques of mixing cocktails, sometimes while juggling bottles, cups and mixers. He often wore flashy jewelry and had his bar tools and cups embellished with precious stones and metals. At the Occidental Hotel in San Francisco, Thomas was earning $100 a week—more than the Vice President of the United States.
Bar-Tender's Guide
In 1862, Thomas finished Bar-Tender's Guide (alternately titled How to Mix Drinks or The Bon-Vivant's Companion), the first drink book ever published in the United States. The book collected and codified what was then an oral tradition of recipes from the early days of cocktails, including some of his own creations; the guide laid down the principles for formulating mixed drinks of all categories. He would update it several times in his lifetime to include new drinks that he discovered or created. The first edition of the guide included the first written recipes of such cocktails as the Brandy Daisy, Fizz, Flip, Sour and variations of the earliest form of mixed drink, Punch. The 1876 edition included the first written recipe for the Tom Collins, which appeared just after The Tom Collins Hoax of 1874.
Virginia City, Nevada
From Imbibe! by David Wondrich:
"The fortunes of Thomas' book were likely affected by the Professor's next move: rather than stay at the Occidental (SF), where he could have passed the volume along to the steady stream of clay-moistening literati who stopped in at his bar, he pulled up stakes yet again and headed east to witness the vast and vulgar spectacle that was unfolding 200 miles away in Virginia City, Nevada, where a city of 30,000 had sprung up overnight on top of the massive mountain of silver known as the Comstock Lode. By 1864, Thomas was there, either (as local legend has it) at the famous Delta Saloon or at the Spalding Saloon on C Street, where the city directory found him, or, of course, at both."
Sign located to the right of the entrance doors of The Delta Saloon in Virginia City, Nevada:
Nunc Est Bibendum
Head Bartender at the Delta Saloon in 1863 was Prof. Jerry Thomas, most celebrated barman in American history. Coming to Virginia City, according to the Territorial Enterprise** of that year, from the Occidental in San Francisco, he did much to elevate the tastes and drinking habits of the then uncouth Comstock.
** Territorial Enterprise newspaper (Samuel Clemens, aka Mark Twain, reporter)
San Francisco & the Blue Blazer
Thomas developed his signature drink, the Blue Blazer, at the El Dorado gambling saloon in San Francisco. The drink is made by lighting whiskey afire and passing it back and forth between two mixing glasses, creating an arc of flame. Thomas continued to develop new drinks throughout his life. His mixing of the "Martinez", whose recipe was published in the 1887 edition of his guide, has sometimes been viewed as a precursor to the modern martini. Thomas claimed to have invented the Tom and Jerry and did much to popularize it in the United States; however, the history of the drink predated him.
In New York City
Upon returning to New York City, he became head bartender at the Metropolitan hotel. In 1866 he opened his own bar again, on Broadway between 21st and 22nd Streets, which became his most famous establishment. Thomas was one of the first to display the drawings of Thomas Nast. In his saloon he hung Nast's caricatures of the political and theatrical figures; one notable drawing, now lost, was of Thomas "in nine tippling postures colossally." The saloon included funhouse mirrors. This historic bar has been adapted for use as a Restoration Hardware store.
Thomas was an active man about town, a flashy dresser who was fond of kid gloves and his gold Parisian watch. He enjoyed going to bare-knuckle prize fights, and was an art collector. He enjoyed traveling. By middle age he was married and had two daughters. Always a good sport, at 205 pounds he was one of the lighter members of the Fat Men's Association. He had a side interest in gourds; at one point in the late 1870s, Thomas served as president of The Gourd Club after producing the largest specimen.
Later years and death
Toward the end of his life, Thomas tried speculating on Wall Street, but bad judgments rendered him broke. He had to sell his successful saloon and auction off his considerable art collection; he tried opening a new bar but was unable to maintain the level of popularity as his more famous location. He died in New York City of a stroke (apoplexy) in 1885 at the age of 55. His death was marked by substantial obituaries across the United States. The New York Times obituary noted that Thomas was "at one time better known to club men and men about town than any other bartender in this city, and he was very popular among all classes." He is interred at Woodlawn Cemetery in the Bronx, New York City.
History
While the Occidental Cigar Club was established in 2001, its roots go far back in San Francisco history to the 1860s and The Occidental Hotel. This "Quiet House of Peculiar Excellence" was the first real cocktail lounge in the City. Principal Barman Professor Jerry Thomas plied his trade there and was the originator of the Martini. The first bi-coastal celebrity bartender, he brought civility to the bar scene as well as creativity to mixology. The woodcut etchings found in his book, The Bon Vivant's Companion or How To Mix Drinks, adorn our walls, and his spirit is embodied in the drinks we pour today. The Occidental Cigar Club pays homage to that San Francisco institution.
Sites Today
The Delta Saloon, Virginia City, Nevada
Occidental Cigar Club, San Francisco
The Jerry Thomas Speakeasy, Rome, Italy
Bibliography
Thomas is known to have authored two books: How to Mix Drinks, or The Bon-Vivant's Companion (originally published in 1862, with new and updated editions in 1876, and again posthumously in 1887) and Portrait Gallery of Distinguished Bar-Keepers (originally published in 1867 and considered a lost book).
The titles below are given as outside-cover title / inside-cover title.
How to Mix Drinks / How to Mix Drinks, or The Bon-Vivant's Companion (Dick & Fitzgerald Publishers, 1862)
Portrait Gallery of Distinguished Bar-Keepers (1867)
How to Mix Drinks (1876)
Jerry Thomas' Bar-Tenders Guide / The Bar-Tender's Guide, or How to Mix All Kinds of Plain and Fancy Drinks - An Entirely New and Enlarged Edition (Fitzgerald Publishing Corporation, 1887)
Legacy and honors
In March 2003, a tribute was held for Jerry Thomas at the Oak Room at the Plaza Hotel in New York City, where bartenders gathered to make the many cocktails published in his books. The event was organized by David Wondrich, author of Esquire Drinks and a later biography of Thomas, and Slow Food, the organization devoted to traditional preparations of food.
Thomas is featured in the exhibits of the Museum of the American Cocktail, founded in 2004.
Cocktail writer David Wondrich has written a book about Jerry Thomas entitled Imbibe!: From Absinthe Cocktail to Whiskey Smash, a Salute in Stories and Drinks to "Professor" Jerry Thomas, Pioneer of the American Bar. The book includes an extensively researched biography of Jerry Thomas, as well as the majority of Thomas' cocktails, taken directly from his books and adapted to modern-day measurement methods (i.e., 1 ounce as opposed to 1 pony). The book was first published in 2007 and was the first cocktail book to win a James Beard Award. After years of additional research, Wondrich published a revised edition in 2015.
The Jerry Thomas Speakeasy, opened in Rome, Italy, is named for the bartender.
References
Further reading
David Wondrich, Imbibe! (Perigee Books, 2007; Penguin Group, 2015), a biography of Jerry Thomas and annotated recipe book of his drinks, by the drink correspondent for Esquire.
External links
Digitized copy of the 1862 edition of How to Mix Drinks or The Bon-Vivant's Companion from Google Book Search
1830 births
1885 deaths
Bartenders
American bartenders
People from Sackets Harbor, New York
Burials at Woodlawn Cemetery (Bronx, New York)
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Reading epochs from a raw FIF file — MNE 0.12.0 documentation</title>
<link rel="stylesheet" href="../../_static/basic.css" type="text/css" />
<link rel="stylesheet" href="../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="../../_static/gallery.css" type="text/css" />
<link rel="stylesheet" href="../../_static/bootswatch-3.3.4/flatly/bootstrap.min.css" type="text/css" />
<link rel="stylesheet" href="../../_static/bootstrap-sphinx.css" type="text/css" />
<link rel="stylesheet" href="../../_static/style.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '../../',
VERSION: '0.12.0',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="../../_static/jquery.js"></script>
<script type="text/javascript" src="../../_static/underscore.js"></script>
<script type="text/javascript" src="../../_static/doctools.js"></script>
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="../../_static/js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="../../_static/js/jquery-fix.js"></script>
<script type="text/javascript" src="../../_static/bootstrap-3.3.4/js/bootstrap.min.js"></script>
<script type="text/javascript" src="../../_static/bootstrap-sphinx.js"></script>
<link rel="shortcut icon" href="../../_static/favicon.ico"/>
<link rel="top" title="MNE 0.12.0 documentation" href="../../index.html" />
<link rel="up" title="Examples Gallery" href="../index.html" />
<link rel="next" title="Creating MNE objects from data arrays" href="plot_objects_from_arrays.html" />
<link rel="prev" title="Reading and writing raw files" href="plot_read_and_write_raw_data.html" />
<link href='http://fonts.googleapis.com/css?family=Open+Sans:400italic,700italic,400,700' rel='stylesheet' type='text/css'>
<link rel="canonical" href="https://mne.tools/stable/index.html" />
</head>
<body role="document">
<div id="navbar" class="navbar navbar-default navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<!-- .btn-navbar is used as the toggle for collapsed navbar content -->
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".nav-collapse">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="../../index.html"><img src="../../_static/mne_logo_small.png">
</a>
<span class="navbar-text navbar-version pull-left"><b>0.12.0</b></span>
</div>
<div class="collapse navbar-collapse nav-collapse">
<ul class="nav navbar-nav">
<li><a href="../../getting_started.html">Get started</a></li>
<li><a href="../../tutorials.html">Tutorials</a></li>
<li><a href="../index.html">Gallery</a></li>
<li><a href="../../python_reference.html">API</a></li>
<li><a href="../../manual/index.html">Manual</a></li>
<li><a href="../../faq.html">FAQ</a></li>
<li class="dropdown globaltoc-container">
<a role="button"
id="dLabelGlobalToc"
data-toggle="dropdown"
data-target="#"
href="../../index.html">Site <b class="caret"></b></a>
<ul class="dropdown-menu globaltoc"
role="menu"
aria-labelledby="dLabelGlobalToc"><ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../getting_started.html">Getting started</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../tutorials.html">Tutorials</a></li>
<li class="toctree-l1 current"><a class="reference internal" href="../index.html">Examples Gallery</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../contributing.html">Contribute to MNE</a></li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../python_reference.html">Python API Reference</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../manual/index.html">User Manual</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../whats_new.html">What’s new</a></li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../cite.html">How to cite MNE</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../references.html">Related publications</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../cited.html">Publications from MNE users</a></li>
</ul>
</ul>
</li>
<li class="dropdown">
<a role="button"
id="dLabelLocalToc"
data-toggle="dropdown"
data-target="#"
href="#">Page <b class="caret"></b></a>
<ul class="dropdown-menu localtoc"
role="menu"
aria-labelledby="dLabelLocalToc"><ul>
<li><a class="reference internal" href="#">Reading epochs from a raw FIF file</a></li>
</ul>
</ul>
</li>
</ul>
<form class="navbar-form navbar-right" action="../../search.html" method="get">
<div class="form-group">
<input type="text" name="q" class="form-control" placeholder="Search" />
</div>
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<p class="logo"><a href="../../index.html">
<img class="logo" src="../../_static/mne_logo_small.png" alt="Logo"/>
</a></p><ul>
<li><a class="reference internal" href="#">Reading epochs from a raw FIF file</a></li>
</ul>
<li>
<a href="plot_read_and_write_raw_data.html" title="Previous Chapter: Reading and writing raw files"><span class="glyphicon glyphicon-chevron-left visible-sm"></span><span class="hidden-sm hidden-tablet">« Reading and w...</span>
</a>
</li>
<li>
<a href="plot_objects_from_arrays.html" title="Next Chapter: Creating MNE objects from data arrays"><span class="glyphicon glyphicon-chevron-right visible-sm"></span><span class="hidden-sm hidden-tablet">Creating MNE ... »</span>
</a>
</li>
<form action="../../search.html" method="get">
<div class="form-group">
<input type="text" name="q" class="form-control" placeholder="Search" />
</div>
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="col-md-12 content">
<div class="section" id="reading-epochs-from-a-raw-fif-file">
<span id="sphx-glr-auto-examples-io-plot-read-epochs-py"></span><h1>Reading epochs from a raw FIF file<a class="headerlink" href="#reading-epochs-from-a-raw-fif-file" title="Permalink to this headline">¶</a></h1>
<p>This script shows how to read the epochs from a raw file given
a list of events. For illustration, we compute the evoked responses
for both MEG and EEG data by averaging all the epochs.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr></span>
<span class="c1"># Matti Hamalainen <msh@nmr.mgh.harvard.edu></span>
<span class="c1">#</span>
<span class="c1"># License: BSD (3-clause)</span>
<span class="kn">import</span> <span class="nn">mne</span>
<span class="kn">from</span> <span class="nn">mne</span> <span class="kn">import</span> <span class="n">io</span>
<span class="kn">from</span> <span class="nn">mne.datasets</span> <span class="kn">import</span> <span class="n">sample</span>
<span class="k">print</span><span class="p">(</span><span class="n">__doc__</span><span class="p">)</span>
<span class="n">data_path</span> <span class="o">=</span> <span class="n">sample</span><span class="o">.</span><span class="n">data_path</span><span class="p">()</span>
</pre></div>
</div>
<p>Set parameters</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">raw_fname</span> <span class="o">=</span> <span class="n">data_path</span> <span class="o">+</span> <span class="s1">'/MEG/sample/sample_audvis_filt-0-40_raw.fif'</span>
<span class="n">event_fname</span> <span class="o">=</span> <span class="n">data_path</span> <span class="o">+</span> <span class="s1">'/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'</span>
<span class="n">event_id</span><span class="p">,</span> <span class="n">tmin</span><span class="p">,</span> <span class="n">tmax</span> <span class="o">=</span> <span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.2</span><span class="p">,</span> <span class="mf">0.5</span>
<span class="c1"># Setup for reading the raw data</span>
<span class="n">raw</span> <span class="o">=</span> <a href="../../generated/mne.io.read_raw_fif.html#mne.io.read_raw_fif"><span class="n">io</span><span class="o">.</span><span class="n">read_raw_fif</span></a><span class="p">(</span><span class="n">raw_fname</span><span class="p">)</span>
<span class="n">events</span> <span class="o">=</span> <a href="../../generated/mne.read_events.html#mne.read_events"><span class="n">mne</span><span class="o">.</span><span class="n">read_events</span></a><span class="p">(</span><span class="n">event_fname</span><span class="p">)</span>
<span class="c1"># Set up pick list: EEG + MEG - bad channels (modify to your needs)</span>
<span class="n">raw</span><span class="o">.</span><span class="n">info</span><span class="p">[</span><span class="s1">'bads'</span><span class="p">]</span> <span class="o">+=</span> <span class="p">[</span><span class="s1">'MEG 2443'</span><span class="p">,</span> <span class="s1">'EEG 053'</span><span class="p">]</span> <span class="c1"># bads + 2 more</span>
<span class="n">picks</span> <span class="o">=</span> <a href="../../generated/mne.pick_types.html#mne.pick_types"><span class="n">mne</span><span class="o">.</span><span class="n">pick_types</span></a><span class="p">(</span><span class="n">raw</span><span class="o">.</span><span class="n">info</span><span class="p">,</span> <span class="n">meg</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">eeg</span><span class="o">=</span><span class="bp">False</span><span class="p">,</span> <span class="n">stim</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">eog</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span>
<span class="n">exclude</span><span class="o">=</span><span class="s1">'bads'</span><span class="p">)</span>
<span class="c1"># Read epochs</span>
<span class="n">epochs</span> <span class="o">=</span> <a href="../../generated/mne.Epochs.html#mne.Epochs"><span class="n">mne</span><span class="o">.</span><span class="n">Epochs</span></a><span class="p">(</span><span class="n">raw</span><span class="p">,</span> <span class="n">events</span><span class="p">,</span> <span class="n">event_id</span><span class="p">,</span> <span class="n">tmin</span><span class="p">,</span> <span class="n">tmax</span><span class="p">,</span> <span class="n">proj</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span>
<span class="n">picks</span><span class="o">=</span><span class="n">picks</span><span class="p">,</span> <span class="n">baseline</span><span class="o">=</span><span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">preload</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span>
<span class="n">reject</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">grad</span><span class="o">=</span><span class="mf">4000e-13</span><span class="p">,</span> <span class="n">mag</span><span class="o">=</span><span class="mf">4e-12</span><span class="p">,</span> <span class="n">eog</span><span class="o">=</span><span class="mf">150e-6</span><span class="p">))</span>
<span class="n">evoked</span> <span class="o">=</span> <span class="n">epochs</span><span class="o">.</span><span class="n">average</span><span class="p">()</span> <span class="c1"># average epochs to get the evoked response</span>
</pre></div>
</div>
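<p>The core of what <code>mne.Epochs</code> does above — cutting a fixed window around each event sample and dropping epochs that would run past the edges of the recording — can be sketched in plain NumPy. This is an illustrative simplification, not MNE's actual implementation; the function name <code>extract_epochs</code> and all shapes are invented for the example.</p>

```python
import numpy as np

def extract_epochs(data, events, sfreq, tmin, tmax):
    """Cut fixed-length windows around event samples.

    data   : (n_channels, n_times) continuous signal
    events : sample indices of the events
    tmin, tmax : window limits in seconds relative to each event
    Returns an (n_epochs, n_channels, n_window) array; events whose
    window would fall outside the recording are skipped.
    """
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq)) + 1  # endpoint included, as in MNE
    epochs = []
    for ev in events:
        lo, hi = ev + start, ev + stop
        if lo < 0 or hi > data.shape[1]:
            continue  # drop epochs that run off the edge of the data
        epochs.append(data[:, lo:hi])
    return np.array(epochs)

# Tiny demo: 2 channels, 1000 samples at 100 Hz, events at three samples
rng = np.random.default_rng(0)
data = rng.standard_normal((2, 1000))
epochs = extract_epochs(data, events=[100, 500, 995], sfreq=100.0,
                        tmin=-0.2, tmax=0.5)
print(epochs.shape)  # (2, 2, 71): the event at sample 995 is dropped
```

<p>Amplitude-based rejection such as the <code>reject=dict(...)</code> thresholds above would simply discard additional windows from this stack before averaging.</p>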
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-python"><div class="highlight"><pre><span></span>Opening raw data file /home/ubuntu/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
Read a total of 4 projection items:
PCA-v1 (1 x 102) idle
PCA-v2 (1 x 102) idle
PCA-v3 (1 x 102) idle
Average EEG reference (1 x 60) idle
Current compensation grade : 0
Range : 6450 ... 48149 = 42.956 ... 320.665 secs
Ready.
72 matching events found
Applying baseline correction (mode: mean)
Created an SSP operator (subspace dimension = 3)
4 projection items activated
Loading data for 72 events and 106 original time points ...
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on MAG : [u'MEG 1711']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
Rejecting epoch based on EOG : [u'EOG 061']
17 bad epochs dropped
</pre></div>
</div>
<p>Show result</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">evoked</span><span class="o">.</span><span class="n">plot</span><span class="p">()</span>
</pre></div>
</div>
<img alt="../../_images/sphx_glr_plot_read_epochs_001.png" class="align-center" src="../../_images/sphx_glr_plot_read_epochs_001.png" />
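<p>For intuition, the <code>epochs.average()</code> call that produced this evoked response reduces the (n_epochs, n_channels, n_times) stack by a simple mean over the epochs axis. A NumPy sketch of that reduction (all shapes invented for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
# 55 retained epochs, 2 channels, 71 time points (illustrative shapes)
epochs_data = rng.standard_normal((55, 2, 71))
evoked_data = epochs_data.mean(axis=0)  # average across epochs
print(evoked_data.shape)  # (2, 71)
```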
<p><strong>Total running time of the script:</strong>
(0 minutes 1.421 seconds)</p>
<div class="sphx-glr-download container">
<strong>Download Python source code:</strong> <a class="reference download internal" href="../../_downloads/plot_read_epochs.py"><code class="xref download docutils literal"><span class="pre">plot_read_epochs.py</span></code></a></div>
<div class="sphx-glr-download container">
<strong>Download IPython notebook:</strong> <a class="reference download internal" href="../../_downloads/plot_read_epochs.ipynb"><code class="xref download docutils literal"><span class="pre">plot_read_epochs.ipynb</span></code></a></div>
</div>
</div>
</div>
</div>
<footer class="footer">
<div class="container">
<p class="pull-right">
<a href="#">Back to top</a>
<br/>
</p>
<p>
© Copyright 2012-2016, MNE Developers. Last updated on 2016-05-10.<br/>
</p>
</div>
</footer>
<script src="https://mne.tools/versionwarning.js"></script>
</body>
</html>
AACSB Data and Research Blog
AACSB International research staff writing about data, trends, and practices in business education.
Re(Designing the B-School Curriculum)
By Elliot Davis
Business schools are preparing students for the future of work, iterating and redesigning their programs to get ahead of industry trends and to develop content that is immediately relevant while also offering lasting, even lifetime, value. This is certainly a tall order, but it is a challenge that many business schools have taken on. The recently concluded Curriculum Conference: Re(Invest+Think+Design) explored ways in which a curriculum may be redesigned to balance these needs. As a precursor to this conference, AACSB hosted three symposia on the topic of Redesigning the MBA. The symposia agendas focused on a number of topical areas relevant to the MBA, including managing in a global context, leadership development, innovation and creativity, critical thinking and communication skills, and experiential learning. We reached out to the attendees of those symposia years later to check in and see what changes had been made to their programs following their time at the event.
What we found was quite intriguing. Seventy-four percent of those surveyed indicated that they had attended the symposium because their school was either considering or in the midst of an MBA redesign. As such, it was not surprising that since attending the symposia, 64 percent of respondents reported changes made to their MBA curriculum and/or program (86 percent of those stating that the symposium impacted the redesign). When asked what changes were made, 81 percent of respondents noted changes had been made to course or program content; 41 percent shared that changes had been made to their program's architecture (which would consist largely of formatting changes, e.g., changing credit hour requirements); and 37 percent mentioned changes in pedagogical approach (such as new modes of teaching delivery).
Many of the content-related changes involved the creation of a new set of coursework, including business plan capstones, professional development (and "soft" competency development) activities, and case competitions. Some schools have sought to close the gap between academe and industry. For instance, a school in India noted the creation of a program that partners students with businesses for their final term, with the student working in an internship-like capacity. A U.K.-based university has developed a module designed for the student's personal career development, focusing on self-reflection and post-graduation planning.
With respect to architectural changes, schools reported a wide range of outcomes, from minor tweaks to entire overhauls of their programs. Some of the reported changes were more modest in nature, such as a school in Canada that gave MBA students the option of waiving two courses if they would instead complete a project. A more significant change came from a school in India, which changed from a trimester system to a semester system, along with redesigning several existing courses and introducing a new foundations program for incoming students. Several schools reported reducing the credit-hour requirement on "core" coursework, or essential classes, while increasing elective credit-hour requirements. Changes of this sort may be aimed at providing students with added flexibility in tailoring their studies to their own preferences and career aspirations.
The pedagogical changes were just as varied, with schools implementing new modes of delivery, such as online and distance-learning courses. Additionally, there has been a growth in focus on creating learning environments and objectives that are experiential in nature. One school in Oceania mentioned a changed pedagogical approach to teaching through the introduction of an experientially based strategic consulting program. A school in Canada introduced a preparatory math and Microsoft Excel (spreadsheet software) "boot camp," designed to ensure that all students have an equal footing in these areas.
If you are interested in learning more about what other schools are doing within their curriculum and what should be done to better prepare your school's programming for the future, keep your eye on the Curriculum Development Series. Until then, what has your school been redesigning in the curriculum?
Elliot Davis on 10 June 2015 in Best and Effective Practices, Innovation in Curricula and the Classroom, MBA, Program Data | Permalink
Contents
Cover
Sources of the Texts
Introduction
I. The Principle of Relativity
1. On the Electrodynamics of Moving Bodies
2. Does the Inertia of a Body Depend Upon Its Energy Content?
3. On the Influence of Gravitation on the Propagation of Light
4. The Foundation of the General Theory of Relativity
5. Hamilton's Principle and the General Theory of Relativity
6. Cosmological Considerations on the General Theory of Relativity
7. Do Gravitational Fields Play an Essential Part in the Structure of the Elementary Particles of Matter?
II. Relativity: The Special and the General Theory
Preface
Part One. On the Special Theory of Relativity
Part Two. On the General Theory of Relativity
Part Three. Considerations on the Universe as a Whole
Appendix 1. A Simple Derivation of the Lorentz Transformation (Supplement to Section 11)
Appendix 2. Minkowski's Four-Dimensional World (Supplement to Section 17)
Appendix 3. On the Experimental Confirmation of the General Theory of Relativity
Appendix 4. The Structure of Space in Connection with the General Theory of Relativity
III. Further Considerations on Relativity
1. Ether and the Theory of Relativity
2. Geometry and Experience
IV. The Meaning of Relativity (anthology)
1. Space and Time in Pre-Relativity Physics
V. The Evolution of Physics (anthology)
1. Field and Relativity
2. Quanta
VI. Autobiographical Notes
1. Autobiographical Notes
VII. Out of My Later Years (anthology)
1. The Theory of Relativity
2. E = mc²
3. What Is the Theory of Relativity?
4. Physics and Reality
5. The Fundamentals of Theoretical Physics
6. The Common Language of Science
7. The Laws of Science and the Laws of Ethics
8. An Elementary Derivation of the Equivalence of Mass and Energy
Notes
Credits
### SOURCES OF THE TEXTS

The present volume is an anthology of Albert Einstein's major texts. In preparing it, existing Spanish translations were used wherever possible. Only when an essay had not previously appeared in Spanish was it translated expressly from the English original. The reader should therefore bear in mind that each section stands on its own; the equivalences and styles employed, above all in the nomenclature and the presentation of formulas, are valid within each text independently of the rest. The original titles of the articles are given in footnotes to each section; general information on the various sources is detailed below:

The texts included in the first part (THE PRINCIPLE OF RELATIVITY) were translated by Javier García Sanz on various occasions: «Sobre la electrodinámica de los cuerpos en movimiento», in John Stachel, ed., _Einstein 1905: un año milagroso_, Crítica, Barcelona, 2001, pp. 111-144; «¿Depende la inercia de un cuerpo de su contenido de energía?», in John Stachel, ed. cit., 2001, pp. 145-150; «Sobre la influencia de la gravitación en la propagación de la luz», in S. Hawking, ed., _A hombros de gigantes. Las grandes obras de la física y la astronomía_, Crítica, Barcelona, 2003, pp. 1055-1062; «El fundamento de la teoría de la relatividad general», in S. Hawking, ed. cit., pp. 1062-1106; «El principio de Hamilton y la teoría de la relatividad general», in S. Hawking, ed. cit., pp. 1106-1111; «Consideraciones cosmológicas sobre la teoría de la relatividad general», in S. Hawking, ed. cit., pp. 1111-1119; «¿Desempeñan los campos gravitatorios un papel esencial en la estructura de las partículas elementales de la materia?», in S. Hawking, ed. cit., pp. 1120-1126.

The second part (RELATIVITY: THE SPECIAL AND THE GENERAL THEORY) reproduces the work _Sobre la teoría de la relatividad especial y general_ published by Alianza, Madrid, 1999 (4th ed., 2006), pp. 7-146, in the Spanish translation by Miguel Paredes Larrucea.

The article «Geometría y experiencia» included in the third part (FURTHER CONSIDERATIONS ON RELATIVITY) was translated into Spanish for the first time by Javier García Sanz. The text «El éter y la teoría de la relatividad», by contrast, reproduces the version published in José Manuel Sánchez Ron, ed., _Albert Einstein_, Crítica, Barcelona, 2005, pp. 135-145, in the Spanish translation by Mercedes García Garmilla.

The essay «El espacio y el tiempo en la física pre-relativista» included in the fourth part (THE MEANING OF RELATIVITY) likewise appears here in Spanish for the first time, in a version by Javier García Sanz.

The texts of the fifth part («Campo y relatividad» and «Los cuantos») come from the Spanish version published in Albert Einstein and Leopold Infeld, _La evolución de la física_, Salvat, Barcelona, 1993, pp. 99-198 and 199-239, respectively.

The «Notas autobiográficas» that make up the sixth part of this volume are taken from José Manuel Sánchez Ron, ed., _Albert Einstein_, Crítica, Barcelona, 2005, pp. 43-83, Spanish translation by Adriana Castelar.

Finally, one of the texts («Física y realidad») included in the last part of this volume (OUT OF MY LATER YEARS) reproduces Ana Goldar's version included in _Mis ideas y opiniones_, Bosch, Barcelona, 1980, pp. 261-291. The rest («La teoría de la relatividad», «_E=mc²_», «¿Qué es la teoría de la relatividad?», «Los fundamentos de la física teórica», «El lenguaje común de la ciencia», «Las leyes de la ciencia y las leyes de la ética», «Una derivación elemental de la equivalencia de masa y energía») were translated for the first time by Javier García Sanz.
### INTRODUCCIÓN
_Hace un par de años, el mundo celebró el centenario del año milagroso de Einstein, año en que el científico revolucionó la física al publicar una serie de nuevas ideas asombrosas que trajeron consigo cambios profundos en la manera en que los físicos contemplaban el universo. La intuición humana nos dice que el espacio es un escenario en el cual se desarrollan los acontecimientos de nuestras vidas y que el tiempo está gobernado por un reloj universal. Pero en 1905 y durante la siguiente década, Einstein demostró que espacio y tiempo no tienen significados idénticos para un observador sentado en una silla que para quien está volando en un avión. Tampoco para los que orbitan con nosotros alrededor de la Tierra, respecto de los que están tomando un té en el Cúmulo de Virgo o los que están siendo engullidos por un agujero negro_.
_Hubo una vez en que las ideas de Einstein dejaron estupefactos a los físicos. Hoy esas ideas se incorporan automáticamente a las ecuaciones y formalismos que aprenden los estudiantes de física. Tal como Einstein escribió en uno de los artículos de esta colección, si sus planteamientos se adoptaban como algo válido, los alemanes lo llamarían un «sabio alemán» y los ingleses un «judío suizo». Pero si un día sus ideas fueran desacreditadas, se convertiría automáticamente en un «judío suizo» para los alemanes y en un «sabio alemán» para los ingleses. Hoy quedan pocos físicos que recuerden a Einstein como un vivaz, agudo e ingenioso ser humano. Hoy sus ideas de espacio y tiempo entrelazados están integradas en la cultura popular, incluso descritas por diversos autores a lo largo de generaciones posteriores. Pero el más lúcido, por no decir entretenido, defensor de las ideas de Einstein ha sido siempre el propio Einstein_.
_Tal y como se describe en este volumen, la teoría de Einstein de la relatividad especial de 1905 creció a partir de una simple observación. Las leyes del electromagnetismo descubiertas por James Clerk Maxwell en la década de 1860 mostraban que si uno se acerca a un haz de luz o se aleja de él, la luz siempre se le aproxima a la misma velocidad. A partir de nuestra experiencia diaria, esto no resultaría cierto. Si corres alejándote de un tren sobrevivirás durante unos pocos segundos más que si corres acercándote a él (asumiendo que no tienes intención de saltar hacia un lado). En el primer caso, la velocidad a la que se aproxima el tren será la diferencia entre su velocidad y la tuya (en referencia a la vía). En el segundo caso, la velocidad a la que se aproxima será la suma de las dos velocidades. Según las leyes de Maxwell, sin embargo, esto no se aplica a la luz emitida por el faro del tren. ¿Cómo es posible que la velocidad de la luz no parezca más lenta en el primer caso que en el segundo_?
_Por velocidad entendemos distancia recorrida dividida por el tiempo de viaje. Por tanto, como Einstein señaló, si tomamos las leyes de Maxwell al pie de la letra, debemos cambiar nuestras ideas de espacio y tiempo. No son fijos e invariables: se ajustan según el observador, curvándose o dilatándose en su justa medida para mantener constante la velocidad de la luz. Esa misma curvatura y dilatación significan, por supuesto, que la velocidad a la que el tren se aproxima no es simplemente la suma o la diferencia que se ha descrito con anterioridad. Sin embargo, a velocidades muy por debajo de la velocidad de la luz, la diferencia entre la simple suma o resta y el formalismo desarrollado por Einstein es insignificante. La misma lógica llevada más allá requiere también la equivalencia entre masa y energía, razón por la cual tenemos energía atómica y, por desgracia, armas atómicas. Los detalles del razonamiento de Einstein, y la simplicidad de su álgebra, no encuentran una mejor explicación que la hallada aquí, en palabras del propio Einstein_.
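Como esbozo ilustrativo ajeno al texto original, la regla relativista de composición de velocidades que aquí se describe puede comprobarse numéricamente. Se toman unidades en las que la velocidad de la luz vale V = 1; los nombres de función son hipotéticos:

```python
# Esbozo ilustrativo (ajeno al texto original): composición relativista
# de velocidades, en unidades con velocidad de la luz V = 1.
def componer(u, v, V=1.0):
    """Velocidad resultante de componer u y v según la relatividad especial."""
    return (u + v) / (1.0 + u * v / V**2)

# A velocidades cotidianas la corrección frente a la suma galileana es mínima:
lenta = componer(1e-7, 1e-7)
# Componer la velocidad de la luz con cualquier otra devuelve la de la luz:
luz = componer(1.0, 0.5)
```

A velocidades pequeñas el resultado es prácticamente la suma galileana, mientras que componer cualquier velocidad con la de la luz devuelve de nuevo exactamente la velocidad de la luz.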
_La teoría general de la relatividad también nació de una simple observación. En las teorías del movimiento de Newton aparece una magnitud llamada masa que determina cuán fácil es acelerar un objeto cuando se le aplica una fuerza. Un camión pesado es mucho más difícil de acelerar a cierta velocidad que un ligero Volkswagen. En tiempos de Newton se conocían tres tipos de fuerzas: electricidad, magnetismo y gravedad. La resistencia al cambio de velocidad en las leyes de movimiento de Newton no depende de qué tipo de fuerza se aplica. Pero Newton también descubrió la ley por la que se regía una de ellas, la fuerza de la gravedad. En esa ley aparece otra magnitud que determina la atracción gravitatoria que un objeto ejerce sobre otro, y la atracción gravitatoria que un objeto sufre en presencia de otro objeto. Esa magnitud también se llama masa. Las dos definiciones de masa juegan papeles bastante distintos, pero las dos se llaman masa por una buena razón: resultan ser la misma cosa. ¿Por qué deberían ser equivalentes? Esta cuestión, sumada a la lógica brillante de Einstein, le llevó a darse cuenta de que la estructura de espacio y tiempo reacciona en presencia de materia y energía_.
_«En estos tiempos», decía Einstein, «cuando la experiencia nos compele a buscar una nueva y más sólida fundamentación, el físico no puede simplemente entregar al filósofo la contemplación crítica de los fundamentos teóricos, porque nadie mejor que él puede explicar con mayor acierto dónde aprieta el zapato». Einstein no estaba sólo interesado en la ciencia, sino también en la filosofía y en el lenguaje de la ciencia, e incluso en sus implicaciones éticas. Algunos de sus escritos sobre estos temas también están incluidos en este libro. Y aunque Einstein escribió las palabras anteriores en 1936, hoy estamos en una época en la que los físicos también andan en busca de unos nuevos fundamentos, una época en la cual tales cuestiones metafísicas tienen tanta relevancia como la tenían entonces. Hoy, desde que Einstein describió espacio y tiempo como variables dinámicas, vemos que el universo no tiene sólo una historia, sino cualquier historia posible_.
_No sólo contemplamos la posibilidad de un espacio-tiempo curvo, sino también la de que el universo tenga dimensiones adicionales. Y especulamos minuciosamente sobre el significado de tales conceptos: si están bien definidos o si son sólo aproximados. Hoy perseguimos una teoría unificada de todas las fuerzas, así como una estructura espacio-temporal en la que encaje el universo en expansión que observamos. Es una búsqueda que Einstein habría aprobado, y para la que los extraordinarios trabajos de este volumen proporcionan los cimientos necesarios_.
S. HAWKING
# I
## EL PRINCIPIO DE LA RELATIVIDAD
A veces podemos engañarnos y pensar que los grandes avances científicos, como la teoría de la relatividad de Einstein, se desarrollaron a partir de la nada y de forma completamente independiente de trabajos anteriores. En «El Principio de la Relatividad», vemos el contexto a partir del cual Einstein desarrolló su teoría, incluyendo alguno de los artículos fundamentales en los que se basó.
Para situar en su contexto este trabajo, lo mejor es considerar el _statu quo_ de la física a principios del siglo XX. En 1864, James Clerk Maxwell desarrolló una teoría completa de electricidad y magnetismo, y demostró que un campo eléctrico es creado por una carga estacionaria, y que un campo magnético se genera por una carga en movimiento. Se entendían, pues, como fuerzas de naturaleza distinta.
Hendrik A. Lorentz, en una serie de artículos publicados en 1895 y 1904, planteaba una cuestión aparentemente simple. ¿Qué sucede si tenemos una carga en reposo y nosotros pasamos por su lado corriendo? Lorentz demostró que para un observador en movimiento, una carga en reposo «parecerá» una carga en movimiento, y por tanto, un campo eléctrico parecerá un campo magnético. Lorentz demostró además que para un observador en movimiento, una onda electromagnética se propagará a la misma velocidad que para un observador estacionario: a la velocidad de la luz.
En 1905 Einstein llegó a una conclusión similar: los fundamentos de las fuerzas eléctrica y magnética están relacionados entre sí, y pueden aparecer en proporciones distintas para observadores moviéndose a velocidades distintas. Pero Einstein demostró mucho más. Postuló que todas las leyes físicas deben ser válidas en cualquier «sistema de referencia inercial» (es decir, que se mueve a velocidad fija en módulo y dirección), y que para cualquiera de sus observadores la velocidad de la luz será una constante.
Estas suposiciones estaban bien de acuerdo tanto con las teorías de Maxwell como con el trabajo experimental de Michelson y Morley, quienes demostraron que la luz viaja a velocidad constante sin importar el movimiento de la Tierra. Einstein postuló que dos observadores con barras métricas y relojes idénticos, que se mueven el uno respecto del otro verán más corta la barra métrica de la otra persona, y verán cómo el reloj del otro funciona más lentamente. En esta aparente paradoja se basa la esencia de la relatividad.
Las transformaciones entre sistemas en movimiento, convencionalmente conocidas como transformaciones de Lorentz, trajeron consigo una importante corrección a las leyes de movimiento de sir Isaac Newton. Según Newton, el aplicar una fuerza constante a un cuerpo lo acelera. Y hacerlo indefinidamente aumentará la velocidad del cuerpo sin límites. Sin embargo, la teoría de la relatividad de Einstein demostró que nada puede sobrepasar la velocidad de la luz (Newton estaba equivocado, pero sólo en el límite en que las velocidades se aproximan a la de la luz).
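El límite señalado puede ilustrarse con un cálculo sencillo, ajeno al original: una fuerza constante aplicada durante un tiempo t comunica un momento p = F·t. En mecánica newtoniana la velocidad p/m crece sin límite, mientras que con el momento relativista p = mν/√(1 − (ν/V)²) la velocidad nunca alcanza la de la luz (en el esbozo, V = 1 y los nombres son hipotéticos):

```python
import math

# Esbozo ilustrativo (ajeno al original): un momento p = F*t comunicado por
# una fuerza constante. Newton: v = p/m crece sin límite; en relatividad,
# despejando v de p = m*v/sqrt(1-(v/V)^2), siempre v < V.
def v_newton(p, m):
    return p / m

def v_relativista(p, m, V=1.0):
    return (p / m) / math.sqrt(1.0 + (p / (m * V))**2)

# Por grande que sea el momento, la velocidad relativista no alcanza V = 1:
velocidades = [v_relativista(p, m=1.0) for p in (1.0, 10.0, 1000.0)]
```

A momentos pequeños ambas fórmulas prácticamente coinciden, que es el sentido en que «Newton estaba equivocado sólo en el límite» de velocidades próximas a la de la luz.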
Einstein se dio cuenta de que la relatividad estaba incompleta. Sólo se aplicaba a sistemas en los cuales los cuerpos se mueven a velocidades constantes, mientras que en campos gravitacionales los cuerpos están constantemente en aceleración. Así que entre 1911 y 1916 desarrolló su «teoría general de la relatividad» en varios artículos históricos, cuyos resultados principales están descritos en las secciones 7 y 8 de «El principio de la relatividad».
En uno de sus «experimentos mentales» (conocidos como _gedankenexperiments_ ), Einstein postuló que no debía haber ninguna diferencia entre un experimento realizado en un ascensor quieto en el suelo, y uno que está siendo acelerado hacia arriba en el espacio abierto. Dado que un sistema de referencia en aceleración provocará que todos los proyectiles, incluyendo los haces de luz, sean curvados, Einstein demostró que la luz sería curvada por campos gravitacionales. De hecho, la teoría general recalca que son _espacio y tiempo_ los que se curvan, y que la luz, o cualquier otro objeto, simplemente sigue una «línea recta» a lo largo del espacio-tiempo.
Como John Archibald Wheeler decía: «la materia dice al espacio-tiempo cómo curvarse, y el espacio-tiempo dice a la materia cómo moverse». Einstein se dio cuenta de que sus ecuaciones no sólo podían gobernar el comportamiento de haces de luz y estrellas, sino también del universo como conjunto. Notó que el universo podía no ser estático y que debía o bien expandirse o bien colapsar, y así es como la relatividad general constituye la base del campo que ahora se conoce como cosmología, tal y como se detalla en la sección 10.
Para forzar al universo a un estado permanentemente estático, Einstein introdujo un término ad hoc en sus ecuaciones de campo conocido como la «constante cosmológica». Cuando en 1929 Edwin Hubble descubrió que el universo se expandía, Einstein reconoció su error y se refirió a la constante cosmológica como «la mayor metedura de pata de mi vida». Recientemente, la constante cosmológica ha sido reintroducida en la cosmología con una nueva forma: la «energía oscura» que impregna el universo. Recientes observaciones de supernovas lejanas sugieren que la energía oscura es la que proporciona el combustible para una aceleración del universo.
El modelo con el que Einstein se presentó sigue aún vigente, y todavía no ha fallado en ninguna de las pruebas u observaciones a gran escala. Cuando uno lee sus pensamientos sobre estos asuntos, lo que prevalece de manera extraordinariamente notable es todo lo que él, y posteriores investigadores, fueron capaces de deducir a partir de supuestos iniciales tan simples.
### 1
### SOBRE LA ELECTRODINÁMICA DE LOS CUERPOS EN MOVIMIENTO*
Es bien sabido que, cuando se aplica a cuerpos en movimiento, la electrodinámica de Maxwell tal como hoy se entiende normalmente conduce a asimetrías que no parecen ser inherentes a los fenómenos. Tomemos, por ejemplo, la interacción electrodinámica entre un imán y un conductor. Aquí los fenómenos observables dependen sólo del movimiento relativo del conductor y el imán, mientras que la visión habitual traza una nítida distinción entre los dos casos, en donde o bien uno u otro de los dos cuerpos está en movimiento. Pues, en efecto, si el imán está en movimiento y el conductor está en reposo, en la vecindad del imán aparece un campo eléctrico con una energía definida que produce una corriente dondequiera que estén localizados elementos del conductor. Pero si el imán está en reposo mientras que el conductor está en movimiento, no hay ningún campo eléctrico en la vecindad del imán, sino más bien una fuerza electromotriz en el conductor a la que no corresponde ninguna energía per se, pero que, suponiendo una igualdad del movimiento relativo en los dos casos, da lugar a corrientes eléctricas de la misma magnitud y el mismo curso que las producidas por las fuerzas eléctricas en el primer caso.
Ejemplos de este tipo, junto con los infructuosos intentos de detectar un movimiento de la Tierra con relación al «medio lumínico», llevan a la conjetura de que ni los fenómenos de la mecánica, ni tampoco los de la electrodinámica tienen propiedades que correspondan al concepto de reposo absoluto. Más bien, las mismas leyes de la electrodinámica y la óptica serán válidas para todos los sistemas de coordenadas en los que rigen las ecuaciones de la mecánica, como ya se ha demostrado para cantidades de primer orden. Elevaremos esta conjetura (cuyo contenido será denominado en adelante «el principio de relatividad») al estatus de un postulado e introduciremos también otro postulado, que es sólo aparentemente incompatible con él, a saber, que la luz se propaga siempre en el espacio vacío con una velocidad definida _V_ que es independiente del estado de movimiento del cuerpo emisor. Estos dos postulados bastan para conseguir una electrodinámica de cuerpos en movimiento simple y consistente basada en la teoría de Maxwell para cuerpos en reposo. La introducción de un «éter lumínico» se mostrará superflua, puesto que la idea que se va a desarrollar aquí no requerirá un «espacio en reposo absoluto» dotado de propiedades especiales, ni asigna un vector velocidad a un punto del espacio vacío donde están teniendo lugar procesos electromagnéticos.
Como toda la electrodinámica, la teoría que va a desarrollarse aquí está basada en la cinemática de un cuerpo rígido, puesto que las afirmaciones de una teoría semejante tienen que ver con las relaciones entre cuerpos rígidos (sistemas de coordenadas), relojes y procesos electromagnéticos. Una consideración insuficiente de esta circunstancia está en la raíz de las dificultades con las que debe enfrentarse actualmente la electrodinámica de los cuerpos en movimiento.
#### A. PARTE CINEMÁTICA
1. DEFINICIÓN DE SIMULTANEIDAD
Consideremos un sistema de coordenadas en el que son válidas las ecuaciones mecánicas de Newton. Para distinguir nominalmente dicho sistema de aquellos que van a introducirse más tarde, y para hacer esta presentación más precisa, le llamaremos «sistema de reposo».
Si una partícula está en reposo con respecto a este sistema de coordenadas, su posición relativa al último puede determinarse por medio de varas de medir rígidas utilizando los métodos de la geometría euclidiana y expresarse en coordenadas cartesianas.
Si queremos describir el _movimiento_ de una partícula, damos los valores de sus coordenadas como funciones del tiempo. Sin embargo, debemos tener en cuenta que una descripción matemática de este tipo sólo tiene sentido físico si tenemos ya claro lo que entendemos aquí por «tiempo». Debemos tener en cuenta que todos nuestros juicios que implican al tiempo son siempre juicios sobre _sucesos simultáneos_. Si, por ejemplo, yo digo que «El tren llega aquí a las 7 en punto», eso significa, más o menos, «La manecilla pequeña de mi reloj apuntando a las 7 y la llegada del tren son sucesos simultáneos».
Podría parecer que todas las dificultades implicadas en la definición de «tiempo» podrían superarse si sustituyo «posición de la manecilla pequeña de mi reloj» por «tiempo». Semejante definición es suficiente si va a definirse un tiempo exclusivamente para el lugar en el que está localizado el reloj; pero la definición ya no es satisfactoria cuando tienen que enlazarse temporalmente series de sucesos que ocurren en localizaciones diferentes, o —lo que es equivalente— cuando hay que evaluar temporalmente sucesos que ocurren en lugares remotos del reloj.
Por supuesto, podríamos contentarnos con evaluar el tiempo de los sucesos estacionando en el origen de las coordenadas a un observador con un reloj; este observador asigna a cada suceso a evaluar la posición correspondiente de las manecillas del reloj cuando a través del espacio vacío le llega una señal luminosa procedente de dicho suceso. Sin embargo, sabemos por experiencia que una coordinación semejante tiene el inconveniente de que no es independiente de la posición del observador con el reloj. Llegamos a un arreglo más práctico mediante el siguiente argumento.
Si existe un reloj en el punto _A_ en el espacio, entonces un observador situado en _A_ puede evaluar el tiempo de los sucesos en la inmediata vecindad de _A_ hallando las posiciones de las manecillas del reloj que son simultáneas con dichos sucesos. Si existe otro reloj en el punto _B_ que se asemeja en todos los aspectos al que hay en _A_ , entonces el tiempo de los sucesos en la inmediata vecindad de _B_ puede ser evaluado por un observador en _B_. Pero no es posible comparar el tiempo de un suceso en _A_ con uno en _B_ sin una estipulación adicional. Hasta aquí hemos definido sólo un «tiempo- _A_ » y un «tiempo- _B_ », pero no un «tiempo» común para _A_ y _B_. El último puede ahora determinarse estableciendo por definición que el «tiempo» requerido por la luz para viajar de _A_ a _B_ es igual al «tiempo» que requiere para viajar de _B_ a _A_. En efecto, supongamos que un rayo de luz parte de _A_ hacia _B_ en un «tiempo- _A_ » _t_ A, es reflejado desde _B_ hacia _A_ en un «tiempo- _B_ » _t B_, y llega de nuevo a _A_ en un «tiempo- _A_ » _t_ ' _A_. Los dos relojes son síncronos por definición si
t_B − t_A = t′_A − t_B.
Suponemos que esta definición de sincronismo está libre de contradicciones y que es aplicable a un número arbitrario de puntos, de modo que, en general, son válidas las relaciones siguientes:
1. Si el reloj en _B_ marcha de forma síncrona con el reloj en _A_ , el reloj en _A_ marcha de forma síncrona con el reloj en _B_.
2. Si el reloj en _A_ marcha de forma síncrona con el reloj en _B_ así como con el reloj en _C_ , entonces los relojes en _B_ y _C_ también marchan de forma síncrona uno con relación al otro.
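El criterio de sincronización descrito arriba puede ilustrarse con un cálculo elemental, ajeno al original, para dos relojes en reposo separados por una distancia d (con V = 1; los nombres son hipotéticos):

```python
# Esbozo ilustrativo (ajeno al original): dos relojes en reposo separados
# una distancia d intercambian una señal luminosa (velocidad de la luz V = 1).
def tiempos_ida_vuelta(t_A, d, V=1.0):
    t_B = t_A + d / V       # instante de llegada a B
    t_A2 = t_B + d / V      # instante de regreso a A
    return t_B, t_A2

t_B, t_A2 = tiempos_ida_vuelta(t_A=3.0, d=5.0)
# Criterio de Einstein: t_B - t_A = t'_A - t_B
sincronos = (t_B - 3.0) == (t_A2 - t_B)
```

Para relojes en reposo relativo, el tiempo de ida coincide con el de vuelta y el criterio se cumple automáticamente; la asimetría aparecerá más adelante para relojes en movimiento.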
Por medio de ciertos experimentos (mentales) físicos hemos establecido lo que debe entenderse por relojes síncronos en reposo relativo y situados en diferentes lugares, y con ello hemos llegado obviamente a definiciones de «síncrono» y «tiempo». El «tiempo» de un suceso es la lectura, simultánea con el suceso, de un reloj en reposo situado en el lugar del suceso, reloj que, para todas las determinaciones temporales, marcha de forma síncrona con un determinado reloj especificado en reposo.
Basados en la experiencia, estipulamos además que la cantidad

2·AB / (t′_A − t_A) = V

es una constante universal (la velocidad de la luz en el espacio vacío).
Es esencial que hayamos definido el tiempo por medio de relojes en reposo en el sistema de reposo; puesto que el tiempo recién definido está relacionado con el sistema en reposo, le llamaremos «el tiempo del sistema de reposo».
2. SOBRE LA RELATIVIDAD DE LONGITUDES Y TIEMPOS
Las consideraciones siguientes están basadas en el principio de relatividad y el principio de constancia de la velocidad de la luz. Definimos estos dos principios como sigue:
1. Si dos sistemas de coordenadas están en movimiento relativo de traslación paralela y uniforme, las leyes de acuerdo con las cuales cambian los estados de un sistema físico no dependen de con cuál de los dos sistemas están relacionados dichos cambios.
2. Todo rayo luminoso se mueve en el sistema de coordenadas «de reposo» con una velocidad fija _V_ , independientemente de si este rayo luminoso es emitido por un cuerpo en reposo o en movimiento. Por lo tanto,

velocidad = recorrido de la luz / intervalo de tiempo,
donde «intervalo de tiempo» debería entenderse en el sentido de la definición dada en la sección 1.
Tomemos una vara rígida en reposo; sea _l_ su longitud, medida por una vara de medir que está también en reposo. Imaginemos ahora que se coloca el eje de la vara a lo largo del eje _X_ del sistema de coordenadas en reposo, y que la vara es puesta entonces en movimiento de traslación paralela uniforme (con velocidad ν) a lo largo del eje _X_ en la dirección de las _x_ crecientes. Preguntamos ahora por la longitud de la vara _en movimiento_ , que imaginamos debe establecerse por las dos operaciones siguientes:
a) El observador se mueve junto con la mencionada vara de medir y la vara rígida susceptible de ser medida, y mide la longitud de esta vara tendiendo la vara de medir de la misma manera que si la vara susceptible de ser medida, el observador y la vara de medir estuvieran en reposo.
b) Utilizando relojes en reposo y síncronos en el sistema de reposo como se esbozó en la sección 1, el observador determina en qué puntos del sistema de reposo están situados el principio y el final de la vara susceptible de ser medida en algún tiempo _t_ dado. La distancia entre estos dos puntos, medida con la vara de medir utilizada antes —en este caso en reposo—, es también una longitud que podemos llamar la «longitud de la vara».
De acuerdo con el principio de relatividad, la longitud determinada por la operación ( _a_ ), que llamaremos «la longitud de la vara en el sistema en movimiento», debe ser igual a la longitud _l_ de la vara en reposo.
La longitud determinada utilizando la operación ( _b_ ), que llamaremos «la longitud de la vara (en movimiento) en el sistema de reposo», será determinada sobre la base de nuestros dos principios, y encontraremos que difiere de _l_.
La cinemática actual supone implícitamente que las longitudes determinadas por las dos operaciones anteriores son exactamente iguales entre sí, o, en otras palabras, que en el tiempo _t_ un cuerpo rígido en movimiento es totalmente reemplazable, en cuanto a su geometría, por el _mismo_ cuerpo cuando está _en reposo_ en una posición concreta.
Además, imaginamos los dos extremos ( _A_ y _B_ ) de la vara provistos de relojes que son síncronos con los relojes del sistema de reposo, i. e., cuyas lecturas corresponden siempre al «tiempo del sistema de reposo» en las localizaciones que los relojes resultan ocupar; por lo tanto, estos relojes son «síncronos en el sistema de reposo».
Imaginemos además que cada reloj tiene un observador que se mueve con él, y que estos observadores aplican a los dos relojes el criterio para el ritmo síncrono de dos relojes formulado en la sección 1. Sea un rayo de luz que parte de _A_ en el tiempo t_A, es reflejado en _B_ en el tiempo t_B, y llega de nuevo a _A_ en el tiempo t′_A. Teniendo en cuenta el principio de la constancia de la velocidad de la luz, encontramos que

t_B − t_A = r_AB / (V − ν)

y

t′_A − t_B = r_AB / (V + ν),

donde r_AB denota la longitud de la vara en movimiento, medida en el sistema de reposo. Los observadores que se mueven conjuntamente con la vara encontrarían así que los dos relojes no marchan de forma síncrona, mientras que los observadores en el sistema de reposo declararían que marchan de forma síncrona.
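Como comprobación numérica añadida, ajena al original (V = 1; los nombres son hipotéticos), los dos intervalos de la señal luminosa a lo largo de la vara en movimiento difieren en cuanto ν ≠ 0:

```python
# Esbozo ilustrativo (ajeno al original): señal luminosa a lo largo de una
# vara que se mueve con velocidad v, vista desde el sistema de reposo (V = 1).
def tiempos_en_vara(r_AB, v, V=1.0):
    ida = r_AB / (V - v)      # t_B - t_A: la luz persigue al extremo B
    vuelta = r_AB / (V + v)   # t'_A - t_B: el extremo A va al encuentro
    return ida, vuelta

ida, vuelta = tiempos_en_vara(r_AB=1.0, v=0.5)
asimetria = ida - vuelta      # > 0 en cuanto v != 0
```

Esa diferencia entre ida y vuelta es exactamente lo que los observadores que viajan con la vara interpretan como falta de sincronismo de los relojes de sus extremos.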
Vemos así que no podemos atribuir significado absoluto al concepto de simultaneidad; en su lugar, dos sucesos que son simultáneos cuando son observados desde algún sistema de coordenadas concreto ya no pueden considerarse simultáneos cuando son observados desde un sistema que está en movimiento relativo a dicho sistema.
3. TEORÍA DE LAS TRANSFORMACIONES DE COORDENADAS Y TIEMPO DESDE EL SISTEMA DE REPOSO A UN SISTEMA EN MOVIMIENTO DE TRASLACIÓN UNIFORME RELATIVO A AQUÉL
Sean dos sistemas de coordenadas en el espacio en «reposo», i.e., dos sistemas de tres líneas rectas materiales rígidas mutuamente perpendiculares con origen en un punto. Sean coincidentes los ejes _X_ de los dos sistemas, y sean sus ejes _Y_ y _Z_ respectivamente paralelos. Cada sistema estará provisto de una vara de medir rígida y un número de relojes, y sean exactamente iguales las dos varas de medir y todos los relojes de los dos sistemas.
Ahora, pongamos el origen de uno de los dos sistemas, digamos _k_ , en un estado de movimiento con velocidad (constante) ν en la dirección de las _x_ crecientes del otro sistema ( _K_ ), que permanece en reposo; e impartamos esta nueva velocidad a los ejes coordenados de _k_ , su correspondiente vara de medir y sus relojes. A cada tiempo _t_ del sistema de reposo _K_ corresponde una localización definida de los ejes del sistema en movimiento. Por razones de simetría tenemos justificación para suponer que el movimiento de _k_ puede ser tal que en el tiempo _t_ (« _t»_ siempre denota un tiempo del sistema de reposo) los ejes del sistema en movimiento son paralelos a los ejes del sistema de reposo.
Imaginemos ahora que el espacio se mide tanto desde el sistema de reposo _K_ , utilizando la vara de medir en reposo, como desde el sistema _k_ , utilizando la vara de medir en movimiento junto con él, y que de este modo se obtienen las coordenadas _x, y, z_ y ξ, η, ζ, respectivamente. Además, por medio de los relojes en reposo en el sistema de reposo, y utilizando rayos luminosos como se describe en la sección 1, determinamos el tiempo _t_ del sistema de reposo para todos los puntos donde hay relojes. De manera similar, aplicando de nuevo el método de señales luminosas descrito en la sección 1, determinamos el tiempo τ del sistema en movimiento para todos los puntos de este sistema en movimiento en los que hay relojes en reposo relativo a este sistema.
A cada conjunto de valores _x, y, z, t_ que determina por completo el lugar y tiempo de un suceso en el sistema de reposo le corresponde un conjunto de valores ξ, η, ζ, τ que fija el suceso relativo al sistema _k_ , y el problema que hay que resolver ahora es encontrar el sistema de ecuaciones que conecta dichas cantidades.
En primer lugar, es evidente que estas ecuaciones deben ser lineales debido a las propiedades de homogeneidad que atribuimos al espacio y el tiempo.
Si hacemos _x_ '= _x_ – _vt_ , entonces es evidente que a un punto en reposo en el sistema _k_ le pertenece un conjunto de valores _x_ ', _y, z_ definido e independiente del tiempo. Determinamos primero τ como una función de _x_ ', _y, z_ y _t_. Para esto, debemos expresar en ecuaciones que τ es de hecho el agregado de lecturas de relojes en reposo en el sistema _k_ , sincronizados de acuerdo con la regla dada en la sección 1.
Supongamos que en el instante τ₀ se envía un rayo luminoso a lo largo del eje _X_ desde el origen del sistema _k_ hasta _x_ ′, y que este rayo es reflejado en el instante τ₁ desde allí hacia el origen, adonde llega en el instante τ₂: entonces debemos tener

½ (τ₀ + τ₂) = τ₁,

o, incluyendo los argumentos de la función τ y aplicando el principio de la constancia de la velocidad de la luz en el sistema de reposo,

½ [τ(0, 0, 0, t) + τ(0, 0, 0, t + x′/(V − ν) + x′/(V + ν))] = τ(x′, 0, 0, t + x′/(V − ν)).

De esto obtenemos, haciendo x′ infinitesimalmente pequeño,

½ (1/(V − ν) + 1/(V + ν)) ∂τ/∂t = ∂τ/∂x′ + (1/(V − ν)) ∂τ/∂t,

o

∂τ/∂x′ + (ν/(V² − ν²)) ∂τ/∂t = 0.
Habría que señalar que, en lugar del origen de coordenadas, podríamos haber escogido cualquier otro punto como origen del rayo luminoso y, por consiguiente, la ecuación recién obtenida es válida para todos los valores de _x_ ', _y_ , _z_.
Un razonamiento análogo —aplicado a los ejes _Y_ y _Z_ — da, recordando que la luz se propaga siempre a lo largo de estos ejes con la velocidad √(V² − ν²) cuando se observa desde el sistema de reposo,

∂τ/∂y = 0, ∂τ/∂z = 0.
Estas ecuaciones dan, puesto que τ es una función _lineal_ ,

τ = a (t − (ν/(V² − ν²)) x′),

donde _a_ es una función ϕ( _ν_ ) todavía desconocida, y donde suponemos por brevedad que en el origen de _k_ tenemos _t_ = 0 cuando τ = 0.
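Como comprobación añadida, ajena al original, la derivación directa muestra que esta función lineal satisface la ecuación en derivadas parciales ∂τ/∂x′ + (ν/(V² − ν²)) ∂τ/∂t = 0 obtenida antes:

```latex
\tau = a\left(t - \frac{v}{V^{2}-v^{2}}\,x'\right)
\quad\Longrightarrow\quad
\frac{\partial\tau}{\partial x'} + \frac{v}{V^{2}-v^{2}}\,\frac{\partial\tau}{\partial t}
= -\frac{a\,v}{V^{2}-v^{2}} + \frac{v}{V^{2}-v^{2}}\,a = 0 .
```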
Utilizando este resultado, podemos determinar fácilmente las cantidades ξ, η, ζ si expresamos en ecuaciones que (como exige el principio de la constancia de la velocidad de la luz en unión con el principio de relatividad) la luz se propaga también con velocidad _V_ cuando se mide en el sistema en movimiento. Para un rayo luminoso emitido en el instante τ = 0 en la dirección de las ξ crecientes, tenemos

ξ = Vτ,

o

ξ = aV (t − (ν/(V² − ν²)) x′).
Pero, medido en el sistema de reposo, el rayo luminoso se propaga con velocidad _V –_ _ν_ relativa al origen de _k_ , de modo que

x′/(V − ν) = t.

Sustituyendo este valor de _t_ en la ecuación para ξ, obtenemos

ξ = a (V²/(V² − ν²)) x′.
Análogamente, considerando rayos luminosos que se mueven a lo largo de los otros dos ejes, obtenemos

η = Vτ = aV (t − (ν/(V² − ν²)) x′),

donde

y/√(V² − ν²) = t, x′ = 0;

por lo tanto

η = a (V/√(V² − ν²)) y

y

ζ = a (V/√(V² − ν²)) z.
Si sustituimos el valor para _x_ ′, obtenemos

τ = ϕ(ν) β (t − νx/V²),
ξ = ϕ(ν) β (x − νt),
η = ϕ(ν) y,
ζ = ϕ(ν) z,

donde

β = 1/√(1 − (ν/V)²),

y _ϕ_ es una función de _v_ desconocida por el momento. Si no se hace ninguna hipótesis con respecto a la posición inicial del sistema en movimiento y el punto cero de τ, entonces debe sumarse una constante a los segundos miembros de estas ecuaciones.
Ahora tenemos que demostrar que, medido en el sistema en movimiento, todo rayo luminoso se propaga con la velocidad _V_ si así lo hace, como hemos supuesto, en el sistema de reposo; pues no hemos demostrado todavía que el principio de la constancia de la velocidad de la luz es compatible con el principio de relatividad.
Supongamos que en el tiempo _t_ = τ = 0 se emite una onda esférica desde el origen de coordenadas, que en dicho instante es común a ambos sistemas, y que esta onda se propaga en el sistema _K_ con velocidad _V_. Por lo tanto, si ( _x, y, z_ ) es un punto alcanzado por esta onda, tenemos

x² + y² + z² = V²t².

Transformamos esta ecuación utilizando nuestras ecuaciones de transformación y, tras un sencillo cálculo, obtenemos

ξ² + η² + ζ² = V²τ².
Así pues, nuestra onda es también una onda esférica con velocidad de propagación _V_ cuando se observa en el sistema en movimiento. Esto demuestra que nuestros dos principios fundamentales son compatibles.
Las ecuaciones de transformación que hemos obtenido contienen también una función desconocida _ϕ_ de _ν_ , que queremos determinar ahora.
Para esto introducimos un tercer sistema de coordenadas _K_ ′ que, con relación al sistema _k_ , está en movimiento de traslación paralela al eje Ξ, y tal que su origen se mueve a lo largo del eje Ξ con velocidad – _ν_. Sean coincidentes los tres orígenes de coordenadas en el tiempo _t_ = 0, y sea el tiempo _t_ ′ del sistema _K_ ′ igual a cero en _t_ = _x_ = _y_ = _z_ = 0. Denotamos por x′, y′, z′ las coordenadas medidas en el sistema _K_ ′ y, por una doble aplicación de nuestras ecuaciones de transformación, obtenemos

t′ = ϕ(−ν) β(−ν) (τ + νξ/V²) = ϕ(ν) ϕ(−ν) t,
x′ = ϕ(−ν) β(−ν) (ξ + ντ) = ϕ(ν) ϕ(−ν) x,
y′ = ϕ(−ν) η = ϕ(ν) ϕ(−ν) y,
z′ = ϕ(−ν) ζ = ϕ(ν) ϕ(−ν) z.

Puesto que las relaciones entre x′, y′, z′ y x, y, z no contienen el tiempo _t_ , los sistemas _K_ y _K_ ′ están en reposo relativo mutuo, y es evidente que la transformación de _K_ a _K_ ′ debe ser la transformación identidad. Por lo tanto,

ϕ(ν) ϕ(−ν) = 1.
Exploremos ahora el significado de _ϕ_ ( _ν_ ). Nos centraremos en la porción del eje _Y_ del sistema _k_ que se halla entre ξ = 0, η = 0, ζ = 0 y ξ = 0, η = _l_ , ζ = 0. Esta porción del eje _Y_ es una vara que, con relación al sistema _K_ , se mueve perpendicularmente a su eje con una velocidad ν y cuyos extremos tienen en _K_ las coordenadas

x₁ = νt, y₁ = l/ϕ(ν), z₁ = 0

y

x₂ = νt, y₂ = 0, z₂ = 0.

La longitud de la vara, medida en _K_ , es así l/ϕ(ν); esto nos da el significado de la función _ϕ_. Por razones de simetría, es evidente que la longitud, medida en el sistema de reposo, de una vara que se mueve perpendicularmente a su propio eje sólo puede depender de su velocidad y no de la dirección y sentido de su movimiento. Así pues, la longitud de la vara en movimiento medida en el sistema de reposo no cambia si se reemplaza _ν_ por – _ν_. De esto concluimos

l/ϕ(ν) = l/ϕ(−ν),

o

ϕ(ν) = ϕ(−ν).
De esta relación y la encontrada antes se sigue que _ϕ_ ( _ν_ ) = 1, de modo que las ecuaciones de transformación obtenidas se convierten en
donde
4. THE PHYSICAL MEANING OF THE EQUATIONS OBTAINED, CONCERNING MOVING RIGID BODIES AND MOVING CLOCKS
Consider a rigid sphere of radius _R_ at rest relative to the moving system _k_, with its center at the origin of _k_. The equation of the surface of this sphere, which moves with velocity _ν_ relative to _K_, is
ξ² + η² + ζ² = _R_²
Expressed in terms of _x_, _y_, _z_, the equation of this surface at time _t_ = 0 is
A rigid body which has a spherical shape when measured at rest thus has, when measured in motion — viewed from the rest system — the shape of an ellipsoid of revolution with axes
Thus, while the _Y_ and _Z_ dimensions of the sphere (and hence of every rigid body, whatever its shape) do not appear altered by the motion, the _X_ dimension appears contracted in the ratio 1 : √(1 – ν²/_V_²), so that the greater the value of _ν_, the greater the contraction. For _ν_ = _V_ all moving objects, viewed from the system at "rest", shrink into plane structures. For superluminal velocities our considerations lose all meaning; as we shall see from later considerations, in our theory the velocity of light physically plays the role of infinitely great velocities.
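The contraction ratio just stated can be checked numerically. The following is a minimal sketch, with function and variable names of my own choosing and _V_ normalized to 1:

```python
import math

def contraction_factor(v, V=1.0):
    """Ratio by which the X-dimension of a moving body appears
    shortened when viewed from the 'rest' system: sqrt(1 - (v/V)^2)."""
    return math.sqrt(1.0 - (v / V) ** 2)

# A sphere of radius R moving at v = 0.6 V appears, from the rest
# system, as an ellipsoid of revolution whose X semi-axis is R * 0.8,
# with the Y and Z semi-axes unchanged.
print(contraction_factor(0.6))   # ≈ 0.8
print(contraction_factor(0.0))   # → 1.0 (no motion, no contraction)
```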
It is clear that the same results apply to bodies at rest in the "rest" system when they are viewed from a uniformly moving system.
Further, imagine one of the clocks that can indicate the time _t_ when at rest relative to the rest system and the time τ when at rest relative to the moving system, placed at the origin of _k_ and set so that it indicates the time τ. What is the rate of this clock when viewed from the rest system?
The quantities _x_, _t_, and τ referring to the position of this clock obviously satisfy the equations:
and
_x_ = _νt_.
Thus we have
from which it follows that the reading of the clock, viewed from the rest system, falls behind each second by (1 – √(1 – ν²/_V_²)) seconds, or, neglecting quantities of fourth and higher order, by ½(ν/_V_)² seconds.
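A quick numerical check of the lag just derived; a sketch with names of my own, _V_ normalized to 1:

```python
import math

def lag_exact(v, V=1.0):
    """Seconds lost per second by the moving clock, viewed from the
    rest system: 1 - sqrt(1 - (v/V)^2)."""
    return 1.0 - math.sqrt(1.0 - (v / V) ** 2)

def lag_second_order(v, V=1.0):
    """The approximation (1/2)(v/V)^2 used in the text."""
    return 0.5 * (v / V) ** 2

# For small v/V the two agree up to quantities of fourth order:
v = 0.01
print(lag_exact(v) - lag_second_order(v))   # on the order of (v/V)^4
```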
This yields the following peculiar consequence: if at the points _A_ and _B_ of _K_ there are clocks at rest which, viewed from the rest system, run synchronously, and if the clock at _A_ is transported to _B_ along the line connecting them with velocity _ν_, then upon its arrival at _B_ the two clocks no longer run synchronously; instead, the clock transported from _A_ to _B_ lags ½_tν_²/_V_² seconds (neglecting quantities of fourth and higher order) behind the clock that has been at _B_ from the start, where _t_ is the time the clock needs to travel from _A_ to _B_.
We see at once that this result holds even when the clock moves from _A_ to _B_ along any arbitrary polygonal line, and indeed even when the points _A_ and _B_ coincide.
If we assume that the result proved for a polygonal line also holds for a continuously curved line, we arrive at the following result: if one of two synchronously running clocks at _A_ is carried along a closed curve with constant velocity until it has returned to _A_, which takes, say, _t_ seconds, then on its arrival at _A_ it lags ½_t_(_ν_/_V_)² seconds behind the clock that has not moved. From this we conclude that a balance-wheel clock located at the equator must, under otherwise identical conditions, run slightly more slowly than an absolutely identical clock located at one of the Earth's poles.
5. THE ADDITION THEOREM OF VELOCITIES
In the system _k_, moving with velocity _ν_ along the _X_ axis of the system _K_, let a point move according to the equations
where _w_ξ and _w_η denote constants.
We seek the motion of the point relative to the system _K_. Introducing the quantities _x_, _y_, _z_, _t_ into the point's equations of motion by means of the transformation equations derived in section 3, we obtain
Thus, according to our theory, the vector addition of velocities is valid only to a first approximation. Let
and
α is then to be regarded as the angle between the velocities _ν_ and _w_. After a simple calculation we obtain
It is worth noting that _ν_ and _w_ enter the expression for the resultant velocity in a symmetrical way. If _w_ also has the direction of the _X_ axis (the χ axis), we obtain
It follows from this equation that the composition of two velocities that are smaller than _V_ always results in a velocity smaller than _V_. For if we set _ν_ = _V_ – κ and _w_ = _V_ – λ, where κ and λ are positive and smaller than _V_, then
It also follows that the velocity of light _V_ cannot be altered by composition with a "subluminal velocity". For in this case we obtain
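The displayed formulas of this section are lost here; the parallel-velocity composition law they express is, in the standard text of the paper, _U_ = (_ν_ + _w_)/(1 + _νw_/_V_²). A small numerical sketch (names mine, _V_ = 1) of the two claims just made:

```python
def compose(v, w, V=1.0):
    """Composition of two parallel velocities: U = (v + w) / (1 + v*w/V^2)."""
    return (v + w) / (1.0 + v * w / V ** 2)

# Two velocities smaller than V always compose to a velocity smaller than V:
print(compose(0.9, 0.9))   # ≈ 0.9945, still below 1
# Composing the velocity of light with a subluminal velocity leaves it unchanged:
print(compose(1.0, 0.5))   # → 1.0
```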
In the case where _ν_ and _w_ have the same direction, the formula for _U_ could also have been obtained by composing two transformations in accordance with section 3. If, in addition to the systems _K_ and _k_ of section 3, we introduce a third coordinate system _k_', moving parallel to _k_ and whose origin moves along the χ axis with velocity _w_, we obtain equations between the quantities _x_, _y_, _z_, _t_ and the corresponding quantities of _k_' that differ from those found in section 3 only in that "_ν_" is replaced by the quantity
from which we see that such parallel transformations form a group — as indeed they must.
We have now derived the required laws of the kinematics corresponding to our two principles, and we proceed to their application to electrodynamics.
#### B. ELECTRODYNAMIC PART
6. TRANSFORMATION OF THE MAXWELL-HERTZ EQUATIONS FOR EMPTY SPACE. ON THE NATURE OF THE ELECTROMOTIVE FORCES ARISING FROM MOTION IN A MAGNETIC FIELD
Let the Maxwell-Hertz equations for empty space hold for the rest system _K_, so that we have
where (_X_, _Y_, _Z_) denotes the vector of the electric force and (_L_, _M_, _N_) that of the magnetic force.
If we apply to these equations the transformations derived in section 3, referring the electromagnetic processes to the coordinate system introduced there, moving with velocity _ν_, we obtain the following equations:
where
The principle of relativity requires that the Maxwell-Hertz equations for empty space also hold in the system _k_ if they hold in the system _K_, i.e., that the vectors of the electric and magnetic forces — (_X_', _Y_', _Z_') and (_L_', _M_', _N_') — of the moving system _k_, which are defined in that system by their ponderomotive effects on electric and magnetic charges, respectively, satisfy the equations
Obviously, the two systems of equations found for the system _k_ must express exactly the same thing, since both systems of equations are equivalent to the Maxwell-Hertz equations for the system _K_. Moreover, since the equations of the two systems agree except for the symbols representing the vectors, it follows that the functions occurring at corresponding places in the systems of equations must agree up to a factor ψ(_ν_), common to all the functions of one of the systems of equations and independent of ξ, η, ζ, and τ, though possibly dependent on _ν_. Thus we have the relations:
If we now invert this system of equations, first by solving the equations just obtained, and second by applying to the equations the inverse transformation (from _k_ to _K_), which is characterized by the velocity –_ν_, we obtain, taking into account that the two systems of equations so obtained must be identical,
ψ(_ν_) · ψ(–_ν_) = 1.
Further, for reasons of symmetry,
ψ(_ν_) = ψ(–_ν_);
so that
ψ(_ν_) = 1,
and our equations take the form
To interpret these equations we note the following: imagine a point charge of electricity whose magnitude, measured in the rest system, is "one", i.e., which, when at rest in the rest system, exerts a force of 1 dyne on an equal charge at a distance of 1 cm. By the principle of relativity this electric charge is also of magnitude "one" when measured in the moving system. If this electric charge is at rest relative to the rest system, then by definition the vector (_X_, _Y_, _Z_) equals the force acting on it. If, on the other hand, this charge is at rest relative to the moving system (at least at the relevant instant), then the force acting on it, measured in the moving system, equals the vector (_X_', _Y_', _Z_'). Consequently, the first three of the equations above can be expressed in words in the following two ways:
1. If a unit point charge of electricity moves in an electromagnetic field, there acts upon it, in addition to the electric force, an "electromotive force" which, neglecting terms multiplied by the second and higher powers of _ν_/_V_, equals the vector product of the velocity of the charge and the magnetic force, divided by the velocity of light. (Old manner of expression.)
2. If a unit point charge of electricity moves in an electromagnetic field, the force acting upon it equals the electric force present at the location of the unit charge, obtained by transforming the field to a coordinate system at rest relative to the unit charge. (New manner of expression.)
Analogous remarks hold for the "magnetomotive forces". We see that in the theory developed here the electromotive force plays only the role of an auxiliary concept, which owes its introduction to the circumstance that the electric and magnetic forces have no existence independent of the state of motion of the coordinate system.
It is further clear that the asymmetry mentioned in the introduction in the treatment of the currents produced by the relative motion of a magnet and a conductor disappears. Moreover, questions about the "seat" of electrodynamic electromotive forces (unipolar machines) become pointless.
7. THEORY OF DOPPLER'S PRINCIPLE AND OF ABERRATION
Let there be a source of electromagnetic waves in the system _K_, very far from the coordinate origin; in a region of space containing the origin, these waves are represented with sufficient accuracy by the equations
Here (_X_₀, _Y_₀, _Z_₀) and (_L_₀, _M_₀, _N_₀) are the vectors determining the amplitude of the wave train, and _a_, _b_, _c_ are the direction cosines of the wave normal.
We wish to know the character of these waves when they are examined by an observer at rest in the moving system. Applying the transformation equations for the electric and magnetic forces found in section 6 and those for the coordinates and the time found in section 3, we obtain at once:
where we have set
From the equation for ω' it follows that if an observer moves with velocity _ν_ relative to an infinitely distant light source of frequency ν, such that the connecting line "light source – observer" makes an angle _ϕ_ with the observer's velocity, referred to a coordinate system at rest relative to the light source, then ν', the frequency of the light perceived by the observer, is given by the equation
This is Doppler's principle for arbitrary velocities. For _ϕ_ = 0 the equation takes the simple form
We see that, contrary to the customary conception, when _ν_ = –_V_, then ν' = ∞.
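The displayed Doppler formula is lost here; in the standard text of the paper it reads ν' = ν(1 – cos _ϕ_ · _ν_/_V_)/√(1 – _ν_²/_V_²). A numerical sketch (names mine, _V_ = 1) confirming the _ϕ_ = 0 simplification ν' = ν√((1 – _ν_/_V_)/(1 + _ν_/_V_)):

```python
import math

def doppler(nu, v, phi, V=1.0):
    """nu' = nu * (1 - cos(phi) * v/V) / sqrt(1 - (v/V)^2)."""
    return nu * (1.0 - math.cos(phi) * v / V) / math.sqrt(1.0 - (v / V) ** 2)

v = 0.5
print(doppler(1.0, v, 0.0))                # general formula at phi = 0
print(math.sqrt((1.0 - v) / (1.0 + v)))    # simplified phi = 0 form: same value
```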
If _ϕ_' denotes the angle between the wave normal (the direction of the ray) in the moving system and the connecting line "light source – observer", the equation for _a_' takes the form
This equation expresses the law of aberration in its most general form. If _ϕ_ = π/2, the equation takes the simple form
We still have to find the amplitude of the waves as it appears in the moving system. If _A_ and _A_' denote the amplitude of the electric or magnetic force in the rest system and in the moving system, respectively, we obtain
which for _ϕ_ = 0 reduces to the simpler form:
It follows from these results that for an observer approaching a light source with velocity _V_, this source would have to appear infinitely intense.
8. TRANSFORMATION OF THE ENERGY OF LIGHT RAYS. THEORY OF THE RADIATION PRESSURE EXERTED ON PERFECT MIRRORS
Since _A_²/8π equals the light energy per unit volume, by the principle of relativity we must regard _A_'²/8π as the light energy in the moving system. Hence _A_'²/_A_² would be the ratio of the energy of a given light complex "measured in motion" to that "measured at rest" if the volume of a light complex were the same measured in _K_ and in _k_. However, this is not the case. If _a_, _b_, _c_ are the direction cosines of the normal of the light wave in the rest system, then no energy passes through the surface elements of the spherical surface
moving with the velocity of light; we may therefore say that this surface permanently encloses the same light complex. Let us examine the quantity of energy enclosed by this surface as viewed from the system _k_, i.e., the energy of the light complex relative to the system _k_.
Viewed in the moving system, the spherical surface is an ellipsoidal surface whose equation at the instant τ = 0 is
If _S_ denotes the volume of the sphere and _S_' that of the ellipsoid, a simple calculation shows that
If _E_ denotes the light energy enclosed by this surface as measured in the rest system and _E_' that measured in the moving system, we obtain
which for _ϕ_ = 0 simplifies to
It is noteworthy that the energy and the frequency of a light complex vary with the observer's state of motion according to the same law.
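The displayed expressions are missing here; a reconstruction of the law just asserted, combining the energy ratio of this section with the Doppler formula of section 7 (notation as in the paper):

```latex
\frac{E'}{E} \;=\; \frac{1 - \cos\varphi \cdot v/V}{\sqrt{1 - v^{2}/V^{2}}} \;=\; \frac{\nu'}{\nu}
```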
Let the coordinate plane ξ = 0 be a perfectly reflecting surface at which the plane waves considered in section 7 are reflected. Let us determine the light pressure exerted on the reflecting surface, and the direction, frequency, and intensity of the light after reflection.
Let the incident light be defined by the quantities _A_, cos _ϕ_, and ν (referred to the system _K_). Viewed from _k_, the corresponding quantities are
Referring the processes to the system _k_, we obtain for the reflected light
Finally, transforming back to the rest system _K_, we obtain for the reflected light
The energy (measured in the rest system) striking a unit surface of the mirror per unit time is obviously _A_²(_V_ cos _ϕ_ – _ν_)/8π. The energy leaving a unit surface of the mirror per unit time is _A_'''²(–_V_ cos _ϕ_''' + _ν_)/8π. By the principle of conservation of energy, the difference of these two expressions is the work done per unit time by the light pressure. Setting this work equal to _P_·_ν_, where _P_ is the light pressure, we obtain
To a first approximation we obtain, in agreement with experiment and with other theories,
All problems of the optics of moving bodies can be solved by the method used here. The essential point is that the electric and magnetic fields of the light influenced by a moving body are transformed to a coordinate system at rest relative to that body. In this way every problem of the optics of moving bodies is reduced to a series of problems of the optics of bodies at rest.
9. TRANSFORMATION OF THE MAXWELL-HERTZ EQUATIONS WHEN CONVECTION CURRENTS ARE TAKEN INTO ACCOUNT
We start from the equations
where
denotes 4π times the charge density, and (_u_x, _u_y, _u_z) the velocity vector of the charge. If the electric charges are conceived as permanently bound to small rigid bodies (ions, electrons), these equations constitute the electromagnetic foundation of Lorentz's electrodynamics and optics of moving bodies.
If, using the transformation equations given in sections 3 and 6, we transform these equations, assumed valid in the system _K_, to the system _k_, we obtain the equations
where
and
Since — as follows from the addition theorem of velocities (section 5) — the vector (_u_ξ, _u_η, _u_ζ) is in fact the velocity of the electric charges measured in the system _k_, we have thus shown that, on the basis of our kinematic principles, the electrodynamic foundation of Lorentz's theory of the electrodynamics of moving bodies is in agreement with the principle of relativity.
Let me briefly add that the following important proposition can easily be deduced from the equations we have obtained: if an electrically charged body moves arbitrarily in space without its charge changing, as observed from a coordinate system moving with the body, then its charge also remains constant as observed from the "rest" system _K_.
10. DYNAMICS OF THE (SLOWLY ACCELERATED) ELECTRON
Let a particle with electric charge _e_ (henceforth called an "electron") move in an electromagnetic field; about its law of motion we assume only the following:
If the electron is at rest at a particular instant, its motion during the next instant of time proceeds according to the equations
where _x_, _y_, _z_ denote the coordinates of the electron and _µ_ its mass, provided the electron moves slowly.
Further, let _ν_ be the velocity of the electron at a certain instant. Let us determine the electron's law of motion during the immediately following instant of time.
Without loss of generality we may and will assume that the electron is at the coordinate origin and moves with velocity _ν_ along the _X_ axis of the system _K_ at the moment of interest. It is then clear that at the given moment (_t_ = 0) the electron is at rest relative to a coordinate system _k_ moving with constant velocity _ν_ parallel to the _X_ axis.
From the above assumption, combined with the principle of relativity, it is clear that, viewed from the system _k_, the electron moves during the immediately following period of time (for small values of _t_) according to the equations
where all the symbols ξ, η, ζ, τ, _X_', _Y_', _Z_' refer to the system _k_. If we further stipulate that for _t_ = _x_ = _y_ = _z_ = 0 we also have τ = ξ = η = ζ = 0, then the transformation equations of sections 3 and 6 apply, so that we obtain
With the help of these equations we transform the above equations of motion from the system _k_ to the system _K_, obtaining
(A)
Following the usual approach, let us now determine the "longitudinal" and "transverse" mass of the moving electron. We write equations (A) in the form
and note first that ∈_X_', ∈_Y_', ∈_Z_' are the components of the ponderomotive force acting on the electron, viewed in a moving system which at that instant moves with the same velocity as the electron. (This force could be measured, for example, with a spring balance at rest in the latter system.) If we simply call this force "the force acting on the electron", and maintain the equation
Mass × Acceleration = Force,
stipulating further that the accelerations be measured in the rest system _K_, then the above equations lead to the definitions:
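The defining formulas are missing here; in the standard text of the paper, the definitions arrived at are:

```latex
\text{Longitudinal mass} \;=\; \frac{\mu}{\left(\sqrt{1 - v^{2}/V^{2}}\right)^{3}},
\qquad
\text{Transverse mass} \;=\; \frac{\mu}{1 - v^{2}/V^{2}}.
```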
Naturally, with a different definition of force and acceleration we would obtain different values for these masses; this shows that we must proceed very cautiously when comparing different theories of the motion of the electron.
It should be noted that these results concerning mass also hold for ponderable material points, because a ponderable material point can be turned into an electron (in our sense of the word) by adding an _arbitrarily small_ electric charge to it.
Let us now determine the kinetic energy of the electron. If an electron starts from the origin of the system _K_ with initial velocity 0 and moves along the _X_ axis under the action of an electrostatic force _X_, it is clear that the energy drawn from the electrostatic field has the value ∫∈_X_ _dx_. Since the electron is assumed to be slowly accelerated, and consequently cannot give off any energy in the form of radiation, the energy drawn from the electrostatic field must be set equal to the kinetic energy _W_ of the electron. Bearing in mind that the first of equations (A) holds throughout the entire process of motion, we obtain
Thus _W_ becomes infinitely great when _ν_ = _V_. As with our earlier results, superluminal velocities are not possible.
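The displayed result for _W_ is missing here; in the standard text of the paper it is _W_ = _µV_²(1/√(1 – _ν_²/_V_²) – 1). A numerical sketch (names mine, _µ_ = _V_ = 1) of the divergence as _ν_ → _V_ and of the classical limit:

```python
import math

def kinetic_energy(v, mu=1.0, V=1.0):
    """W = mu * V^2 * (1 / sqrt(1 - (v/V)^2) - 1)."""
    return mu * V ** 2 * (1.0 / math.sqrt(1.0 - (v / V) ** 2) - 1.0)

for v in (0.5, 0.9, 0.99, 0.999):
    print(v, kinetic_energy(v))   # grows without bound as v approaches V

# For small v it reduces to the classical (1/2) * mu * v^2:
print(kinetic_energy(1e-4), 0.5e-8)
```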
By virtue of the argument presented above, this expression for the kinetic energy must also hold for ponderable masses.
We now enumerate the properties of the electron's motion resulting from the system of equations (A) that are accessible to experiment.
1. From the second equation of the system (A) it follows that an electric force _Y_ and a magnetic force _N_ have an equally strong deflecting action on an electron moving with velocity _ν_ if _Y_ = _Nν_/_V_. Thus we see that by our theory it is possible to determine the velocity of the electron, for arbitrary velocities, from the ratio of the magnetic deflection _A_m to the electric deflection _A_e, by applying the law
This relation can be tested experimentally, since the velocity of the electron can also be measured directly, e.g., by means of rapidly oscillating electric and magnetic fields.
2. From the derivation of the kinetic energy of the electron it follows that the potential difference traversed by the electron and the velocity _ν_ it acquires must be related by the equation
3. We calculate the radius of curvature _R_ of the electron's path when a magnetic force _N_ is present (as the only deflecting force) acting perpendicularly to its velocity. From the second of equations (A) we obtain:
or
These three relations are a complete expression of the laws according to which, by the theory here presented, the electron must move.
In conclusion, let me note that my friend and colleague M. Besso stood steadfastly by me in my work on the problem discussed here, and that I am indebted to him for several valuable suggestions.
### 2
### DOES THE INERTIA OF A BODY DEPEND UPON ITS ENERGY CONTENT?*
The results of an electrodynamic investigation recently published by me in this journal lead to a very interesting conclusion, which will be derived here.
I based that investigation on the Maxwell-Hertz equations for empty space, together with Maxwell's expression for the electromagnetic energy of space, and also on the following principle:
_The laws by which the states of physical systems alter are independent of which of two coordinate systems (assumed to be in uniform parallel-translational motion relative to each other) is used to describe those changes (the principle of relativity)._
On this basis I obtained, among others, the following result (_loc. cit._, section 8).
Let a system of plane light waves have the energy _l_ relative to the coordinate system (_x_, _y_, _z_); let _ϕ_ be the angle that the direction of the ray (the wave normal) makes with the _x_ axis of the system. If we introduce a new coordinate system (ξ, η, ζ) in uniform parallel translation with respect to the system (_x_, _y_, _z_), whose origin moves along the _x_ axis with velocity _ν_, then this quantity of light — measured in the system (ξ, η, ζ) — has the energy
where _V_ denotes the velocity of light. We shall make use of this result in what follows.
Let there be a body at rest in the system (_x_, _y_, _z_) whose energy, relative to the system (_x_, _y_, _z_), is _E_₀. Let _H_₀ be the energy of the body relative to the system (ξ, η, ζ), moving as above with velocity _ν_.
Let this body emit plane light waves of energy _L_/2, measured relative to (_x_, _y_, _z_), in a direction making an angle _ϕ_ with the _x_ axis, and at the same time an equal quantity of light in the opposite direction. The body remains at rest with respect to the system (_x_, _y_, _z_) during this process. This process must satisfy the principle of conservation of energy, and it must hold (by the principle of relativity) with respect to both coordinate systems. If _E_₁ and _H_₁ denote the energy of the body after the emission of light, measured relative to the system (_x_, _y_, _z_) and the system (ξ, η, ζ), respectively, we obtain, using the relation given above,
By subtraction we obtain from these equations
The two differences of the form _H_ – _E_ occurring in this expression have simple physical meanings. _H_ and _E_ are the energy values of the same body referred to two coordinate systems in relative motion, the body being at rest in one of the systems, the system (_x_, _y_, _z_). It is therefore clear that the difference _H_ – _E_ can differ from the body's kinetic energy _K_ with respect to the other system, the system (ξ, η, ζ), only by an additive constant _C_, which depends on the choice of the arbitrary additive constants in the energies _H_ and _E_. We may therefore set
_H_₀ – _E_₀ = _K_₀ + _C_,
_H_₁ – _E_₁ = _K_₁ + _C_,
since _C_ does not change during the emission of light. Thus we obtain
The kinetic energy of the body with respect to (ξ, η, ζ) decreases as a result of the emission of light by an amount independent of the properties of the body. Moreover, the difference _K_₀ – _K_₁ depends on the velocity in the same way as does the kinetic energy of an electron (_loc. cit._, section 10).
Neglecting magnitudes of fourth and higher order, we may set
From this equation one concludes at once:
_If a body gives off the energy L in the form of radiation, its mass diminishes by L/V²._ Here it is obviously inessential that the energy taken from the body should turn into energy of radiation, so that we are led to the more general conclusion:
The mass of a body is a measure of its energy content; if the energy changes by _L_, the mass changes in the same sense by _L_/(9 · 10²⁰), with the energy measured in ergs and the mass in grams.
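The figure 9 · 10²⁰ is simply _V_² in CGS units. A small check (values mine, using the rounded _V_ = 3 · 10¹⁰ cm/s of the text's era):

```python
# Velocity of light in CGS units (cm/s), rounded as in the text's era.
V = 3e10

# The conversion factor between mass (grams) and energy (ergs) is V^2:
print(V ** 2)   # → 9e+20

# A body giving off L = 9e20 ergs of radiation loses one gram of mass:
L = 9e20
print(L / V ** 2)   # → 1.0
```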
It is not to be ruled out that the theory may be successfully put to the test using bodies whose energy content is variable to a high degree (e.g., radium salts).
If the theory agrees with the facts, then radiation conveys inertia between emitting and absorbing bodies.
### 3
### ON THE INFLUENCE OF GRAVITATION ON THE PROPAGATION OF LIGHT*
In a memoir published four years ago I tried to answer the question whether the propagation of light is influenced by gravitation. I return to this theme because my previous presentation of the subject does not satisfy me, and for a stronger reason, because I now see that one of the most important consequences of my former treatment is capable of being tested experimentally. For it follows from the theory here set forth that rays of light passing close to the Sun are deflected by its gravitational field, so that the angular distance between the Sun and a fixed star appearing near it is apparently increased by nearly a second of arc.
In the course of these reflections further results relating to gravitation are obtained. But since the exposition of the entire group of considerations would be rather hard to follow, only a few quite elementary reflections will be given in the following pages, from which the reader may readily inform himself as to the assumptions of the theory and its line of thought. The relations here deduced, even if the theoretical foundation is sound, are valid only to a first approximation.
1. A HYPOTHESIS AS TO THE PHYSICAL NATURE OF THE GRAVITATIONAL FIELD
In a homogeneous gravitational field (acceleration of gravity γ) let there be a stationary coordinate system _K_, oriented so that the lines of force of the gravitational field run in the negative direction of the _z_ axis. In a space free of gravitational fields let there be a second coordinate system _K_', moving with uniform acceleration (γ) in the positive direction of its _z_ axis. To avoid unnecessary complications, let us for the present disregard the theory of relativity and regard both systems from the customary point of view of kinematics, and the motions occurring in them from that of ordinary mechanics.
Relative to _K_, as well as relative to _K_', material points that are not subject to the action of other material points move in accordance with the equations
For the accelerated system _K_' this follows directly from Galileo's principle; but for the system _K_, at rest in a homogeneous gravitational field, it follows from the experience that all bodies in such a field are equally and uniformly accelerated. This experience of the equal falling of all bodies in the gravitational field is one of the most universal that the observation of nature has yielded; yet in spite of that, this law has not found any place in the foundations of our edifice of the physical universe.
But we arrive at a very satisfactory interpretation of this law of experience if we assume that the systems _K_ and _K_' are physically exactly equivalent, that is, if we assume that we may just as well regard the system _K_ as being in a space free of gravitational fields, provided we then regard _K_ as uniformly accelerated. This assumption of exact physical equivalence makes it impossible for us to speak of the _absolute acceleration_ of the system of reference, just as the usual theory of relativity forbids us to speak of the _absolute velocity_ of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course.
As long as we restrict ourselves to purely mechanical processes in the realm where Newton's mechanics holds, we are certain of the equivalence of the systems _K_ and _K_'. But this view of ours will have no deeper significance unless the systems _K_ and _K_' are equivalent with respect to all physical processes, that is, unless the laws of nature with respect to _K_ are in entire agreement with those with respect to _K_'. By assuming this to be so, we arrive at a principle which, if it is really true, has great heuristic importance, for by theoretical consideration of processes that take place relative to a reference system with uniform acceleration, we obtain information as to the course of processes in a homogeneous gravitational field. We shall now show, first from the standpoint of the ordinary theory of relativity, what degree of probability is inherent in our hypothesis.
2. ON THE GRAVITATION OF ENERGY
One result of the theory of relativity is that the inertial mass of a body increases with the energy it contains; if the increase of energy amounts to E, the increase in inertial mass equals E/_c_², where _c_ denotes the velocity of light. Now, is there an increase of gravitational mass corresponding to this increase of inertial mass? If not, then a body would fall in the gravitational field with varying acceleration according to the energy it contained. That highly satisfactory result of the theory of relativity by which the law of the conservation of mass merges into the law of the conservation of energy could no longer be maintained, because we would be compelled to abandon the law of the conservation of mass in its old form for inertial mass while retaining it for gravitational mass.
But this must be regarded as very improbable. On the other hand, the usual theory of relativity provides us with no argument from which to infer that the weight of a body depends on the energy it contains. But we shall show that our hypothesis of the equivalence of the systems _K_ and _K_' gives us the gravitation of energy as a necessary consequence.
Let the two material systems S₁ and S₂, provided with measuring instruments, be situated on the _z_ axis of _K_ at the distance _h_ from each other, so that the gravitational potential in S₂ is greater than that in S₁ by γ_h_. Let a definite quantity of energy E be emitted from S₂ toward S₁. Let the quantities of energy in S₁ and S₂ be measured by contrivances which — brought to one and the same position _z_ in the system and compared there — are perfectly alike. As to the process of this conveyance of energy by radiation we can make no a priori assertion, because we do not know the influence of the gravitational field on the radiation and on the measuring instruments in S₁ and S₂.
But by our postulate of the equivalence of _K_ and _K_', we are able, in place of the system _K_ in a homogeneous gravitational field, to set the gravitation-free system _K_', which moves with uniform acceleration in the direction of positive _z_, and with whose _z_ axis the material systems S₁ and S₂ are rigidly connected.
Juzgamos el proceso de la transferencia de energía por radiación de S2 a S1 desde un sistema K0, que debe estar libre de aceleración. Supongamos que en el instante en que la energía radiante E2 es emitida desde S2 hacia S1, la velocidad relativa de K' con respecto a K0 es cero. La radiación llegará a S1 cuando haya transcurrido un tiempo _h/c_ (en primera aproximación). Pero en este instante la velocidad de S1 con respecto a K0 es γ _h/c = v_. Por lo tanto, por la teoría de la relatividad ordinaria la radiación que llega a S1 no posee la energía E2 sino una energía mayor E1, que está relacionada con E2, en primera aproximación, por la ecuación
$$E_1 = E_2\left(1 + \gamma\,\frac{h}{c^2}\right) \qquad (1)$$
Por nuestra hipótesis, exactamente la misma relación es válida si el mismo proceso tiene lugar en el sistema K, que no está acelerado pero en donde existe un campo gravitatorio. En este caso podemos reemplazar γ _h_ por el potencial Φ del vector gravitación en S2, si la constante arbitraria de Φ en S1 se hace igual a cero. Entonces tenemos la ecuación
$$E_1 = E_2\left(1 + \frac{\Phi}{c^2}\right) \qquad (1a)$$
Esta ecuación expresa la ley de la energía para el proceso bajo observación. La energía E1 que llega a S1 es mayor que la energía E2, medida por los mismos medios, que fue emitida en S2, siendo el exceso la energía potencial de la masa E2/ _c_ 2 en el campo gravitatorio. Se prueba así que para el cumplimiento del principio de la energía tenemos que adscribir a la energía E, antes de su emisión en S2, una energía potencial debida a la gravedad, que corresponde a la masa gravitatoria E/ _c_ 2. Nuestra hipótesis de la equivalencia de K y K' elimina así la dificultad mencionada al principio de esta sección y que la teoría de la relatividad ordinaria deja sin resolver.
El significado de este resultado se muestra de manera particularmente clara si consideramos el siguiente ciclo de operaciones:
1. La energía E, medida en S2, es emitida en forma de radiación de S2 hacia S1, donde, por el resultado recién obtenido, se absorbe la energía E (1 + γ _h_ / _c_ 2), medida en S1.
2. Se hace descender un cuerpo W de masa M desde S2 a S1, haciéndose un trabajo Mγ _h_ en el proceso.
3. La energía E es transferida desde S1 al cuerpo W mientras W está en S1. Con ello cambia la masa M, que adquiere el valor M'.
4. Sea elevado de nuevo W hasta S2, haciéndose un trabajo M'γ _h_ en este proceso.
5. Sea E transferida de vuelta de W a S2.
El efecto de este ciclo es simplemente que S1 ha experimentado el incremento de energía Eγ _h_ / _c_ 2, y que la cantidad de energía M'γ _h_ – Mγ _h_ ha sido transmitida al sistema en forma de trabajo mecánico. Por el principio de la energía, debemos tener
$$E\,\frac{\gamma h}{c^2} = M'\gamma h - M\gamma h,$$

o

$$M' - M = \frac{E}{c^2} \qquad (1b)$$
El incremento en la masa _gravitatoria_ es así igual a E/ _c_ 2, y por consiguiente igual al incremento en masa _inerte_ dado por la teoría de la relatividad.
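El balance energético del ciclo anterior puede comprobarse con números. El siguiente esquema en Python (los valores de _c_ , γ, _h_ , E y M son ilustrativos, no proceden del texto) verifica que la relación M' – M = E/ _c_ 2 hace que la energía ganada por S1 coincida con el trabajo mecánico neto aportado al sistema:

```python
# Comprobación numérica del ciclo de la sección 2 (valores ilustrativos).
c = 3.0e8        # velocidad de la luz [m/s]
gamma = 9.81     # aceleración del campo homogéneo [m/s^2]
h = 100.0        # distancia entre S2 y S1 [m]
E = 1.0e6        # energía enviada de S2 a S1 [J]
M = 5.0          # masa del cuerpo W [kg]

# Paso 1: S1 absorbe E*(1 + gamma*h/c^2); de esa energía, E vuelve a S2
# (pasos 3 y 5), de modo que la ganancia neta de S1 es E*gamma*h/c^2.
ganancia_S1 = E * (gamma * h / c**2)

# Pasos 2 y 4: trabajo neto aportado al sistema al bajar M y subir M'.
M_prima = M + E / c**2          # relación (1b) que se quiere verificar
trabajo_neto = M_prima * gamma * h - M * gamma * h

# El principio de la energía exige que ambas cantidades coincidan.
assert abs(ganancia_S1 - trabajo_neto) / ganancia_S1 < 1e-4
print(ganancia_S1, trabajo_neto)
```

Si M' difiriera de M en cualquier otra cantidad, el ciclo crearía o destruiría energía, en contra del principio de la energía.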
El resultado se desprende aún más directamente de la equivalencia de los sistemas K y K', según la cual la masa gravitatoria respecto de K es exactamente igual a la masa inerte respecto de K'; la energía debe por lo tanto poseer una masa _gravitatoria_ que es igual a su masa _inerte_. Si se suspende una masa M0 de una balanza de resorte en el sistema K', la balanza indicará el peso aparente M0γ debido a la inercia de M0. Si se transfiere a M0 la cantidad de energía E, la balanza de resorte, por la ley de inercia de la energía, indicará (M0 + E/ _c_ 2)γ. Por nuestra hipótesis fundamental, exactamente lo mismo debe ocurrir cuando se repite el experimento en el sistema K, es decir, en el campo gravitatorio.
3. TIEMPO Y VELOCIDAD DE LA LUZ EN EL CAMPO GRAVITATORIO
Si la radiación emitida en el sistema uniformemente acelerado K' en S2 hacia S1 tenía la frecuencia _ν_ 2 con relación al reloj en S2, entonces, a su llegada a S1 ya no tiene la frecuencia _ν_ 2, con relación a un reloj idéntico en S1, sino una frecuencia mayor _ν_ 1, tal que en la primera aproximación
$$\nu_1 = \nu_2\left(1 + \gamma\,\frac{h}{c^2}\right) \qquad (2)$$
En efecto, si introducimos otra vez el sistema de referencia no acelerado K0, con respecto al cual, en el instante de la emisión de luz, K' no tiene velocidad, entonces S1, en el instante de llegada de la radiación a S1, tiene la velocidad γ _h/c_ con respecto a K0, de lo que, por el principio de Doppler, resulta inmediatamente la relación dada.
De acuerdo con nuestra hipótesis de la equivalencia de los sistemas K y K', esta ecuación también es válida para el sistema de coordenadas estacionario K, en donde hay un campo gravitatorio uniforme, si en el mismo tiene lugar la transferencia por radiación tal como se ha descrito. Se sigue, entonces, que un rayo luminoso emitido en S2 con un potencial gravitatorio definido, y que posee en su emisión la frecuencia _ν_ 2 —comparada con un reloj en S2— poseerá, a su llegada a S1, una frecuencia diferente _ν_ 1 —medida por un reloj idéntico en S1—. Para γ _h_ sustituimos el potencial gravitatorio Φ de S2 —tomando como cero el de S1— y suponemos que la relación que hemos deducido para el campo gravitatorio homogéneo es también válida para otras formas de campo. Entonces
$$\nu_1 = \nu_2\left(1 + \frac{\Phi}{c^2}\right) \qquad (2a)$$
Este resultado (que por nuestra deducción es válido en primera aproximación) permite, en primer lugar, la siguiente aplicación. Sea _ν_ 0 el número de vibración de un generador de luz elemental, medido por un delicado reloj en el mismo lugar. Imaginemos a ambos en un lugar en la superficie del Sol (donde está localizado nuestro S2). De la luz allí emitida, una porción alcanza la Tierra (S1), donde medimos la frecuencia de la luz que llega con un reloj U que se parece en todo al recién mencionado. Entonces por (2a)
$$\nu = \nu_0\left(1 + \frac{\Phi}{c^2}\right)$$
donde Φ es la diferencia (negativa) de potencial gravitatorio entre la superficie del Sol y la Tierra. Así pues, de acuerdo con nuestra idea, las líneas espectrales de la luz solar deben estar algo desplazadas hacia el rojo, comparadas con las correspondientes líneas espectrales de las fuentes de luz terrestres, en la cantidad relativa
$$\frac{\nu_0 - \nu}{\nu_0} = -\frac{\Phi}{c^2} = 2\cdot 10^{-6}$$
Si se conocieran exactamente las condiciones en las que aparecen las bandas solares, este desplazamiento sería susceptible de ser medido. Pero dado que otras influencias (presión, temperatura) afectan a la posición de los centros de las líneas espectrales, es difícil descubrir si realmente existe la influencia inferida del potencial gravitatorio.
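El orden de magnitud de este corrimiento puede estimarse con constantes modernas (valores supuestos aquí, no tomados del texto); el resultado, ≈ 2·10⁻⁶, coincide con el valor que cita Einstein:

```python
# Corrimiento al rojo gravitatorio Sol-Tierra: -Phi/c^2 = GM/(R c^2),
# despreciando el potencial en la Tierra frente al de la superficie solar.
G = 6.674e-11      # constante de gravitación [m^3 kg^-1 s^-2]
M_sol = 1.989e30   # masa del Sol [kg]
R_sol = 6.957e8    # radio del Sol [m]
c = 2.998e8        # velocidad de la luz [m/s]

desplazamiento_relativo = G * M_sol / (R_sol * c**2)
print(desplazamiento_relativo)   # ≈ 2.1e-6, el orden de magnitud citado
assert 1.9e-6 < desplazamiento_relativo < 2.3e-6
```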
En una consideración superficial la ecuación (2), respectivamente la (2a), parece afirmar un absurdo. Si existe transmisión constante de luz de S2 a S1, ¿cómo puede llegar a S1 cualquier otro número de períodos por segundo distinto del emitido en S2? Pero la respuesta es sencilla. No podemos considerar _ν_ 2 o, respectivamente _ν_ 1, simplemente como frecuencias (como número de períodos por segundo) puesto que aún no hemos determinado el tiempo en el sistema K. Lo que denota _ν_ 2 es el número de períodos con referencia a la unidad de tiempo del reloj U en S2, mientras que ν1 denota el número de períodos por segundo con referencia al reloj idéntico en S1. Nada nos obliga a suponer que haya que considerar que los relojes U en potenciales gravitatorios diferentes marchan al mismo ritmo. Por el contrario, debemos ciertamente definir el tiempo en K de tal manera que el número de crestas y vientres de onda entre S2 y S1 sea independiente del valor absoluto del tiempo; pues el proceso bajo observación es por naturaleza estacionario. Si no satisficiéramos esta condición llegaríamos a una definición de tiempo por aplicación de la cual el tiempo se fusionaría explícitamente en las leyes de la naturaleza, y esto ciertamente sería poco natural y poco práctico. Por consiguiente, los dos relojes en S1 y S2 no dan ambos el «tiempo» correctamente. Si medimos el tiempo en S1 con el reloj U, entonces debemos medir el tiempo en S2 con un reloj que marcha 1 + Φ/ _c_ 2 veces más lentamente que el reloj U cuando se compara con U en uno y el mismo lugar. Pues cuando se mide por dicho reloj, la frecuencia del rayo de luz antes considerado es en su emisión en S2
$$\nu_2\left(1 + \frac{\Phi}{c^2}\right)$$
y por consiguiente es, por (2a), igual a la frecuencia _ν_ 1 del mismo rayo de luz a su llegada a S1.
Esto tiene una consecuencia de fundamental importancia para nuestra teoría. Pues si medimos la velocidad de la luz en diferentes lugares en el sistema acelerado y libre de gravitación K', empleando relojes U de idéntica constitución, obtenemos la misma magnitud en todos estos lugares. Lo mismo es válido, por nuestra hipótesis fundamental, también para el sistema K. Pero por lo que se acaba de decir, debemos utilizar relojes de diferente constitución para medir el tiempo en lugares con diferente potencial gravitatorio. Para medir el tiempo en un lugar que, con respecto al origen de coordenadas, tiene el potencial gravitatorio Φ debemos emplear un reloj que —cuando se lleva al origen de coordenadas— va (1 + Φ/ _c_ 2) veces más lento que el reloj utilizado para medir el tiempo en el origen de coordenadas. Si llamamos _c_ 0 a la velocidad de la luz en el origen de coordenadas, entonces la velocidad de la luz _c_ en un lugar con el potencial gravitatorio Φ estará dada por la relación
$$c = c_0\left(1 + \frac{\Phi}{c^2}\right) \qquad (3)$$
El principio de constancia de la velocidad de la luz es válido según esta teoría en una forma diferente de la que normalmente subyace a la teoría de la relatividad ordinaria.
4. CURVATURA DE RAYOS LUMINOSOS EN EL CAMPO GRAVITATORIO
A partir de la proposición que se acaba de demostrar, que la velocidad de la luz en el campo gravitatorio es función del lugar, podemos inferir fácilmente, por medio del principio de Huyghens, que los rayos luminosos que se propagan a través de un campo gravitatorio sufren una desviación. En efecto, sea E un frente de onda de una onda luminosa plana en el instante _t_ , y sean P1 y P2 dos puntos en dicho plano a distancia unidad uno de otro. P1 y P2 están en el plano del papel, que se escoge de modo que el coeficiente diferencial de Φ, tomado en la dirección de la normal al plano, se anula, y por consiguiente también lo hace el de _c_. Obtenemos el correspondiente frente de onda en el instante _t + dt_ , o, más bien, su línea de intersección con el plano del papel, describiendo círculos alrededor de los puntos P1 y P2 con radios _c_ 1 _dt_ y _c_ 2 _dt_ , respectivamente; donde _c_ 1 y _c_ 2 denotan la velocidad de la luz en los puntos P1 y P2, respectivamente, y trazando la tangente a dichos círculos. El ángulo en que se desvía el rayo de luz en el camino _c dt_ es por consiguiente

$$(c_1 - c_2)\,dt$$
si medimos el ángulo positivamente cuando el rayo se curva hacia el lado de _n'_ creciente. El ángulo de desviación por unidad de camino del rayo luminoso es por lo tanto
$$-\frac{1}{c}\,\frac{\partial c}{\partial n'}$$
Finalmente, obtenemos para la desviación que experimenta un rayo luminoso hacia el lado _n'_ en cualquier trayectoria ( _s_ ) la expresión
$$\alpha = -\frac{1}{c^2}\int \frac{\partial \Phi}{\partial n'}\,ds \qquad (4)$$
Podríamos haber obtenido el mismo resultado directamente considerando la propagación de un rayo luminoso en el sistema uniformemente acelerado K', y trasladando el resultado al sistema K, y de allí al caso de un campo gravitatorio de cualquier forma.
Por la ecuación (4) un rayo de luz que pasa junto a un cuerpo celeste sufre una desviación hacia el lado del potencial gravitatorio decreciente, es decir, el lado dirigido hacia el cuerpo celeste, de magnitud
$$\alpha = \frac{2kM}{c^2 \Delta}$$
donde _k_ denota la constante de gravitación, M la masa del cuerpo celeste, ∆ la distancia del rayo al centro del cuerpo. En consecuencia, un rayo de luz que pasa junto al Sol sufre una desviación de 4 × 10⁻⁶ radianes = 0,83 segundos de arco. La distancia angular de la estrella al centro del Sol parece estar aumentada en esta cantidad. Puesto que las estrellas fijas en regiones del cielo próximas al Sol son visibles durante los eclipses totales de Sol, esta consecuencia de la teoría puede compararse con la experiencia. Con el planeta Júpiter el desplazamiento esperado llega a aproximadamente 1⁄100 de la cantidad dada. Sería deseable que los astrónomos abordaran la cuestión aquí planteada. Pues, aparte de cualquier teoría, está la cuestión de si es posible detectar con los equipos actualmente disponibles una influencia de los campos gravitatorios en la propagación de la luz.
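A modo de comprobación numérica, la fórmula de 1911 para la desviación, α = 2 _k_ M/( _c_ 2∆), da con constantes modernas (supuestas aquí) un valor muy próximo al citado:

```python
import math

# Desviación de un rayo rasante al Sol según la fórmula de 1911:
# alpha = 2kM/(c^2 * Delta), con Delta igual al radio solar.
k = 6.674e-11       # constante de gravitación [m^3 kg^-1 s^-2]
M = 1.989e30        # masa del Sol [kg]
Delta = 6.957e8     # distancia del rayo al centro del Sol [m]
c = 2.998e8         # velocidad de la luz [m/s]

alpha_rad = 2 * k * M / (c**2 * Delta)             # en radianes
alpha_arcsec = alpha_rad * 180 / math.pi * 3600    # en segundos de arco
print(alpha_arcsec)   # ≈ 0,88" con constantes modernas (el texto cita 0,83")
assert 0.8 < alpha_arcsec < 0.95
```

Conviene recordar que este valor de 1911 es la mitad del que predice la teoría de la relatividad general completa de 1915 (≈ 1,75 segundos de arco), que fue el confirmado por las observaciones del eclipse de 1919.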
### 4
### EL FUNDAMENTO DE LA TEORÍA DE LA RELATIVIDAD GENERAL*
#### A. CONSIDERACIONES FUNDAMENTALES SOBRE EL POSTULADO DE RELATIVIDAD
1. OBSERVACIONES SOBRE LA TEORÍA DE LA RELATIVIDAD ESPECIAL
La teoría de la relatividad especial se basa en el siguiente postulado, que también es satisfecho por la mecánica de Galileo y Newton.
Si se escoge un sistema de coordenadas K con relación al cual son válidas las leyes físicas en su forma más simple, las _mismas_ leyes son también válidas con relación a cualquier otro sistema de coordenadas K' que se mueve con movimiento de traslación uniforme con respecto a K. Llamamos a este postulado el «principio de relatividad especial». La palabra «especial» quiere dar a entender que el principio está restringido al caso en que K' tiene un movimiento de traslación uniforme con respecto a K, pero que la equivalencia de K' y K no se extiende al caso de movimiento no uniforme de K' con respecto a K.
Así pues, la teoría de la relatividad especial no se aparta de la mecánica clásica por el postulado de relatividad, sino por el postulado de la constancia de la velocidad de la luz _in vacuo_ , a partir del cual, en combinación con el principio de relatividad especial, se sigue, en la forma bien conocida, la relatividad de la simultaneidad, la transformación lorentziana y las leyes relacionadas para el comportamiento de cuerpos y relojes en movimiento.
La modificación a la que la teoría de la relatividad especial ha sometido a la teoría del espacio y el tiempo es realmente de largo alcance, pero hay un punto importante que ha permanecido inalterado. Pues las leyes de la geometría, incluso según la teoría de la relatividad especial, tienen que ser interpretadas directamente como leyes relacionadas con las posibles posiciones relativas de cuerpos sólidos en reposo; y, de una manera más general, las leyes de la cinemática deben interpretarse como leyes que describen las relaciones de medida de cuerpos y relojes. A dos puntos materiales seleccionados de un cuerpo rígido estacionario corresponde siempre una distancia de longitud bien definida, que es independiente de la localización y orientación del cuerpo, y es también independiente del tiempo. A dos posiciones seleccionadas de las manecillas de un reloj en reposo con respecto a un sistema de referencia privilegiado, corresponde siempre un intervalo de tiempo de longitud definida, que es independiente del lugar y el tiempo. Pronto veremos que la teoría de la relatividad especial no puede adherirse a esta interpretación física sencilla del espacio y el tiempo.
2. LA NECESIDAD DE UNA EXTENSIÓN DEL POSTULADO DE RELATIVIDAD
En mecánica clásica, y no menos en la teoría de la relatividad especial, existe un defecto epistemológico inherente que fue señalado claramente, quizá por primera vez, por Ernst Mach. Lo discutiremos mediante el siguiente ejemplo: dos cuerpos fluidos del mismo tamaño y naturaleza, S1 y S2, se mantienen libremente en el espacio a una distancia tan grande uno de otro y de todas las demás masas que sólo hay que tener en cuenta aquellas fuerzas gravitatorias que aparecen a partir de la interacción de diferentes partes del mismo cuerpo. Sea invariable la distancia entre los dos cuerpos, y supongamos que en ninguno de los dos cuerpos hay movimientos relativos de unas partes con respecto a otras. Pero supongamos que una de las dos masas, juzgada por un observador en reposo con respecto a la otra masa, rota con velocidad angular constante alrededor de la línea que une ambas masas. Éste es un movimiento relativo verificable de los dos cuerpos. Imaginemos ahora que cada uno de los cuerpos ha sido examinado por medio de instrumentos de medida en reposo con respecto al mismo, y que se muestra que la superficie de S1 es una esfera y la de S2 es un elipsoide de revolución. Acto seguido planteamos la pregunta: ¿cuál es la razón de esta diferencia entre los dos cuerpos? Ninguna respuesta puede admitirse como epistemológicamente satisfactoria, a menos que la razón dada sea un _hecho de experiencia observable_. La ley de causalidad no tiene el significado de un enunciado acerca del mundo de la experiencia, excepto cuando _hechos observables_ aparecen en última instancia como causas y efectos.
La mecánica newtoniana no da una respuesta satisfactoria a esta pregunta. Se pronuncia como sigue: las leyes de la mecánica se aplican al espacio R1, con respecto al cual el cuerpo S1 está en reposo, pero no al espacio R2 con respecto al cual el cuerpo S2 está en reposo. Pero el espacio privilegiado R1 de Galileo, así introducido, es una causa meramente _facticia_ , y no algo que pueda ser observado. Es evidente, por lo tanto, que la mecánica de Newton no satisface realmente el requisito de causalidad en el caso bajo consideración, sino que lo hace sólo aparentemente, puesto que hace a la causa facticia R1 responsable de la diferencia observable en los cuerpos S1 y S2.
La única respuesta satisfactoria debe ser que el sistema físico consistente en S1 y S2 no revela dentro de sí mismo ninguna causa imaginable a la que pueda remitirse el diferente comportamiento de S1 y S2. Por consiguiente, la causa debe estar _fuera_ de este sistema. Tenemos que asumir que las leyes generales de movimiento, que en particular determinan las formas de S1 y S2, deben ser tales que el comportamiento mecánico de S1 y S2 está condicionado en parte, y en aspectos muy esenciales, por masas distantes que no han sido incluidas en el sistema bajo consideración. Estas masas distantes y sus movimientos con respecto a S1 y S2 deben considerarse entonces como la sede de las causas (que deben ser susceptibles de observación) del diferente comportamiento de nuestros dos cuerpos S1 y S2. Ellas asumen el papel de la causa facticia R1. De todos los espacios imaginables R1, R2, etc., en cualquier tipo de movimiento relativo mutuo, no existe ninguno que podamos considerar privilegiado a priori sin reavivar la objeción epistemológica antes mencionada. _Las leyes de la física deben ser de tal naturaleza que se aplican a sistemas de referencia en cualquier tipo de movimiento_. Por este camino llegamos a una extensión del postulado de relatividad.
Además de este poderoso argumento de la teoría del conocimiento, existe un hecho físico bien conocido en favor de una extensión de la teoría de la relatividad. Sea K un sistema de referencia galileano, i. e. un sistema con respecto al cual (al menos en la región tetradimensional en consideración) una masa, suficientemente distante de otras masas, se mueve con movimiento uniforme en línea recta. Sea K' un segundo sistema de referencia que se mueve con respecto a K con traslación _uniformemente acelerada_. Entonces, con respecto a K', una masa suficientemente distante de otras masas tendría un movimiento acelerado tal que la magnitud y dirección de su aceleración son independientes de la composición material y estado físico de la masa.
¿Permite esto a un observador en reposo con respecto a K' inferir que él está en un sistema de referencia «realmente» acelerado? La respuesta es negativa; pues la relación antes mencionada de masas libremente movibles respecto a K' puede interpretarse igualmente bien de la siguiente manera. El sistema de referencia K' no está acelerado, pero el territorio espacio-temporal en cuestión está bajo el dominio de un campo gravitatorio que genera el movimiento acelerado de los cuerpos con respecto a K'.
Esta visión se hace posible para nosotros por la enseñanza de la experiencia acerca de la existencia de un campo de fuerzas, a saber, el campo gravitatorio, que posee la extraordinaria propiedad de impartir la misma aceleración a todos los cuerpos. El comportamiento mecánico de los cuerpos con respecto a K' es el mismo que se presenta a la experiencia en el caso de sistemas que solemos considerar como «estacionarios» o como «privilegiados». Por consiguiente, desde el punto de vista físico, se sugiere inmediatamente la hipótesis de que los sistemas K y K' deben ser ambos considerados con igual derecho como «estacionarios», es decir, tienen el mismo título como sistemas de referencia para la descripción física de los fenómenos.
Se verá a partir de estas reflexiones que al seguir la teoría de la relatividad general nos veremos llevados a una teoría de la gravitación, puesto que podemos «producir» un campo gravitatorio cambiando meramente el sistema de coordenadas. También será obvio que el principio de la constancia de la velocidad de la luz _in vacuo_ debe ser modificado, puesto que fácilmente reconocemos que la trayectoria de un rayo luminoso con respecto a K' debe ser en general curvilínea, si con respecto a K la luz se propaga en línea recta con una velocidad constante definida.
3. EL CONTINUO ESPACIO-TEMPORAL. REQUISITO DE COVARIANCIA GENERAL PARA LAS ECUACIONES QUE EXPRESAN LAS LEYES GENERALES DE LA NATURALEZA
En mecánica clásica, así como en la teoría de la relatividad especial, las coordenadas de espacio y tiempo tienen un significado físico directo. Decir que un suceso tiene _x_ 1 como coordenada X1 significa que la proyección del suceso sobre el eje de X1, determinada por reglas de medir rígidas y de acuerdo con las reglas de la geometría euclidiana, se obtiene colocando una regla de medir dada (la unidad de longitud) _x_ 1 veces a partir del origen de coordenadas a lo largo del eje de X1. Decir que un suceso puntual tiene _x 4 = t_ como coordenada X4 significa que un reloj estándar, construido para medir el tiempo con un período unidad definido, y que es estacionario con respecto al sistema de coordenadas y prácticamente coincidente en el espacio con el suceso puntual, habrá medido _x 4 = t_ períodos en la ocurrencia del suceso.
Esta idea del espacio y el tiempo ha estado siempre en la mente de los físicos, incluso si, como regla, no han sido conscientes de ella. Está claro a partir del papel que estos conceptos desempeñan en las medidas físicas; también debe subyacer a las reflexiones del lector sobre la sección precedente (2) para conectar cualquier significado con lo que allí ha leído. Pero ahora demostraremos que debemos dejarla de lado y reemplazarla por una visión más general para poder completar el postulado de relatividad general, si la teoría de la relatividad especial se aplica al caso especial de ausencia de un campo gravitatorio.
En un espacio que está libre de campos gravitatorios introducimos un sistema de referencia galileano K ( _x_ , _y_ , _z_ , _t_ ), y también un sistema de coordenadas K' ( _x'_ , _y'_ , _z'_ , _t'_ ) en rotación uniforme con respecto a K. Consideramos que los orígenes de ambos sistemas así como sus ejes Z coinciden en todo momento. Demostraremos que para una medida espacio-temporal en el sistema K' no puede mantenerse la definición anterior del significado físico de longitudes y tiempos. Por razones de simetría es evidente que un círculo alrededor del origen en el plano X, Y de K puede considerarse al mismo tiempo como un círculo en el plano X', Y' de K'. Supongamos que la circunferencia y el diámetro de este círculo han sido medidos con una medida unidad infinitamente pequeña comparada con el radio, y que tenemos el cociente de ambos resultados. Si este experimento se realizara con una regla de medir en reposo con respecto al sistema galileano K, el cociente sería π. Con una regla de medir en reposo con respecto a K', el cociente sería mayor que π. Esto se entiende inmediatamente si concebimos el proceso global de medir desde el sistema «estacionario» K, y tenemos en consideración que la regla de medir aplicada a la periferia sufre una contracción lorentziana, mientras que la aplicada a lo largo del radio no la sufre. Por lo tanto, la geometría euclidiana no se aplica a K'. La noción de coordenadas definida más arriba, que presupone la validez de la geometría euclidiana, deja de ser válida por consiguiente en relación al sistema K'. Así, también, somos incapaces de introducir un tiempo correspondiente a los requisitos físicos en K', indicado por relojes en reposo con respecto a K'. Para convencernos de esta imposibilidad, imaginemos dos relojes de idéntica constitución colocados uno en el origen de coordenadas y el otro en la circunferencia del círculo, y ambos concebidos desde el sistema «estacionario» K. 
Por un resultado familiar de la teoría de la relatividad especial, el reloj en la circunferencia —juzgado desde K— marcha más lento que el otro, porque el primero está en movimiento y el último en reposo. Un observador en el origen común de coordenadas, capaz de observar el reloj en la circunferencia por medio de luz, vería por consiguiente que se retrasa respecto al reloj que tiene ante él. Puesto que él no estará preparado para admitir que la velocidad de la luz a lo largo del camino en cuestión dependa explícitamente del tiempo, interpretará sus observaciones como algo que demuestra que el reloj en la circunferencia «realmente» marcha más lento que el reloj en el origen. Por lo tanto, se verá obligado a definir el tiempo de tal manera que la marcha de un reloj depende de dónde pueda estar el reloj.
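El efecto sobre el disco en rotación puede ilustrarse numéricamente. En el esquema siguiente (el radio y la velocidad angular son valores ilustrativos, no del texto) la regla aplicada a la periferia sufre la contracción de Lorentz y la radial no, de modo que el cociente circunferencia/diámetro medido sobre el disco supera π; el mismo factor da el retraso del reloj periférico:

```python
import math

# Disco en rotación uniforme: la periferia se mueve con v = omega*R
# respecto al sistema "estacionario" K.
c = 2.998e8        # velocidad de la luz [m/s]
R = 1.0e5          # radio del círculo [m] (valor ilustrativo)
omega = 500.0      # velocidad angular de K' [rad/s] (valor ilustrativo)

v = omega * R                        # velocidad de la periferia
factor = math.sqrt(1 - (v / c)**2)   # contracción de la regla y marcha del reloj

# La regla periférica contraída cabe más veces en la circunferencia:
cociente_medido = math.pi / factor   # circunferencia medida / diámetro medido
print(cociente_medido)               # mayor que pi
assert cociente_medido > math.pi
assert 0 < factor < 1                # el reloj periférico marcha más lento
```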
Por consiguiente, llegamos a este resultado: en la teoría de la relatividad general, el espacio y el tiempo no pueden definirse de manera tal que las diferencias de las coordenadas espaciales puedan medirse directamente por la regla de medir unidad, ni las diferencias en la coordenada temporal por un reloj estándar.
El método empleado hasta ahora para tender coordenadas en el continuo espacio-temporal de una manera definida deja así de ser válido, y no parece haber otro camino que nos permita adaptar sistemas de coordenadas al universo tetradimensional de modo que de su aplicación pudiéramos esperar una formulación particularmente simple de las leyes de la naturaleza. No queda, pues, otra salida que considerar todos los sistemas de coordenadas imaginables como, en principio, igualmente adecuados para la descripción de la naturaleza. Esto viene a exigir que:
_Las leyes generales de la naturaleza deben expresarse por ecuaciones que sean válidas para todos los sistemas de coordenadas. Es decir, sean covariantes con respecto a cualesquiera sustituciones (generalmente covariantes)_.
Es evidente que una teoría física que satisfaga este postulado también será adecuada para el postulado de relatividad general. Pues la suma de todas las sustituciones incluye, en cualquier caso, a aquellas que corresponden a todos los movimientos relativos de sistemas de coordenadas tridimensionales. Que este requisito de covariancia general, que despoja al espacio y el tiempo del último residuo de objetividad física, es una exigencia natural, se verá a partir de la siguiente reflexión. Todas nuestras verificaciones espacio-temporales equivalen invariablemente a una determinación de coincidencias espacio-temporales. Si, por ejemplo, los sucesos consistieran meramente en el movimiento de puntos materiales, entonces nada sería observable en definitiva salvo los encuentros de dos o más de dichos puntos. Además, los resultados de nuestras medidas no son nada más que verificaciones de tales encuentros de los puntos materiales de nuestros instrumentos de medida con otros puntos materiales, coincidencias entre las manecillas de un reloj y puntos en la esfera de un reloj, y sucesos puntuales observados que suceden en el mismo lugar y al mismo tiempo.
La introducción de un sistema de referencia no tiene otro propósito que facilitar la descripción de la totalidad de tales coincidencias. Asignamos al universo cuatro variables espacio-temporales _x_ 1, _x_ 2, _x_ 3, _x_ 4, de tal manera que para todo suceso puntual existe un correspondiente sistema de valores de las variables _x_ 1.... _x_ 4. A dos sucesos puntuales coincidentes corresponde un sistema de valores de las variables _x_ 1 ... _x_ 4, i. e. la coincidencia se caracteriza por la identidad de las coordenadas. Si, en lugar de las variables _x_ 1 ... _x_ 4, introducimos funciones de ellas, _x'_ 1, _x'_ 2, _x'_ 3, _x'_ 4, como un nuevo sistema de coordenadas, de modo que los sistemas de valores se hacen corresponder uno a otro sin ambigüedad, la igualdad de las cuatro coordenadas en el nuevo sistema servirá también como una expresión de la coincidencia espacio-temporal de los dos sucesos puntuales. Puesto que toda nuestra experiencia física puede reducirse en última instancia a tales coincidencias, no hay ninguna razón inmediata para preferir ciertos sistemas de coordenadas a otros. Es decir, llegamos al requisito de covariancia general.
4. LA RELACIÓN DE LAS CUATRO COORDENADAS CON LA MEDIDA EN EL ESPACIO Y EL TIEMPO
No es mi propósito en esta discusión presentar la teoría de la relatividad general como un sistema tan sencillo y lógico como sea posible, y con el mínimo número de axiomas; sino que mi principal objetivo es desarrollar esta teoría de tal manera que el lector sienta que el camino en el que hemos entrado es el natural desde un punto de vista psicológico, y que las hipótesis subyacentes parecerán tener el máximo grado de seguridad posible. Con este objetivo a la vista garanticemos ahora que:
Para regiones tetradimensionales infinitamente pequeñas es apropiada la teoría de la relatividad en el sentido restringido, si las coordenadas se escogen adecuadamente.
Para este fin debemos escoger la aceleración del sistema de coordenadas («local») infinitamente pequeño de modo que no haya ningún campo gravitatorio; esto es posible para una región infinitamente pequeña. Sean X1, X2, X3 las coordenadas espaciales, y X4 la correspondiente coordenada temporal medida en la unidad apropiada. Si se imagina dada una regla rígida como unidad de medida, las coordenadas, con una orientación dada del sistema de coordenadas, tienen un significado físico directo en el sentido de la teoría de la relatividad especial. Por la teoría de la relatividad especial la expresión
ds² = – dX1² – dX2² – dX3² + dX4²
(1)
then has a value which is independent of the orientation of the local system of co-ordinates, and is ascertainable by measurements of space and time. We call _ds_ the magnitude of the line element belonging to points of the four-dimensional continuum infinitely proximate to one another. Following Minkowski, if the _ds_ belonging to the elements _d_ X1 ... _d_ X4 is positive, we call it time-like; if it is negative, we call it space-like.
To the "line element" in question, or to the two infinitely proximate point-events, there will also correspond definite differentials _dx_ 1 ... _dx_ 4 of the four-dimensional co-ordinates of any chosen system of reference. If this system, as well as the "local" system, is given for the region under consideration, the _d_ X _ν_ will allow themselves to be represented here by definite linear homogeneous expressions of the _dx_ σ:
dXν = Σσ ανσ dxσ
(2)
Inserting these expressions in (1), we obtain
ds² = Σστ gστ dxσ dxτ
(3)
where the _g_ στ will be functions of the _x_ σ. These can no longer be dependent on the orientation and state of motion of the "local" system of co-ordinates, for _ds_ 2 is a quantity ascertainable by rod-and-clock measurement of point-events infinitely proximate in space-time, and defined independently of any particular choice of co-ordinates. The _g_ στ are to be chosen here so that _g_ στ = _g_ τσ; the summation is to extend over all values of σ and τ, so that the sum consists of 4 × 4 terms, of which twelve are equal in pairs.
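The double sum of (3) can be sketched numerically. The following is a minimal numpy illustration, with hypothetical component values; with the Minkowski values that (4) will give below, the sum reduces to expression (1):

```python
import numpy as np

# Evaluate ds^2 = sum over sigma, tau of g_{sigma tau} dx_sigma dx_tau
# for a symmetric 4x4 metric.  The differentials dx are arbitrary sample values.
g = np.diag([-1.0, -1.0, -1.0, 1.0])      # symmetric: g[s, t] == g[t, s]
dx = np.array([0.1, 0.2, 0.0, 0.5])       # arbitrary co-ordinate differentials

ds2 = dx @ g @ dx                          # the double sum of (3), 4 x 4 terms
# The same value, written out term by term:
ds2_terms = sum(g[s, t] * dx[s] * dx[t] for s in range(4) for t in range(4))
```

Here ds2 is positive, so the element is time-like in the sense of the text.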
The case of the ordinary theory of relativity arises out of the case here considered, if it is possible, by reason of the particular relations of the _g_ στ in a finite region, to choose the system of reference in the finite region in such a way that the _g_ στ assume the constant values
gστ = –1 for σ = τ = 1, 2, 3;  g44 = +1;  gστ = 0 for σ ≠ τ.
(4)
We shall find hereafter that the choice of such co-ordinates is, in general, not possible for a finite region.
From the considerations developed in sections 2 and 3 it follows that the quantities _g_ στ are to be regarded from the physical standpoint as the quantities which describe the gravitational field in relation to the chosen system of reference. For, if we now assume the special theory of relativity to apply to a certain four-dimensional region with suitably chosen co-ordinates, then the _g_ στ have the values given in (4). A free material point then moves, relatively to this system, with uniform motion in a straight line. Then if we introduce new space-time co-ordinates _x_ 1, _x_ 2, _x_ 3, _x_ 4, by means of any substitution we choose, the _g_ στ in this new system will no longer be constants, but functions of space and time. At the same time the motion of the free material point will present itself in the new co-ordinates as a curvilinear non-uniform motion, and the law of this motion will be independent of the nature of the moving particle. We shall therefore interpret this motion as a motion under the influence of a gravitational field. We thus find the occurrence of a gravitational field connected with a space-time variability of the _g_ στ. So, too, in the general case, when we are no longer able by a suitable choice of co-ordinates to apply the special theory of relativity to a finite region, we shall hold fast to the view that the _g_ στ describe the gravitational field.
Thus, according to the general theory of relativity, gravitation occupies an exceptional position with regard to other forces, particularly the electromagnetic forces, since the ten functions _g_ στ representing the gravitational field at the same time define the metrical properties of the space measured.
#### B. MATHEMATICAL AIDS TO THE FORMULATION OF GENERALLY COVARIANT EQUATIONS
Having seen in the foregoing that the general postulate of relativity leads to the requirement that the equations of physics shall be covariant in the face of any substitution of the co-ordinates _x_ 1 ... _x_ 4, we have to consider how such generally covariant equations can be found. We now turn to this purely mathematical task, and we shall find that in its solution a fundamental role is played by the invariant _ds_ given in equation (3), which, borrowing from Gauss's theory of surfaces, we have called the "line element".
The fundamental idea of this general theory of covariants is the following: Let certain things ("tensors") be defined with respect to any system of co-ordinates by a number of functions of the co-ordinates, called the "components" of the tensor. There are then certain rules by which these components can be calculated for a new system of co-ordinates, if they are known for the original system. The things hereafter called tensors are further characterized by the fact that the equations of transformation for their components are linear and homogeneous. Accordingly, all the components in the new system vanish if they all vanish in the original system. If, therefore, a law of nature is expressed by equating all the components of a tensor to zero, it is generally covariant. By examining the laws of the formation of tensors, we acquire the means of formulating generally covariant laws.
5. CONTRAVARIANT AND COVARIANT FOUR-VECTORS
_Contravariant four-vectors_. The line element is defined by the four "components" _dx_ _ν_ , for which the law of transformation is expressed by the equation
dx′σ = Σν (∂x′σ/∂xν) dxν
(5)
The _dx'_ σ are expressed as linear and homogeneous functions of the _dx_ _ν_. Hence we may look upon these co-ordinate differentials as the components of a "tensor" of the particular kind which we call a contravariant four-vector. Any thing which is defined relatively to the system of co-ordinates by four quantities Aν, and which is transformed by the same law
A′σ = Σν (∂x′σ/∂xν) Aν
(5a)
is also called a contravariant four-vector. From (5a) it follows at once that the sums (Aσ ± Bσ) are also components of a four-vector, if Aσ and Bσ are. Corresponding relations, valid for all the "tensors" subsequently to be introduced, hold as well. (Rule for the addition and subtraction of tensors.)
_Covariant four-vectors_. We call four quantities A _ν_ the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B _ν_
Σ _ν_ A _ν_ B _ν_ = invariant.
(6)
The law of transformation of a covariant four-vector follows from this definition. For if we replace Bν on the right-hand side of the equation

Σσ A′σ B′σ = Σν Aν Bν

by the expression resulting from the inversion of (5a),

Bν = Σσ (∂xν/∂x′σ) B′σ,

we obtain

Σσ B′σ Σν (∂xν/∂x′σ) Aν = Σσ B′σ A′σ.

Since this equation is true for arbitrary values of the B′σ, it follows that the law of transformation is
A′σ = Σν (∂xν/∂x′σ) Aν
(7)
_Note on a simplified way of writing the expressions_. A glance at the equations of this paragraph shows that there is always a summation with respect to the indices which occur twice under a sign of summation (e. g. the index _ν_ in (5)), and only with respect to indices which occur twice. It is therefore possible, without loss of clearness, to omit the sign of summation. In its place we introduce the following convention: if an index occurs twice in one term of an expression, it is always to be summed over unless the contrary is expressly stated.
The difference between covariant and contravariant four-vectors lies in the law of transformation ((7) or (5) respectively). Both forms are tensors in the sense of the general remark above. Therein lies their importance. Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant character by placing it below.
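For a linear substitution the two transformation laws, and the invariance of the contraction (6), can be checked directly. A minimal numpy sketch, with an arbitrarily chosen (hypothetical) Jacobian and component values:

```python
import numpy as np

# For a linear substitution x' = M x the Jacobian dx'/dx = M is constant.
# Contravariant components follow (5a), covariant components follow (7);
# their contraction (6) must come out the same in both systems.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # an invertible Jacobian
B = rng.normal(size=4)                         # contravariant B^nu
A = rng.normal(size=4)                         # covariant A_nu

B_prime = M @ B                    # (5a): B'^sigma = (dx'_sigma/dx_nu) B^nu
A_prime = np.linalg.inv(M).T @ A   # (7):  A'_sigma = (dx_nu/dx'_sigma) A_nu

invariant_old = A @ B
invariant_new = A_prime @ B_prime  # equal, as (6) requires
```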
6. TENSORS OF THE SECOND AND HIGHER RANKS
_Contravariant tensors_. If we form the sixteen products Aµν of the components Aµ and Bν of two contravariant four-vectors,
Aµν = Aµ Bν.
(8)
then by (8) and (5a) Aµν satisfies the law of transformation
A′στ = (∂x′σ/∂xµ)(∂x′τ/∂xν) Aµν
(9)
We call a thing which is described relatively to any system of reference by sixteen quantities satisfying the law of transformation (9) a contravariant tensor of the second rank. Not every such tensor can be formed in accordance with (8) from two four-vectors, but it is easily shown that any given sixteen Aµν can be represented as sums of the AµBν of four appropriately selected pairs of four-vectors. Hence we can prove nearly all the laws which apply to the tensor of the second rank defined by (9) in the simplest manner by demonstrating them for the special tensors of the type (8).
_Contravariant tensors of any rank_. It is clear that, on the lines of (8) and (9), contravariant tensors of the third and higher ranks may also be defined, with 4³ components, and so on. In the same way it follows from (8) and (9) that the contravariant four-vector may be taken in this sense as a contravariant tensor of the first rank.
_Covariant tensors_. On the other hand, if we take the sixteen products Aµν of two covariant four-vectors Aµ and Bν,
Aµν = Aµ Bν,
(10)
the law of transformation for these is
A′στ = (∂xµ/∂x′σ)(∂xν/∂x′τ) Aµν
(11)
This law of transformation defines the covariant tensor of the second rank. All our previous remarks on contravariant tensors apply equally to covariant tensors.
NOTE. It is convenient to treat scalars (or invariants) both as contravariant and as covariant tensors of zero rank.
_Mixed tensors_. We may also define a tensor of the second rank of the type
Aνµ = Aµ Bν.
(12)
which is covariant with respect to the index µ, and contravariant with respect to the index ν. Its law of transformation is
A′τσ = (∂x′τ/∂xν)(∂xµ/∂x′σ) Aνµ
(13)
Naturally there are mixed tensors with any number of indices of covariant character, and any number of indices of contravariant character. Covariant and contravariant tensors may be looked upon as special cases of mixed tensors.
_Symmetrical tensors_. A contravariant, or a covariant, tensor of the second or higher rank is said to be symmetrical if two components, which are obtained the one from the other by the interchange of two indices, are equal. The tensor Aµν, or the tensor Aµν, is thus symmetrical if for any combination of the indices µ, ν,
Aµν = Aνµ,
(14)
or respectively,
Aµν = Aνµ.
(14a)
It must be proved that the symmetry thus defined is a property independent of the system of reference. It follows in fact from (9), when (14) is taken into consideration, that

A′στ = (∂x′σ/∂xµ)(∂x′τ/∂xν) Aµν = (∂x′σ/∂xµ)(∂x′τ/∂xν) Aνµ = (∂x′τ/∂xµ)(∂x′σ/∂xν) Aµν = A′τσ.

The last equation but one depends upon the interchange of the summation indices µ and ν, i. e. merely on a change of notation.
_Antisymmetrical tensors_. A contravariant or a covariant tensor of the second, third, or fourth rank is said to be antisymmetrical if two components, which are obtained the one from the other by the interchange of two indices, are equal and of opposite sign. The tensor Aµν, or the tensor Aµν, is therefore antisymmetrical if always
Aµν = –Aνµ,
(15)
or respectively,
Aµν = –Aνµ.
(15a)
Of the sixteen components Aµν, the four components Aµµ vanish; the rest are equal and of opposite sign in pairs, so that there are only six components numerically different (a six-vector). Similarly we see that the antisymmetrical tensor of the third rank Aµνσ has only four numerically different components, while the antisymmetrical tensor Aµνστ has only one. There are no antisymmetrical tensors of higher rank than the fourth in a continuum of four dimensions.
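The component counts just given are binomial coefficients: the diagonal components vanish and the rest pair off with opposite signs, leaving one independent component per unordered choice of distinct indices. A small sketch:

```python
from math import comb

# Number of numerically different components of an antisymmetric tensor of
# the given rank in an n-dimensional continuum: C(n, rank).  For rank > n
# the count is zero, matching the remark that no antisymmetric tensors of
# rank higher than the fourth exist in four dimensions.
def independent_components(n: int, rank: int) -> int:
    return comb(n, rank)
```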
7. MULTIPLICATION OF TENSORS
_Outer multiplication of tensors_. From the components of a tensor of rank _n_ and of a tensor of rank _m_ we obtain the components of a tensor of rank _n + m_ by multiplying each component of the one tensor by each component of the other. Thus, for example, the tensors T arise out of the tensors A and B of different kinds:

T_{µνσ} = A_{µν} B_σ,  T^{αβγδ} = A^{αβ} B^{γδ},  T^{γδ}_{αβ} = A_{αβ} B^{γδ}.
The proof of the tensor character of T is given directly by the representations (8), (10), (12), or by the laws of transformation (9), (11), (13). The equations (8), (10), (12) themselves are examples of outer multiplication of tensors of the first rank.
_"Contraction" of a mixed tensor_. From any mixed tensor we may form a tensor whose rank is less by two, by equating an index of covariant with one of contravariant character, and summing with respect to this index ("contraction"). Thus, for example, from the mixed tensor of the fourth rank A^{στ}_{µν}, we obtain the mixed tensor of the second rank

A^τ_µ = A^{στ}_{µσ}  (summed, as always, over the doubled index),

and from this, by a second contraction, the tensor of rank zero

A = A^µ_µ = A^{στ}_{στ}.
The proof that the result of contraction really possesses tensor character is given either by the representation of a tensor according to the generalization of (12) in combination with (6), or by the generalization of (13).
_Inner and mixed multiplication of tensors_. These consist in a combination of outer multiplication with contraction.
_Examples_. From the covariant tensor of the second rank Aµν and the contravariant tensor of the first rank Bσ we form by outer multiplication the mixed tensor

D^σ_{µν} = A_{µν} B^σ.

On contraction with respect to the indices ν and σ, we obtain the covariant four-vector

D_µ = A_{µν} B^ν.

This we call the inner product of the tensors Aµν and Bσ. Analogously, from the tensors Aµν and Bστ we form by outer multiplication and double contraction the inner product Aµν Bµν. By outer multiplication and one contraction we obtain from Aµν and Bστ the mixed tensor of the second rank Dτµ = Aµν Bντ. This operation may aptly be characterized as a mixed one, being "outer" with respect to the indices µ and τ, and "inner" with respect to the indices ν and σ.
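The outer product, contraction, and inner product of this paragraph map directly onto numpy's `einsum`, whose repeated-index rule is exactly the summation convention of section 5. A sketch with arbitrary (hypothetical) component values:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))      # covariant A_{mu nu}
B = rng.normal(size=4)           # contravariant B^sigma
C = rng.normal(size=(4, 4))      # contravariant B^{sigma tau}

outer = np.einsum('mn,s->mns', A, B)    # outer product: mixed tensor of rank 3
D = np.einsum('mn,n->m', A, B)          # inner product: contraction over nu = sigma
D_alt = np.einsum('mnn->m', outer)      # same vector, by contracting the outer product
scalar = np.einsum('mn,mn->', A, C)     # double contraction: tensor of rank zero
```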
We now demonstrate a proposition which is often useful as evidence of tensor character. From what has just been explained, Aµν Bµν is a scalar if Aµν and Bστ are tensors. But we may also make the following assertion: if Aµν Bµν is a scalar _for any choice of the tensor_ Bµν, then Aµν has tensor character. For, by hypothesis, for any substitution,
A'στ B'στ = Aµν Bµν.
but by an inversion of (9),

Bµν = (∂xµ/∂x′σ)(∂xν/∂x′τ) B′στ.

This, inserted in the above equation, gives

[ A′στ − (∂x′σ/∂xµ)(∂x′τ/∂xν) Aµν ] B′στ = 0.

This can only be satisfied for arbitrary values of the B′στ if the bracket vanishes. The result then follows by equation (11). This rule applies correspondingly to tensors of any rank and character, and the proof is analogous in all cases.
The rule may also be demonstrated in this form: if Bµ and Cν are any vectors, and if, for all values of these, the inner product Aµν Bµ Cν is a scalar, then Aµν is a covariant tensor. This latter proposition holds good even if only the more special assertion is correct, that with any choice of the four-vector Bµ the inner product Aµν Bµ Bν is a scalar, if in addition it is known that Aµν satisfies the condition of symmetry Aµν = Aνµ. For by the method given above we prove the tensor character of (Aµν + Aνµ), and from this the tensor character of Aµν follows on account of the symmetry. This also admits of easy generalization to the case of covariant and contravariant tensors of any rank.
Finally, there follows from what has been proved this law, which may likewise be generalized for any tensors: if for any choice of the four-vector Bν the quantities Aµν Bν form a tensor of the first rank, then Aµν is a tensor of the second rank. For, if Cµ is any four-vector, then on account of the tensor character of Aµν Bν, the inner product Aµν Bν Cµ is a scalar for any choice of the two four-vectors Bν and Cµ. From this the proposition follows.
8. SOME ASPECTS OF THE FUNDAMENTAL TENSOR _g_ µν
_The covariant fundamental tensor_. In the invariant expression for the square of the line element,
_ds_ 2 = _g_ µν _dx_ µ _dx_ ν ,
the part played by the _dx_ µ is that of a contravariant vector which may be chosen at will. Since further _g_ µν = _g_ νµ, it follows from the considerations of the preceding paragraph that _g_ µν is a covariant tensor of the second rank. We call it the "fundamental tensor". In what follows we deduce some properties of this tensor which, it is true, apply to any tensor of the second rank. But as the fundamental tensor plays a special part in our theory, which has its physical basis in the peculiar effects of gravitation, it so happens that the relations to be developed are of importance to us only in the case of the fundamental tensor.
_The contravariant fundamental tensor_. If in the determinant formed by the elements _g_ µν we take the co-factor of each of the _g_ µν and divide it by the determinant _g_ = | _g_ µν|, we obtain certain quantities _g_ µν (= _g_ νµ) which, as we shall demonstrate, form a contravariant tensor.
By a known property of determinants,
_g_ µσ _g_ νσ = δνµ ,
(16)
where the symbol δµν denotes 1 or 0, according as µ = ν or µ ≠ ν.
Instead of the above expression for _ds_ 2 we may thus write
_g_ µσ δσν _dx_ µ _dx_ ν
or, by (16),
_g_ µσ _g_ ντ _g_ στ _dx_ µ _dx_ ν .
But, by the multiplication rules of the preceding paragraph, the quantities
_d_ ξσ = _g_ µσ _dx_ µ
form a covariant four-vector, and in fact an arbitrary vector, since the _dx_ µ are arbitrary. Introducing this into our expression, we obtain
_ds_ 2 = _g_ στ _d_ ξσ _d_ ξτ .
Since this, with the arbitrary choice of the vector _d_ ξσ, is a scalar, and _g_ στ by its definition is symmetrical in the indices σ and τ, it follows from the results of the preceding paragraph that _g_ στ is a contravariant tensor.
It further follows from (16) that δνµ is also a tensor, which we may call the mixed fundamental tensor.
_The determinant of the fundamental tensor_. By the rule for the multiplication of determinants,
| _g_ µα _g_ αν| = | _g_ µα| × | _g_ αν|.
On the other hand,
| _g_ µα _g_ αν| = |δµν| = 1.
It therefore follows that
| _g_ µν| × | _g_ µν| = 1.
(17)
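Since the contravariant fundamental tensor, as co-factors divided by the determinant, is simply the matrix inverse, (16) and (17) can be verified directly. A numpy sketch with hypothetical metric values:

```python
import numpy as np

# A symmetric example metric g_{mu nu} (values chosen arbitrarily).
g_cov = np.array([[-1.0, 0.1, 0.0, 0.0],
                  [ 0.1,-1.0, 0.0, 0.0],
                  [ 0.0, 0.0,-1.0, 0.0],
                  [ 0.0, 0.0, 0.0, 1.0]])
g_con = np.linalg.inv(g_cov)        # the contravariant g^{mu nu}

delta = g_cov @ g_con               # (16): the mixed fundamental tensor delta
product_of_dets = np.linalg.det(g_cov) * np.linalg.det(g_con)   # (17): unity
```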
_The volume scalar_. We seek first the law of transformation of the determinant _g_ = | _g_ µν|. In accordance with (11),

g′ = | (∂xµ/∂x′σ)(∂xν/∂x′τ) gµν |.

Hence, by a double application of the rule for the multiplication of determinants, it follows that

g′ = | ∂xµ/∂x′σ | · | ∂xν/∂x′τ | · | gµν | = | ∂xµ/∂x′σ |² g,

or

√g′ = | ∂xµ/∂x′σ | √g.

On the other hand, the law of transformation of the element of volume
_d_ τ = ∫ _dx_ 1 _dx_ 2 _dx_ 3 _dx_ 4
is, in accordance with the theorem of Jacobi,

dτ′ = | ∂x′σ/∂xµ | dτ.

By multiplication of the two last equations, we obtain
√g′ dτ′ = √g dτ.
(18)
In place of √ _g_ , we introduce in what follows the quantity √– _g_ , which is always real on account of the hyperbolic character of the space-time continuum. The invariant √– _g_ _d_ τ is equal to the magnitude of the four-dimensional element of volume in the "local" system of reference, as measured with rigid rods and clocks in the sense of the special theory of relativity.
_Note on the character of the space-time continuum_. Our assumption that the special theory of relativity can always be applied to an infinitely small region implies that _ds_ 2 can always be expressed in accordance with (1) by means of real quantities _d_ X1 ... _d_ X4. If we denote by _d_ τ0 the "natural" element of volume _d_ X1 _d_ X2 _d_ X3 _d_ X4, then
_d_ τ0 = √– _g_ _d_ τ.
(18a)
If √– _g_ were to vanish at a point of the four-dimensional continuum, it would mean that at this point an infinitely small "natural" volume would correspond to a finite volume in the co-ordinates. Let us assume that this is never the case. Then _g_ cannot change sign. We shall assume that, in the sense of the special theory of relativity, _g_ always has a finite negative value. This is a hypothesis as to the physical nature of the continuum under consideration, and at the same time a convention as to the choice of co-ordinates.
But if – _g_ is always finite and positive, it is natural to settle the choice of co-ordinates a posteriori in such a way that this quantity is always equal to unity. We shall see later that by such a restriction of the choice of co-ordinates it is possible to achieve an important simplification of the laws of nature.
In place of (18), we then have simply _d_ τ′ = _d_ τ, from which, in view of Jacobi's theorem, it follows that
| ∂x′σ/∂xµ | = 1.
(19)
Thus, with this choice of co-ordinates, only substitutions for which the determinant is unity are permissible.
But it would be erroneous to believe that this step indicates a partial abandonment of the general postulate of relativity. We do not ask "What are the laws of nature which are covariant in the face of all substitutions for which the determinant is unity?", but our question is "What are the generally covariant laws of nature?" It is not until we have formulated these that we simplify their expression by a particular choice of the system of reference.
_The formation of new tensors by means of the fundamental tensor_. Inner, outer, and mixed multiplication of a tensor by the fundamental tensor give tensors of different character and rank. For example,
Aµ = _g_ µσ Aσ ,
A = _g_ µν Aµν.
The following forms may be specially noted:
Aµν = _g_ µα _g_ νβ Aαβ ,
Aµν = _g_ µα _g_ νβ Aαβ.
(the "complements" of covariant and contravariant tensors respectively), and
Bµν = _g_ µν _g_ αβ Aαβ .
We call Bµν the reduced tensor associated with Aµν. Similarly,
Bµν = _g_ µν _g_ αβ Aαβ.
It may be noted that _g_ µν is nothing else than the complement of _g_ µν, since
_g_ µα _g_ νβ _g_ αβ = _g_ µα δνα = _g_ µν.
9. THE EQUATION OF THE GEODETIC LINE. THE MOTION OF A PARTICLE
As the line element _ds_ is defined independently of the system of co-ordinates, the line drawn between two points P and P′ of the four-dimensional continuum in such a way that ∫ _ds_ is stationary (a geodetic line) has a meaning which is likewise independent of the choice of co-ordinates. Its equation is
δ ∫ ds = 0.
(20)
Carrying out the variation in the usual way, we obtain from this equation four differential equations which define the geodetic line; this operation will be inserted here for the sake of completeness. Let λ be a function of the co-ordinates _x_ ν, and let this define a family of surfaces which intersect the required geodetic line as well as all the lines in immediate proximity to it which are drawn through the points P and P′. Any such line may then be supposed given by expressing its co-ordinates _x_ ν as functions of λ. Let the symbol δ indicate the transition from a point of the required geodetic to the point corresponding to the same λ on a neighbouring line. Then for (20) we may substitute
δ ∫ w dλ = 0,  where  w² = gµν (dxµ/dλ)(dxν/dλ).
(20a)
But since

δw = (1/2w) { (∂gµν/∂xσ)(dxµ/dλ)(dxν/dλ) δxσ + 2 gµν (dxµ/dλ) δ(dxν/dλ) }

and

δ(dxν/dλ) = d(δxν)/dλ,

we obtain from (20a), after a partial integration,

∫ κσ δxσ dλ = 0,

where

κσ = d/dλ { (gµσ/w)(dxµ/dλ) } − (1/2w)(∂gµν/∂xσ)(dxµ/dλ)(dxν/dλ).
(20b)
Since the values of δ _x_ σ are arbitrary, it follows from this that
κσ = 0
(20c)
are the equations of the geodetic line.
If _ds_ does not vanish along the geodetic line we may choose the "length of the arc" _s_ , measured along the geodetic line, as the parameter λ. Then _w_ = 1, and in place of (20c) we obtain

gµσ d²xµ/ds² + (∂gµσ/∂xν)(dxµ/ds)(dxν/ds) − ½ (∂gµν/∂xσ)(dxµ/ds)(dxν/ds) = 0,

or, by a mere change of notation,

gσν d²xν/ds² + [µν, σ] (dxµ/ds)(dxν/ds) = 0,
(20d)

where, following Christoffel, we have written
[µν, σ] = ½ ( ∂gµσ/∂xν + ∂gνσ/∂xµ − ∂gµν/∂xσ ).
(21)
Finally, if we multiply (20d) by _g_ στ (outer multiplication with respect to τ, inner with respect to σ), we obtain the equations of the geodetic line in the form
d²xτ/ds² + {µν, τ} (dxµ/ds)(dxν/ds) = 0,
(22)
where, following Christoffel, we have set
{µν, τ} = _g_ τα [µν, α].
(23)
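The two Christoffel symbols of (21) and (23) can be computed mechanically from any metric. A minimal numpy sketch, using the flat plane in polar co-ordinates as an assumed example metric and central differences for the metric derivatives:

```python
import numpy as np

def g_polar(x):
    # Flat plane in polar co-ordinates (x1 = r, x2 = theta):
    # ds^2 = dr^2 + r^2 dtheta^2.
    r = x[0]
    return np.array([[1.0, 0.0], [0.0, r * r]])

def christoffel(gfunc, x, h=1e-6):
    # [mu nu, sigma] from (21) and {mu nu, tau} from (23), with the metric
    # derivatives dg[s, m, n] = d g_{mn} / d x_s taken by central differences.
    x = np.asarray(x, dtype=float)
    n = len(x)
    dg = np.zeros((n, n, n))
    for s in range(n):
        step = np.zeros(n); step[s] = h
        dg[s] = (gfunc(x + step) - gfunc(x - step)) / (2 * h)
    first = np.zeros((n, n, n))                  # first[m, n, s] = [mn, s]
    for m in range(n):
        for nu in range(n):
            for s in range(n):
                first[m, nu, s] = 0.5 * (dg[nu, m, s] + dg[m, nu, s]
                                         - dg[s, m, nu])
    ginv = np.linalg.inv(gfunc(x))
    return np.einsum('ta,mna->mnt', ginv, first)  # {mn, t} of (23)

G = christoffel(g_polar, [2.0, 0.3])
```

For this metric the nonzero symbols of the second kind are {22, 1} = −r and {12, 2} = {21, 2} = 1/r, which the numerical result reproduces.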
10. THE FORMATION OF TENSORS BY DIFFERENTIATION
With the help of the equation of the geodetic line we can now easily deduce the laws by which new tensors can be formed from old by differentiation. By this means we are for the first time able to formulate generally covariant differential equations. We reach this goal by repeated application of the following simple law:
If in our continuum a curve is given, whose points are specified by the arcual distance _s_ measured from a fixed point on the curve, and if, further, φ is an invariant function of space, then _d_ φ/ _ds_ is also an invariant. The proof lies in this, that _ds_ is an invariant as well as _d_ φ.
As

dφ/ds = (∂φ/∂xµ)(dxµ/ds),

therefore

ψ = (∂φ/∂xµ)(dxµ/ds)

is also an invariant, and an invariant for all curves starting from a point of the continuum, that is, for any choice of the vector _dx_ µ. Hence it immediately follows that
Aµ = ∂φ/∂xµ
(24)
is a covariant four-vector: the "gradient" of φ.
By our rule, the differential quotient

χ = dψ/ds

taken on a curve is similarly an invariant. Inserting the value of ψ, we obtain in the first place

χ = (∂²φ/∂xµ∂xν)(dxµ/ds)(dxν/ds) + (∂φ/∂xµ)(d²xµ/ds²).

The existence of a tensor cannot be deduced from this forthwith. But if we may take the curve along which we have differentiated to be a geodetic, we obtain on substitution for _d_ 2 _x_ ν/ _ds_ 2 from (22),

χ = ( ∂²φ/∂xµ∂xν − {µν, τ} ∂φ/∂xτ ) (dxµ/ds)(dxν/ds).

Since we may interchange the order of the differentiations, and since by (23) and (21) {µν, τ} is symmetrical in µ and ν, it follows that the expression in brackets is symmetrical in µ and ν. Since a geodetic line can be drawn in any direction from a point of the continuum, and therefore _dx_ µ/ _ds_ is a four-vector with the ratio of its components arbitrary, it follows from the results of paragraph 7 that
Aµν = ∂²φ/∂xµ∂xν − {µν, τ} ∂φ/∂xτ
(25)
is a covariant tensor of the second rank. We have therefore come to this result: from the covariant tensor of the first rank

Aµ = ∂φ/∂xµ

we can, by differentiation, form a covariant tensor of the second rank
Aµν = ∂Aµ/∂xν − {µν, τ} Aτ.
(26)
We call the tensor Aµν the "extension" (covariant derivative) of the tensor Aµ. In the first place we can readily show that the operation leads to a tensor even if the vector Aµ cannot be represented as a gradient. To see this, we first observe that

ψ (∂φ/∂xµ)

is a covariant vector, if ψ and φ are scalars. The sum of four such terms,

Sµ = ψ(1) ∂φ(1)/∂xµ + ... + ψ(4) ∂φ(4)/∂xµ,

is also a covariant vector, if ψ(1), φ(1) ... ψ(4), φ(4) are scalars. But it is clear that any covariant vector can be represented in the form Sµ. For, if Aµ is a vector whose components are any given functions of the _x_ ν, we have only to put (in terms of the selected system of co-ordinates)
_ψ_ (1) = A1, _φ_ (1) = _x_ 1,
_ψ_ (2) = A2, _φ_ (2) = _x_ 2,
_ψ_ (3) = A3, _φ_ (3) = _x_ 3,
_ψ_ (4) = A4, _φ_ (4) = _x_ 4,
in order to ensure that Sµ shall be equal to Aµ.
Therefore, in order to demonstrate that Aµν is a tensor if _any_ covariant vector is inserted on the right-hand side for Aµ, we only need show that this is so for the vector Sµ. But for this latter purpose it is sufficient, as a glance at the right-hand side of (26) teaches us, to furnish the proof for the case

Aµ = ψ ∂φ/∂xµ.

Now the right-hand side of (25) multiplied by ψ,

ψ ∂²φ/∂xµ∂xν − ψ {µν, τ} ∂φ/∂xτ,

is a tensor. Similarly

(∂ψ/∂xν)(∂φ/∂xµ),

being the outer product of two vectors, is a tensor. By addition there follows the tensor character of

∂/∂xν ( ψ ∂φ/∂xµ ) − {µν, τ} ( ψ ∂φ/∂xτ ).

As a glance at (26) will show, this completes the demonstration for the vector

ψ ∂φ/∂xµ,

and consequently, from what has already been proved, for any vector Aµ.
By means of the extension of the vector, we may easily define the "extension" of a covariant tensor of any rank. This operation is a generalization of the extension of a vector. We restrict ourselves to the case of a tensor of the second rank, since this suffices to give a clear idea of the law of formation.
As has already been observed, any covariant tensor of the second rank can be represented as a sum of tensors of the type Aµ Bν. It will therefore be sufficient to deduce the expression for the extension of a tensor of this special type. By (26) the expressions

∂Aµ/∂xσ − {σµ, τ} Aτ,  ∂Bν/∂xσ − {σν, τ} Bτ

are tensors. On outer multiplication of the first by Bν, and of the second by Aµ, we obtain in each case a tensor of the third rank. By adding these, we have the tensor of the third rank
Aµνσ = ∂Aµν/∂xσ − {σµ, τ} Aτν − {σν, τ} Aµτ,
(27)
where we have put Aµν = Aµ Bν. As the right-hand side of (27) is linear and homogeneous in the Aµν and their first derivatives, this law of formation leads to a tensor, not only in the case of a tensor of the type Aµ Bν but also in the case of a sum of such tensors, i. e. in the case of any covariant tensor of the second rank. We call Aµνσ the extension of the tensor Aµν.
It is clear that (26) and (24) concern only special cases of extension (the extension of tensors of rank one and zero respectively).
In general, all special laws of formation of tensors are included in (27) in combination with the multiplication of tensors.
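A property of (27) that section 12 will rely on is that the extension of the fundamental tensor itself vanishes identically. This can be checked numerically; the sketch below assumes the polar-co-ordinate metric of the plane, whose nonzero Christoffel symbols of the second kind are {22, 1} = −r and {12, 2} = {21, 2} = 1/r:

```python
import numpy as np

r = 1.7                                       # an arbitrary sample point
g = np.array([[1.0, 0.0], [0.0, r * r]])      # polar metric: diag(1, r^2)
dg = np.zeros((2, 2, 2))                      # dg[s] = d g_{mu nu} / d x_s
dg[0] = np.array([[0.0, 0.0], [0.0, 2 * r]])  # only g_22 = r^2 depends on r

G = np.zeros((2, 2, 2))                       # G[mu, nu, tau] = {mu nu, tau}
G[1, 1, 0] = -r
G[0, 1, 1] = G[1, 0, 1] = 1.0 / r

# Extension of g_{mu nu} per (27); every component should vanish.
ext = np.empty((2, 2, 2))
for m in range(2):
    for n in range(2):
        for s in range(2):
            ext[m, n, s] = (dg[s, m, n]
                            - np.dot(G[s, m], g[:, n])   # - {s m, t} g_{t n}
                            - np.dot(G[s, n], g[m, :]))  # - {s n, t} g_{m t}
```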
11. SOME CASES OF SPECIAL IMPORTANCE
_The fundamental tensor_. We shall first prove some lemmas which will be useful hereafter. By the rule for the differentiation of determinants,
dg = g^{µν} g dg_{µν} = − g_{µν} g dg^{µν}.
(28)
The last member is obtained from the last but one, if we bear in mind that g_{µν} g^{µ′ν} = δ_µ^{µ′}, so that g_{µν} g^{µν} = 4, and consequently

g^{µν} dg_{µν} + g_{µν} dg^{µν} = 0.

From (28) it follows that
(1/√−g) ∂√−g/∂xσ = ½ ∂ lg(−g)/∂xσ = ½ g^{µν} ∂g_{µν}/∂xσ = − ½ g_{µν} ∂g^{µν}/∂xσ.
(29)
Further, from _g_ µσ _g_ νσ = δµν, it follows on differentiation that
g_{µσ} dg^{νσ} = − g^{νσ} dg_{µσ},  g_{µσ} ∂g^{νσ}/∂xλ = − g^{νσ} ∂g_{µσ}/∂xλ.
(30)
From these, by mixed multiplication by _g_ στ and _g_ νλ respectively, and a change of notation for the indices, we have
dg^{µν} = − g^{µα} g^{νβ} dg_{αβ},  ∂g^{µν}/∂xσ = − g^{µα} g^{νβ} ∂g_{αβ}/∂xσ,
(31)
and
dg_{µν} = − g_{µα} g_{νβ} dg^{αβ},  ∂g_{µν}/∂xσ = − g_{µα} g_{νβ} ∂g^{αβ}/∂xσ.
(32)
The relation (31) admits of a transformation of which we shall also frequently make use. From (21),
∂g_{αβ}/∂xσ = [ασ, β] + [βσ, α].
(33)
Inserting this in the second formula of (31), we obtain, in view of (23),
∂g^{µν}/∂xσ = − ( g^{µτ} {τσ, ν} + g^{ντ} {τσ, µ} ).
(34)
Substituting the right-hand side of (34) in (29), we have
(1/√−g) ∂√−g/∂xσ = {µσ, µ}.
(29a)
_The "divergence" of a contravariant vector_. If we take the inner product of (26) by the contravariant fundamental tensor _g_ µν, the right-hand side, after a transformation of the first term, assumes the form
In accordance with (31) and (29), the last term of this expression may be written
Since the symbols of the summation indices are immaterial, the first two terms of this expression cancel the second of the one above. If we then write _g_ µνAµ = Aν, so that Aν, like Aµ, is an arbitrary vector, we finally obtain
Φ = (1/√−g) ∂(√−g A^ν)/∂xν.
(35)
This scalar is the divergence of the contravariant vector Aν.
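Formula (35) can be exercised in a simpler, positive-definite setting. The sketch below is the Euclidean analogue in plane polar co-ordinates (where √g = r replaces √−g) applied to an assumed example field, the radial field A^r = r, A^θ = 0, i.e. (x, y) in Cartesian components, whose divergence is the constant 2:

```python
import numpy as np

def divergence_polar(Ar, Atheta, r, theta, h=1e-6):
    # (1 / sqrt(g)) * d(sqrt(g) A^nu)/dx_nu with sqrt(g) = r, the derivatives
    # taken by central differences.
    def flux_r(rr): return rr * Ar(rr, theta)      # sqrt(g) * A^r
    def flux_t(tt): return r * Atheta(r, tt)       # sqrt(g) * A^theta
    dr = (flux_r(r + h) - flux_r(r - h)) / (2 * h)
    dt = (flux_t(theta + h) - flux_t(theta - h)) / (2 * h)
    return (dr + dt) / r

div = divergence_polar(lambda r, t: r, lambda r, t: 0.0, 1.3, 0.7)
```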
_The "curl" of a covariant vector_. The second term in (26) is symmetrical in the indices µ and ν. Hence Aµν − Aνµ is a particularly simply constructed antisymmetrical tensor. We obtain
Bµν = ∂Aµ/∂xν − ∂Aν/∂xµ.
(36)
_Antisymmetrical extension of a six-vector_. Applying (27) to an antisymmetrical tensor of the second rank Aµν, forming in addition the two equations which arise through cyclic permutations of the indices, and adding these three equations, we obtain the tensor of the third rank
Bµνσ = Aµνσ + Aνσµ + Aσµν = ∂Aµν/∂xσ + ∂Aνσ/∂xµ + ∂Aσµ/∂xν,
(37)
which it is easy to prove is antisymmetrical.
_The divergence of a six-vector_. Taking the mixed product of (27) by _g_ µα _g_ νβ, we also obtain a tensor. The first term on the right-hand side of (27) may be written in the form
If we write A^{αβ}_σ for _g_ µα _g_ νβ Aµνσ and A^{αβ} for _g_ µα _g_ νβ Aµν, and in the transformed first term replace
by their values given by (34), there results from the right-hand side of (27) an expression consisting of seven terms, of which four cancel, and there remains
A^{αβ}_σ = ∂A^{αβ}/∂xσ + {σγ, α} A^{γβ} + {σγ, β} A^{αγ}.
(38)
This is the expression for the extension of a contravariant tensor of the second rank, and corresponding expressions can also be formed for the extension of contravariant tensors of higher and lower rank.
We note that in an analogous way we may also form the extension of a mixed tensor:
A^α_{µσ} = ∂A^α_µ/∂xσ − {σµ, τ} A^α_τ + {στ, α} A^τ_µ.
(39)
On contraction of (38) with respect to the indices β and σ (inner multiplication by δ^σ_β), we obtain the vector
On account of the symmetry of {βγ, α} with respect to the indices β and γ, the third term on the right-hand side vanishes, if A^{αβ} is, as we shall assume, an antisymmetrical tensor. The second term allows itself to be transformed in accordance with (29a). Thus we obtain
A^α = (1/√−g) ∂(√−g A^{αβ})/∂xβ.
(40)
This is the expression for the divergence of a contravariant six-vector.
_The divergence of a mixed tensor of the second rank_. Contracting (39) with respect to the indices α and σ, and taking (29a) into consideration, we obtain
√−g A_µ = ∂(√−g A^σ_µ)/∂xσ − {σµ, τ} √−g A^σ_τ.
(41)
If we introduce into the last term the contravariant tensor A^{ρσ} = g^{ρτ} A^σ_τ, it takes the form

− [σµ, ρ] √−g A^{ρσ}.

If, further, the tensor A^{ρσ} is symmetrical, this reduces to

− ½ √−g (∂g_{ρσ}/∂xµ) A^{ρσ}.

Had we introduced, instead of A^{ρσ}, the covariant tensor A_{ρσ} = g_{ρα} g_{σβ} A^{αβ}, which is also symmetrical, the last term, by virtue of (31), would have assumed the form

+ ½ √−g (∂g^{ρσ}/∂xµ) A_{ρσ}.

In the case of symmetry in question, (41) may therefore be replaced by the two forms
(41a)
(41b)
que tenemos que emplear más adelante.
12. EL TENSOR DE RIEMANN-CHRISTOFFEL
Buscamos ahora el tensor que puede obtenerse a partir del tensor fundamental solamente, por diferenciación. A primera vista, la solución parece obvia. Coloquemos el tensor fundamental de las _g_ µν en (27) en lugar de cualquier tensor dado Aµν, y así tenemos un nuevo tensor, a saber, la extensión del tensor fundamental. Pero fácilmente nos convencemos de que esta extensión es idénticamente nula. Alcanzamos nuestro objetivo, sin embargo, de la siguiente manera. En (27) colocamos
i. e. la extensión del cuadrivector Aµ. Entonces (nombrando los índices de forma algo diferente) tenemos el tensor de tercer rango
Esta expresión sugiere formar el tensor Aµστ – Aµτσ. Pues, si lo hacemos, los términos siguientes de la expresión para Aµστ cancelan a los de Aµτσ, el primero, el cuarto y el miembro correspondiente al último término entre paréntesis cuadrados; puesto que todos estos son simétricos en σ y τ. Lo mismo es válido para la suma del segundo y tercer términos. Así obtenemos
$$A_{\mu\sigma\tau} - A_{\mu\tau\sigma} = B^\rho_{\mu\sigma\tau}\,A_\rho \tag{42}$$
donde
$$B^\rho_{\mu\sigma\tau} = \frac{\partial}{\partial x_\sigma}\{\mu\tau,\,\rho\} - \frac{\partial}{\partial x_\tau}\{\mu\sigma,\,\rho\} + \{\mu\tau,\,\alpha\}\{\alpha\sigma,\,\rho\} - \{\mu\sigma,\,\alpha\}\{\alpha\tau,\,\rho\} \tag{43}$$
La característica esencial del resultado es que en el segundo miembro de (42) las Aρ aparecen solas, sin sus derivadas. Del carácter tensorial de Aµστ – Aµτσ junto con el hecho de que Aρ es un vector arbitrario, se sigue, por el enunciado 7, que Bρµστ es un tensor (el tensor de Riemann-Christoffel).
La importancia matemática de este tensor es la siguiente: Si el continuo es de naturaleza tal que existe un sistema de coordenadas con referencia al cual las _g_ µν son constantes, entonces todas las Bρµστ se anulan. Si escogemos cualquier nuevo sistema de coordenadas en lugar de las originales, las _g_ µν allí referidas no serán constantes, pero a consecuencia de su carácter tensorial las componentes transformadas de Bρµστ seguirán siendo nulas en el nuevo sistema. Así, la anulación del tensor de Riemann es una condición necesaria para que, por una elección apropiada del sistema de referencia, las _g_ µν puedan ser constantes. En nuestro problema esto corresponde al caso en el que, con una elección adecuada del sistema de referencia, la teoría de la relatividad especial es válida para una región finita del continuo.
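La condición de anulación del tensor de Riemann puede ilustrarse con un caso elemental: el plano euclídeo en coordenadas polares, donde las _g_ µν no son constantes y, sin embargo, todas las componentes Bρµστ se anulan. El siguiente esbozo con `sympy` (los nombres de las funciones auxiliares son ilustrativos; se emplea una convención de signos estándar para el tensor de curvatura) lo comprueba simbólicamente:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
# Plano euclídeo en coordenadas polares: ds^2 = dr^2 + r^2 dtheta^2
g = sp.diag(1, r**2)
ginv = g.inv()

def gamma(a, b, c):
    # Símbolo de Christoffel de segunda especie {bc, a}
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(2))

def riemann(rho, mu, sig, tau):
    # Componente B^rho_{mu sigma tau} con una convención de signos estándar
    expr = sp.diff(gamma(rho, mu, tau), x[sig]) - sp.diff(gamma(rho, mu, sig), x[tau])
    expr += sum(gamma(rho, sig, a) * gamma(a, mu, tau) - gamma(rho, tau, a) * gamma(a, mu, sig)
                for a in range(2))
    return sp.simplify(expr)

# Las g no son constantes (g_22 = r^2) y, sin embargo, toda la curvatura se anula
componentes = [riemann(a, b, c, d)
               for a in range(2) for b in range(2) for c in range(2) for d in range(2)]
print(all(comp == 0 for comp in componentes))
```

La elección de la convención de signos no afecta a la conclusión: la anulación de todas las componentes es independiente de ella.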
Contrayendo (43) con respecto a los índices τ y ρ obtenemos el tensor covariante de segundo rango
$$G_{\mu\nu} = B^\rho_{\mu\nu\rho} = R_{\mu\nu} + S_{\mu\nu}$$
donde
$$R_{\mu\nu} = -\frac{\partial}{\partial x_\alpha}\{\mu\nu,\,\alpha\} + \{\mu\alpha,\,\beta\}\{\nu\beta,\,\alpha\},\qquad S_{\mu\nu} = \frac{\partial^2 \lg\sqrt{-g}}{\partial x_\mu\,\partial x_\nu} - \{\mu\nu,\,\alpha\}\,\frac{\partial \lg\sqrt{-g}}{\partial x_\alpha} \tag{44}$$
_Nota sobre la elección de coordenadas_. Ya se ha observado en el enunciado 8, en conexión con la ecuación (18a), que la elección de las coordenadas puede hacerse ventajosamente de modo que √− _g_ = 1. Una ojeada a las ecuaciones obtenidas en las dos últimas secciones muestra que con tal elección las leyes de formación de tensores experimentan una importante simplificación. Esto se aplica en particular a Gµν, el tensor que acabamos de desarrollar, que desempeña un papel fundamental en la teoría a establecer. Pues esta particularización de la elección de coordenadas produce la anulación de Sµν, de modo que el tensor Gµν se reduce a Rµν.
Debido a esto, en lo sucesivo daré todas las relaciones en la forma simplificada que trae consigo esta particularización de la elección de coordenadas. Será entonces fácil volver a las ecuaciones _generalmente_ covariantes si en un caso especial parece deseable.
#### C. TEORÍA DEL CAMPO GRAVITATORIO
13. ECUACIONES DE MOVIMIENTO DE UN PUNTO MATERIAL EN EL CAMPO GRAVITATORIO. EXPRESIÓN PARA LAS COMPONENTES DE CAMPO DE LA GRAVITACIÓN
Un cuerpo libremente movible no sometido a fuerzas externas se mueve, según la teoría de la relatividad especial, uniformemente y en línea recta. Éste es también el caso, según la teoría de la relatividad general, para una parte del espacio tetradimensional en la que el sistema de coordenadas K0 puede escogerse, y se escoge, de modo que las _g_ µν tienen los valores constantes especiales dados en (4).
Si consideramos precisamente este movimiento desde cualquier sistema de coordenadas escogido K1, el cuerpo, observado desde K1, se mueve, según las consideraciones de 2, en un campo gravitatorio. La ley de movimiento con respecto a K1 resulta sin dificultad de la siguiente consideración. Con respecto a K0 la ley de movimiento corresponde a una línea recta tetradimensional, i. e. a una línea geodésica. Ahora bien, puesto que la línea geodésica está definida independientemente del sistema de referencia, su ecuación será también la ecuación de movimiento del punto material con respecto a K1. Si hacemos
$$\Gamma^\tau_{\mu\nu} = -\{\mu\nu,\,\tau\} \tag{45}$$
la ecuación de movimiento del punto con respecto a K1 se convierte en
$$\frac{d^2 x_\tau}{ds^2} = \Gamma^\tau_{\mu\nu}\,\frac{dx_\mu}{ds}\,\frac{dx_\nu}{ds} \tag{46}$$
Planteamos ahora la hipótesis, que se sugiere inmediatamente, de que este sistema covariante de ecuaciones define también el movimiento del punto en el campo gravitatorio en el caso en que no hay ningún sistema de referencia K0 con respecto al cual la teoría de la relatividad especial es válida en una región finita. Tenemos más justificación para esta hipótesis por cuanto (46) contiene sólo derivadas primeras de las _g_ µν, entre las cuales no subsisten relaciones ni siquiera en el caso especial de la existencia de K0.
Si las Γτµν se anulan, entonces el punto se mueve uniformemente en una línea recta. Por lo tanto, estas cantidades condicionan la desviación del movimiento respecto de la uniformidad. Son las componentes del campo gravitatorio.
14. LAS ECUACIONES DE CAMPO DE LA GRAVITACIÓN EN AUSENCIA DE MATERIA
En lo sucesivo hacemos una distinción entre «campo gravitatorio» y «materia» de esta manera: denotamos todo salvo el campo gravitatorio como «materia». Por lo tanto, nuestro uso de la palabra incluye no sólo la materia en el sentido ordinario, sino también el campo electromagnético.
Nuestra próxima tarea es encontrar las ecuaciones de campo de la gravitación en ausencia de materia. Aquí aplicamos de nuevo el método empleado en la sección precedente al formular las ecuaciones de movimiento del punto material. Un caso especial en el que las ecuaciones requeridas deben ser satisfechas en cualquier caso es el de la teoría de la relatividad especial, en el que las _g_ µν tienen ciertos valores constantes. Sea éste el caso en un cierto espacio finito en relación con un sistema de coordenadas definido K0. Con respecto a este sistema todas las componentes del tensor de Riemann Bρµστ, definido en (43), se anulan. Para el espacio en consideración se anulan entonces también en cualquier otro sistema de coordenadas.
Así pues, las ecuaciones requeridas del campo gravitatorio libre de materia deben ser satisfechas en cualquier caso si todas las Bρµστ se anulan. Pero esta condición va demasiado lejos. En efecto, es evidente que, e. g., el campo gravitatorio generado por un punto material en su entorno no puede ciertamente ser «eliminado» por ninguna elección del sistema de coordenadas, i. e. no puede transformarse en el caso de _g_ µν constante.
Esto nos impulsa a exigir para el campo gravitatorio libre de materia que el tensor simétrico Gµν, derivado del tensor Bρµστ, se anule. Así, obtenemos diez ecuaciones para las diez cantidades _g_ µν, que se satisfacen en el caso especial de la anulación de todas las Bρµστ. Con la elección que hemos hecho de un sistema de coordenadas, y teniendo en cuenta (44), las ecuaciones para el campo libre de materia son
$$\frac{\partial \Gamma^\alpha_{\mu\nu}}{\partial x_\alpha} + \Gamma^\alpha_{\mu\beta}\,\Gamma^\beta_{\nu\alpha} = 0,\qquad \sqrt{-g} = 1 \tag{47}$$
Debe señalarse que existe sólo un mínimo de arbitrariedad en la elección de estas ecuaciones. Pues aparte de Gµν no hay ningún tensor de segundo rango que esté formado a partir de las _g_ µν y sus derivadas, no contenga derivadas de orden superior al segundo y sea lineal en las derivadas segundas.
Estas ecuaciones, que proceden, por el método de las puras matemáticas, del requisito de la teoría de la relatividad general, nos dan, en combinación con las ecuaciones de movimiento (46), en primera aproximación la ley de atracción de Newton, y en segunda aproximación la explicación del movimiento del perihelio del planeta Mercurio descubierto por Leverrier (tal como queda una vez que se han hecho correcciones para la perturbación). En mi opinión, estos hechos deben tomarse como una prueba convincente de la corrección de la teoría.
15. LA FUNCIÓN HAMILTONIANA PARA EL CAMPO GRAVITATORIO. LEYES DEL MOMENTO Y LA ENERGÍA
Para demostrar que las ecuaciones de campo corresponden a las leyes del momento y la energía, es más conveniente escribirlas en la siguiente forma hamiltoniana:
$$\delta\!\left\{\int H\,d\tau\right\} = 0,\qquad H = g^{\mu\nu}\,\Gamma^\alpha_{\mu\beta}\,\Gamma^\beta_{\nu\alpha},\qquad \sqrt{-g} = 1 \tag{47a}$$
donde, en la frontera de la región de integración tetradimensional finita que tenemos a la vista, las variaciones se anulan.
Primero tenemos que demostrar que la forma (47a) es equivalente a las ecuaciones (47). Para este fin consideramos H como una función de las _g_ µν y las _g_ µνσ (= ∂ _g_ µν/∂ _x_ σ).
Entonces, en primer lugar
Pero
Los términos que aparecen de los dos últimos términos entre paréntesis redondos son de signo diferente, y resultan uno de otro (puesto que la denominación de los índices de suma carece de importancia) a través del intercambio de los índices µ y β. Se cancelan mutuamente en la expresión para δH, porque están multiplicados por la cantidad Γαµβ que es simétrica respecto a los índices µ y β. Así pues, sólo queda por considerar el primer término entre paréntesis redondos, de modo que, teniendo en cuenta (31), obtenemos
Así pues
$$\frac{\partial H}{\partial g^{\mu\nu}} = -\Gamma^\alpha_{\mu\beta}\,\Gamma^\beta_{\nu\alpha},\qquad \frac{\partial H}{\partial g^{\mu\nu}_\sigma} = \Gamma^\sigma_{\mu\nu} \tag{48}$$
Realizando la variación en (47a), obtenemos en primer lugar
$$\frac{\partial}{\partial x_\alpha}\!\left(\frac{\partial H}{\partial g^{\mu\nu}_\alpha}\right) - \frac{\partial H}{\partial g^{\mu\nu}} = 0 \tag{47b}$$
que, debido a (48), coincide con (47), como había que demostrar.
Si multiplicamos (47b) por _g_ µνσ, entonces, puesto que
y, en consecuencia,
obtenemos la ecuación
o
$$\frac{\partial t^\alpha_\sigma}{\partial x_\alpha} = 0 \tag{49}$$
donde, debido a (48), la segunda ecuación de (47), y (34)
$$\kappa\,t^\alpha_\sigma = \frac{1}{2}\,\delta^\alpha_\sigma\,g^{\mu\nu}\,\Gamma^\lambda_{\mu\beta}\,\Gamma^\beta_{\nu\lambda} - g^{\mu\nu}\,\Gamma^\alpha_{\mu\beta}\,\Gamma^\beta_{\nu\sigma} \tag{50}$$
Hay que notar que _t_ ασ no es un tensor; por otra parte (49) se aplica a todos los sistemas de coordenadas para los que √–g = 1. Esta ecuación expresa la ley de conservación del momento y de la energía para el campo gravitatorio. Realmente la integración de esta ecuación sobre un volumen tridimensional V da las cuatro ecuaciones
$$\frac{d}{dx_4}\int t^4_\sigma\,dV = \oint\left(l\,t^1_\sigma + m\,t^2_\sigma + n\,t^3_\sigma\right)dS \tag{49a}$$
donde _l, m, n_ denotan los cosenos directores de la dirección de la normal interior en el elemento _d_ S de la superficie frontera (en el sentido de la geometría euclidiana). Reconocemos en esto la expresión de las leyes de conservación en su forma habitual. A las cantidades _t_ ασ las llamamos las «componentes de energía» del campo gravitatorio.
Ahora daré las ecuaciones (47) en una tercera forma, que es particularmente útil para una vívida comprensión de nuestro tema. Multiplicando las ecuaciones de campo (47) por _g_ νσ, éstas se obtienen en la forma «mixta». Notemos que
cuya magnitud, debido a (34), es igual a
o (con símbolos diferentes para los índices de suma)
El tercer término de esta expresión se cancela con el que aparece del segundo término de las ecuaciones de campo (47); utilizando la relación (50), el segundo término puede escribirse
donde _t_ = _t_ αα. Así pues, en lugar de las ecuaciones (47) obtenemos
$$\frac{\partial}{\partial x_\alpha}\!\left(g^{\sigma\beta}\,\Gamma^\alpha_{\mu\beta}\right) = -\kappa\!\left(t^\sigma_\mu - \frac{1}{2}\,\delta^\sigma_\mu\,t\right),\qquad \sqrt{-g} = 1 \tag{51}$$
16. LA FORMA GENERAL DE LAS ECUACIONES DE CAMPO DE LA GRAVITACIÓN
Las ecuaciones de campo para el espacio libre de materia en el epígrafe 15 deben compararse con la ecuación de campo
$$\nabla^2 \varphi = 0$$
de la teoría de Newton. Debemos buscar la ecuación correspondiente a la ecuación de Poisson

$$\nabla^2 \varphi = 4\pi K \rho,$$

donde ρ denota la densidad de materia.
La teoría de la relatividad especial ha llevado a la conclusión de que la masa inerte no es ni más ni menos que energía, que encuentra su completa expresión matemática en un tensor simétrico de segundo rango, el tensor-energía. Así, en la teoría de la relatividad general debemos introducir un correspondiente tensor-energía de materia Tασ, que, como las componentes de energía _t_ ασ [ecuaciones (49) y (50)] del campo gravitatorio, tendrá carácter mixto, pero pertenecerá a un tensor covariante simétrico.
El sistema de ecuaciones (51) muestra cómo este tensor-energía (correspondiente a la densidad ρ en la ecuación de Poisson) debe introducirse en las ecuaciones de campo de la gravitación. Pues si consideramos un sistema completo (e. g. el sistema solar), la masa total del sistema, y por lo tanto también su acción gravitatoria total, dependerá de la energía total del sistema, y por lo tanto de la energía ponderable junto con la energía gravitatoria. Esto admitirá ser expresado introduciendo en (51), en lugar de las componentes de energía del campo gravitatorio solo, las sumas tµσ \+ Tµσ de las componentes de energía de materia y de campo gravitatorio. Así, en lugar de (51) obtenemos la ecuación tensorial
$$\frac{\partial}{\partial x_\alpha}\!\left(g^{\sigma\beta}\,\Gamma^\alpha_{\mu\beta}\right) = -\kappa\!\left[\left(t^\sigma_\mu + T^\sigma_\mu\right) - \frac{1}{2}\,\delta^\sigma_\mu\,(t + T)\right],\qquad \sqrt{-g} = 1 \tag{52}$$
donde hemos hecho T = Tµµ (escalar de Laue). Éstas son las requeridas ecuaciones generales de campo de gravitación en forma mixta. Trabajando hacia atrás a partir de ellas, tenemos en lugar de (47)
$$\frac{\partial \Gamma^\alpha_{\mu\nu}}{\partial x_\alpha} + \Gamma^\alpha_{\mu\beta}\,\Gamma^\beta_{\nu\alpha} = -\kappa\!\left(T_{\mu\nu} - \frac{1}{2}\,g_{\mu\nu}\,T\right),\qquad \sqrt{-g} = 1 \tag{53}$$
Hay que admitir que esta introducción del tensor-energía de materia no está justificada solamente por el postulado de relatividad. Por esta razón la hemos deducido aquí del requisito de que la energía del campo gravitatorio actuará gravitatoriamente de la misma manera que cualquier otro tipo de energía. Pero la razón más fuerte para la elección de estas ecuaciones reside en su consecuencia: que las ecuaciones de conservación del momento y la energía, correspondientes exactamente a las ecuaciones (49) y (49a), son válidas para las componentes de la energía total. Esto se demostrará en el epígrafe 17.
17. LAS LEYES DE CONSERVACIÓN EN EL CASO GENERAL
La ecuación (52) puede transformarse fácilmente de modo que el segundo término del segundo miembro se anule. Contraemos (52) con respecto a los índices µ y σ, y después de multiplicar la ecuación resultante por 1⁄2 δµσ, la restamos de (52). Esto da
$$\frac{\partial}{\partial x_\alpha}\!\left(g^{\sigma\beta}\,\Gamma^\alpha_{\mu\beta} - \frac{1}{2}\,\delta^\sigma_\mu\,g^{\lambda\beta}\,\Gamma^\alpha_{\lambda\beta}\right) = -\kappa\left(t^\sigma_\mu + T^\sigma_\mu\right) \tag{52a}$$
Sobre esta ecuación realizamos la operación ∂/∂ _x_ σ. Tenemos
El primero y el tercer término de los paréntesis redondos producen contribuciones que se cancelan mutuamente, como puede verse intercambiando, en la contribución del tercer término, los índices de suma α y σ por una parte, y β y λ por otra. El segundo término puede remodelarse por (31), de modo que tenemos
(54)
El segundo término del primer miembro de (52a) da en primer lugar
o
Con la elección de coordenadas que hemos hecho, el término que se deriva del último término entre paréntesis redondos desaparece debido a (29). Los otros dos pueden combinarse, y juntos, por (31), dan
de modo que considerando (54) tenemos la identidad
(55)
De (55) y (52a), se sigue que
$$\frac{\partial\left(t^\sigma_\mu + T^\sigma_\mu\right)}{\partial x_\sigma} = 0 \tag{56}$$
Así pues, resulta de nuestras ecuaciones de campo de gravitación que se satisfacen las leyes de conservación de momento y energía. Esto puede verse más fácilmente a partir de la consideración que lleva a la ecuación (49a); salvo que aquí, en lugar de las componentes de energía _t_ ασ del campo gravitatorio, tenemos que introducir la totalidad de las componentes de energía de materia y campo gravitatorio.
18. LAS LEYES DE MOMENTO Y ENERGÍA PARA MATERIA, COMO CONSECUENCIA DE LAS ECUACIONES DE CAMPO
Multiplicando (53) por _dg_ µν/ _dx_ σ obtenemos, por el método adoptado en el epígrafe 15, a la vista de la anulación de
la ecuación
o, en vista de (56),
(57)
La comparación con (41b) muestra que, con la elección del sistema de coordenadas que hemos hecho, esta ecuación afirma ni más ni menos que la anulación de la divergencia del tensor-energía material. Físicamente, la aparición del segundo término en el primer miembro muestra que las leyes de conservación de momento y energía no se aplican en sentido estricto para la materia solamente, o que se aplican sólo cuando las _g_ µν son constantes, i. e. cuando se anulan las intensidades de campo de gravitación. Este segundo término es una expresión para el momento, y para la energía, transferidos por unidad de volumen y unidad de tiempo desde el campo gravitatorio a la materia. Esto se manifiesta aún más claramente reescribiendo (57) en el sentido de (41) como
(57a)
El miembro derecho expresa el efecto energético del campo gravitatorio sobre la materia.
Así pues, las ecuaciones de campo de gravitación contienen cuatro condiciones que gobiernan el curso de los fenómenos materiales. Dan por completo las ecuaciones de los fenómenos materiales, si las últimas pueden caracterizarse por cuatro ecuaciones diferenciales mutuamente independientes.
#### D. FENÓMENOS MATERIALES
Las herramientas matemáticas desarrolladas en la parte B nos permiten generalizar inmediatamente las leyes físicas de la materia (hidrodinámica, electrodinámica de Maxwell), tal como están formuladas en la teoría de la relatividad especial, de modo que encajen en la teoría de la relatividad general. Cuando se hace así, el principio de relatividad general no nos ofrece una limitación adicional de posibilidades; pero nos familiariza con la influencia del campo gravitatorio sobre todos los procesos, sin que tengamos que introducir ninguna hipótesis nueva.
De aquí resulta que no es necesario introducir hipótesis definidas respecto a la naturaleza física de la materia (en el sentido más restringido). En particular sigue siendo una cuestión abierta si la teoría del campo electromagnético en conjunción con la del campo gravitatorio proporciona o no una base suficiente para la teoría de la materia. El postulado de relatividad general es incapaz en principio de decirnos nada sobre ello. Debe quedar por ver, durante la elaboración de la teoría, si la teoría electromagnética y la doctrina de la gravitación en colaboración pueden realizar lo que la primera es incapaz de hacer por sí sola.
19. ECUACIONES DE EULER PARA UN FLUIDO ADIABÁTICO SIN FRICCIÓN
Sean _p_ y ρ dos escalares, al primero de los cuales llamamos «presión» y al segundo «densidad» de un fluido; y haya una ecuación que los relaciona. Sea el tensor simétrico contravariante
$$T^{\alpha\beta} = -g^{\alpha\beta}\,p + \rho\,\frac{dx_\alpha}{ds}\,\frac{dx_\beta}{ds} \tag{58}$$
el tensor-energía contravariante del fluido. A él pertenece el tensor covariante
$$T_{\mu\nu} = -g_{\mu\nu}\,p + \rho\,g_{\mu\alpha}\,\frac{dx_\alpha}{ds}\,g_{\nu\beta}\,\frac{dx_\beta}{ds} \tag{58a}$$
así como el tensor mixto
$$T^\sigma_\mu = -\delta^\sigma_\mu\,p + \rho\,g_{\mu\alpha}\,\frac{dx_\alpha}{ds}\,\frac{dx_\sigma}{ds} \tag{58b}$$
Insertando el segundo miembro de (58b) en (57a) obtenemos las ecuaciones hidrodinámicas eulerianas de la teoría de la relatividad general. Éstas dan, en teoría, una solución completa al problema de movimiento, puesto que las cuatro ecuaciones (57a), junto con la ecuación dada entre _p_ y ρ, y la ecuación
$$g_{\alpha\beta}\,\frac{dx_\alpha}{ds}\,\frac{dx_\beta}{ds} = 1,$$

son suficientes, estando dadas las _g_ αβ, para definir las seis incógnitas

$$\rho,\quad p,\quad \frac{dx_1}{ds},\quad \frac{dx_2}{ds},\quad \frac{dx_3}{ds},\quad \frac{dx_4}{ds}.$$
Si se desconocen también las gµν, intervienen las ecuaciones (53). Hay once ecuaciones para definir las diez funciones gµν, de modo que dichas funciones aparecen sobredeterminadas. Debemos recordar, sin embargo, que las ecuaciones (57a) ya están contenidas en las ecuaciones (53), de modo que las últimas representan sólo siete ecuaciones independientes. Hay una buena razón para esta falta de determinación, en cuanto que la amplia libertad de elección de coordenadas hace que el problema quede matemáticamente indeterminado en tal grado que tres de las funciones del espacio pueden ser escogidas a voluntad.
20. ECUACIONES DEL CAMPO ELECTROMAGNÉTICO DE MAXWELL PARA EL ESPACIO LIBRE
Sean φν las componentes de un vector covariante: el vector potencial electromagnético. A partir de ellas formamos, de acuerdo con (36) las componentes Fρσ del seis-vector covariante del campo electromagnético, de acuerdo con el sistema de ecuaciones
$$F_{\rho\sigma} = \frac{\partial \phi_\rho}{\partial x_\sigma} - \frac{\partial \phi_\sigma}{\partial x_\rho} \tag{59}$$
Se sigue de (59) que el sistema de ecuaciones
$$\frac{\partial F_{\rho\sigma}}{\partial x_\tau} + \frac{\partial F_{\sigma\tau}}{\partial x_\rho} + \frac{\partial F_{\tau\rho}}{\partial x_\sigma} = 0 \tag{60}$$
es satisfecho, siendo su miembro izquierdo, por (37), un tensor antisimétrico de tercer rango. El sistema (60) contiene así esencialmente cuatro ecuaciones que se escriben como sigue
(60a)
Este sistema corresponde al segundo sistema de ecuaciones de Maxwell. Reconocemos esto inmediatamente haciendo
$$F_{23} = H_x,\quad F_{31} = H_y,\quad F_{12} = H_z,\quad F_{14} = E_x,\quad F_{24} = E_y,\quad F_{34} = E_z \tag{61}$$
Entonces, en lugar de (60a) podemos poner, en la notación usual del análisis vectorial tridimensional
$$\frac{\partial \mathbf{H}}{\partial t} + \operatorname{rot}\mathbf{E} = 0,\qquad \operatorname{div}\mathbf{H} = 0 \tag{60b}$$
Obtenemos el primer sistema de Maxwell generalizando la forma dada por Minkowski. Introducimos el seis-vector contravariante asociado con Fαβ
$$F^{\mu\nu} = g^{\mu\alpha}\,g^{\nu\beta}\,F_{\alpha\beta} \tag{62}$$
y también el vector contravariante Jµ de la densidad de corriente eléctrica. Entonces, teniendo en cuenta (40), las siguientes ecuaciones serán invariantes para cualquier sustitución cuyo determinante sea la unidad (de acuerdo con las coordenadas escogidas):
$$\frac{\partial F^{\mu\nu}}{\partial x_\nu} = J^\mu \tag{63}$$
Sean
(64)
cuyas cantidades son iguales a las cantidades H _x_ ... E _z_ en el caso especial de la teoría de la relatividad restringida; si además
J1 = _j x_, J2 = _j y_, J3 = _j z_, J4 = ρ,
obtenemos en lugar de (63)
(63a)
Las ecuaciones (60), (62) y (63) constituyen así la generalización de las ecuaciones de campo de Maxwell para el espacio libre, con el convenio que hemos establecido con respecto a la elección de coordenadas.
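Que el sistema (60) se satisface idénticamente en virtud de la definición (59) puede comprobarse de forma simbólica. El siguiente esbozo con `sympy` (los nombres `F` y `ciclo` son ilustrativos) verifica que la suma cíclica se anula para un potencial φν arbitrario:

```python
import sympy as sp

x = sp.symbols('x1:5')  # coordenadas x1, x2, x3, x4
# Potencial electromagnético: cuatro funciones arbitrarias de las coordenadas
phi = [sp.Function(f'phi{i + 1}')(*x) for i in range(4)]

def F(rho, sig):
    # Ecuación (59): F_{rho sigma} = d(phi_rho)/dx_sigma - d(phi_sigma)/dx_rho
    return sp.diff(phi[rho], x[sig]) - sp.diff(phi[sig], x[rho])

def ciclo(rho, sig, tau):
    # Primer miembro del sistema (60)
    return (sp.diff(F(rho, sig), x[tau])
            + sp.diff(F(sig, tau), x[rho])
            + sp.diff(F(tau, rho), x[sig]))

# La suma cíclica se anula idénticamente, por la simetría de las derivadas mixtas
print(all(sp.expand(ciclo(r, s, t)) == 0
          for r in range(4) for s in range(4) for t in range(4)))
```

La cancelación depende sólo de la igualdad de las derivadas parciales cruzadas, sin hipótesis alguna sobre las φν.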
_Las componentes-de-energía del campo electromagnético_. Formemos el producto interior
$$\kappa_\sigma = F_{\sigma\mu}\,J^\mu \tag{65}$$
Por (61) sus componentes, escritas a la manera tridimensional, son
(65a)
κσ es un vector covariante cuyas componentes son iguales al negativo del momento, o, respectivamente, de la energía, que se transfiere desde las masas eléctricas al campo electromagnético por unidad de tiempo y volumen. Si las masas eléctricas son libres, es decir, están bajo la sola influencia del campo electromagnético, el vector covariante κσ se anulará.
Para obtener las componentes de energía Tνσ del campo electromagnético, sólo tenemos que dar a la ecuación κσ = 0 la forma de la ecuación (57). A partir de (63) y (65) tenemos en primer lugar
El segundo término del segundo miembro, en virtud de (60), permite la transformación
cuya última expresión puede, por razones de simetría, escribirse también en la forma
Pero para esto podemos hacer
El primero de estos términos se escribe de forma más breve
el segundo, una vez llevada a cabo la diferenciación, y tras alguna simplificación, resulta
Juntando los tres términos obtenemos la relación
(66)
donde

$$T^\nu_\sigma = -F_{\sigma\alpha}\,F^{\nu\alpha} + \frac{1}{4}\,\delta^\nu_\sigma\,F_{\alpha\beta}\,F^{\alpha\beta}.$$
La ecuación (66), si se anula κσ, es, debido a (30), equivalente a (57) o (57a) respectivamente. Por consiguiente, las Tνσ son las componentes de energía del campo electromagnético. Con la ayuda de (61) y (64) es fácil demostrar que estas componentes de energía del campo electromagnético en el caso de la teoría de la relatividad especial dan las bien conocidas expresiones de Maxwell-Poynting.
Ahora hemos deducido las leyes generales que son satisfechas por el campo gravitatorio y la materia, utilizando de manera consistente un sistema de coordenadas para el que √– _g_ = 1. Con ello hemos conseguido una considerable simplificación de fórmulas y cálculos, sin dejar de satisfacer el requisito de covariancia general; pues hemos extraído nuestras ecuaciones de ecuaciones generalmente covariantes particularizando el sistema de coordenadas.
Aún queda la cuestión, no carente de interés formal, de si con una definición correspondientemente generalizada de las componentes de energía del campo gravitatorio y la materia, incluso sin particularizar el sistema de coordenadas, es posible formular leyes de conservación en forma de ecuación (56), y ecuaciones de campo de la gravitación de la misma naturaleza que (52) o (52a), de tal manera que a la izquierda tengamos una divergencia (en el sentido ordinario) y a la derecha la suma de las componentes de energía de materia y gravitación. He encontrado que en ambos casos es realmente así. Pero no creo que la comunicación de mis algo extensas reflexiones sobre este tema merezcan la pena, porque después de todo no nos dan nada que sea materialmente nuevo.
21. LA TEORÍA DE NEWTON COMO PRIMERA APROXIMACIÓN
Como ya ha sido mencionado más de una vez, la teoría de la relatividad especial como un caso especial de la teoría general se caracteriza porque las _g_ µν tienen los valores constantes (4). Por lo que ya se ha dicho, esto significa despreciar por completo los efectos de la gravitación. Llegamos a una aproximación más cercana a la realidad considerando el caso en donde las _g_ µν difieren de los valores de (4) en cantidades que son pequeñas comparadas con 1, y despreciando pequeñas cantidades de segundo orden y superiores. (Primer punto de vista de aproximación).
Hay que suponer, además, que en el territorio espacio-temporal en consideración las _g_ µν en el infinito espacial, con una elección de coordenadas adecuada, tienden hacia los valores (4); i. e. estamos considerando campos gravitatorios que pueden considerarse generados exclusivamente por materia en la región finita.
Podría pensarse que estas aproximaciones deben llevarnos a la teoría de Newton. Pero para este fin aún necesitamos aproximar las ecuaciones fundamentales desde un segundo punto de vista. Prestemos atención al movimiento de un punto material de acuerdo con las ecuaciones (16). En el caso de la teoría de la relatividad especial las componentes
pueden tomar cualquier valor. Esto significa que puede aparecer cualquier velocidad
que sea menor que la velocidad de la luz _in vacuo_. Si nos restringimos al caso que casi exclusivamente se ofrece a nuestra experiencia, de que _v_ sea pequeña comparada con la velocidad de la luz, esto denota que las componentes
deben ser tratadas como cantidades pequeñas, mientras que _dx_ 4/ _ds_ , a segundo orden de cantidades pequeñas, es igual a 1. (Segundo punto de vista de aproximación).
Notemos ahora que desde el primer punto de vista de aproximación las magnitudes Γτµν son todas pequeñas magnitudes de al menos primer orden. Una ojeada a (46) muestra así que en esta ecuación, desde el segundo punto de vista de aproximación, tenemos que considerar solamente términos para los que µ = ν = 4. Restringiéndonos a términos de orden más bajo obtenemos primero en lugar de (46) las ecuaciones
donde hemos hecho _ds = dx_ 4 _= dt_ ; o con restricción a términos que desde el primer punto de vista de aproximación son de primer orden
Si además suponemos que el campo gravitatorio es un campo cuasi estático, limitándonos al caso en donde el movimiento de la materia que genera el campo gravitatorio es lento (en comparación con la velocidad de propagación de la luz), podemos despreciar en el segundo miembro las derivadas con respecto al tiempo en comparación con las derivadas con respecto a las coordenadas espaciales, de modo que tenemos
$$\frac{d^2 x_\tau}{dt^2} = -\frac{1}{2}\,\frac{\partial g_{44}}{\partial x_\tau}\qquad (\tau = 1,\,2,\,3) \tag{67}$$
Ésta es la ecuación de movimiento del punto material según la teoría de Newton, en la que _g_ 44 desempeña el papel del potencial gravitatorio. Lo que es notable en este resultado es que la componente _g_ 44 del tensor fundamental define por sí sola, en primera aproximación, el movimiento del punto material.
Volvamos ahora a las ecuaciones de campo (53). Aquí debemos tener en cuenta que el tensor-energía de «materia» está definido casi exclusivamente por la densidad de materia en el sentido más estrecho, i. e. por el segundo término del segundo miembro de (58) [o, respectivamente (58a) o (58b)]. Si formamos la aproximación en cuestión, todas las componentes se anulan con la única excepción de T44 = ρ = T. En el primer miembro de (53) el segundo término es una cantidad pequeña de segundo orden; el primero da, en la aproximación en cuestión,
Para µ = ν = 4, esto da, con la omisión de los términos derivados con respecto al tiempo
La última de las ecuaciones (53) da así
$$\nabla^2 g_{44} = \kappa\,\rho \tag{68}$$
Las ecuaciones (67) y (68) juntas son equivalentes a la ley de gravitación de Newton.
Por (67) y (68) la expresión para el potencial gravitatorio se convierte en
$$\varphi = -\frac{\kappa}{8\pi}\int \frac{\rho\,d\tau}{r} \tag{68a}$$
mientras que la teoría de Newton, con la unidad de tiempo que hemos escogido, da

$$\varphi = -\frac{K}{c^2}\int \frac{\rho\,d\tau}{r},$$

en donde _K_ denota la constante 6,7 · 10⁻⁸, normalmente llamada constante de gravitación. Por comparación obtenemos
$$\kappa = \frac{8\pi K}{c^2} = 1{,}87 \cdot 10^{-27} \tag{69}$$
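Como comprobación aritmética de (69), con los valores supuestos _K_ = 6,7 · 10⁻⁸ (cgs, el valor citado en el texto) y _c_ = 3 · 10¹⁰ cm/s:

```python
import math

K = 6.7e-8   # constante de gravitación de Newton en cgs (valor citado en el texto)
c = 3.0e10   # velocidad de la luz en cm/s (valor aproximado, supuesto aquí)

# Ecuación (69): kappa = 8 pi K / c^2
kappa = 8 * math.pi * K / c**2
print(kappa)  # ≈ 1,87e-27
```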
22. COMPORTAMIENTO DE REGLAS Y RELOJES EN EL CAMPO GRAVITATORIO ESTÁTICO. CURVATURA DE RAYOS LUMINOSOS. MOVIMIENTO DEL PERIHELIO DE UNA ÓRBITA PLANETARIA
Para llegar a la teoría de Newton como una primera aproximación tuvimos que calcular solamente una componente, _g_ 44, de las diez _g_ µν del campo gravitatorio, puesto que sólo esta componente entra en la primera aproximación, (67), de la ecuación para el movimiento del punto material en el campo gravitatorio. A partir de esto, sin embargo, es ya evidente que otras componentes de las _g_ µν deben diferir de los valores dados en (4) en pequeñas cantidades de primer orden. Esto es exigido por la condición _g_ = –1.
Para una masa puntual productora de campo en el origen de coordenadas, obtenemos, en primera aproximación, la solución con simetría radial
$$g_{\rho\sigma} = -\delta_{\rho\sigma} - \alpha\,\frac{x_\rho x_\sigma}{r^3},\qquad g_{\rho 4} = g_{4\rho} = 0,\qquad g_{44} = 1 - \frac{\alpha}{r}\qquad (\rho,\,\sigma = 1,\,2,\,3) \tag{70}$$
donde δρσ es 1 ó 0, respectivamente, según sea ρ = σ ó ρ ≠ σ, y _r_ es la cantidad $+\sqrt{x_1^2 + x_2^2 + x_3^2}$, mientras que, debido a (68a),

$$\alpha = \frac{\kappa M}{4\pi} \tag{70a}$$
si M denota la masa productora de campo. Es fácil verificar que las ecuaciones de campo (fuera de la masa) resultan satisfechas en primer orden de pequeñas cantidades.
Examinemos ahora la influencia ejercida por el campo de la masa M sobre las propiedades métricas del espacio. La relación
$$ds^2 = g_{\mu\nu}\,dx_\mu\,dx_\nu$$
es válida siempre entre las longitudes y tiempos _ds_ medidos «localmente» (epígrafe 4), por una parte, y las diferencias de coordenadas _dx_ ν por otra.
Para una unidad de medida de longitud tendida «paralela» al eje de las _x_ , por ejemplo, tendríamos que hacer _ds_ 2 = –1; _dx_ 2 = _dx_ 3 = _dx_ 4 = 0. Por consiguiente, $-1 = g_{11}\,dx_1^2$. Si además la unidad de medida reposa sobre el eje de las _x_ , la primera de las ecuaciones (70) da

$$g_{11} = -\left(1 + \frac{\alpha}{r}\right).$$
De estas dos relaciones se sigue que, hasta primer orden de pequeñas cantidades,

$$dx_1 = 1 - \frac{\alpha}{2r} \tag{71}$$
La regla de medir unidad aparece así un poco acortada con relación al sistema de coordenadas por la presencia del campo gravitatorio, si la regla está tendida a lo largo de un radio.
De manera análoga obtenemos la longitud de coordenadas en dirección tangencial si, por ejemplo, hacemos
_ds_ 2 = –1; _dx_ 1 = _dx_ 3 = _dx_ 4 = 0; _x_ 1 = _r_ ; _x_ 2 = _x_ 3 = 0.
El resultado es

$$-1 = g_{22}\,dx_2^2 = -dx_2^2,$$

$$dx_2 = 1. \tag{71a}$$
Con la posición tangencial, por lo tanto, el campo gravitatorio del punto de masa no tiene influencia sobre la longitud de la regla.
Así pues, la geometría euclidiana no es válida ni siquiera en primera aproximación en el campo gravitatorio, si queremos tomar una y la misma regla independientemente de su lugar y orientación como una realización del mismo intervalo; aunque, por supuesto, una ojeada a (70a) y (69) muestra que las desviaciones que se pueden esperar son demasiado pequeñas para que sea posible advertirlas en medidas de la superficie de la Tierra.
Examinemos también la marcha de un reloj unidad, que está preparado para estar en reposo en un campo gravitatorio estático. Aquí tenemos para un período de reloj _ds_ = 1; _dx_ 1 = _dx_ 2 = _dx_ 3 = 0. Por consiguiente

$$1 = g_{44}\,dx_4^2,$$

o

$$dx_4 = \frac{1}{\sqrt{g_{44}}} = \frac{1}{\sqrt{1 - (\alpha/r)}} = 1 + \frac{\alpha}{2r} \tag{72}$$

(hasta primer orden de pequeñas cantidades).
Así pues, el reloj marcha más despacio si se encuentra en la vecindad de masas ponderables. De esto se sigue que las líneas espectrales de la luz que nos llega desde la superficie de grandes estrellas debe aparecer desplazada hacia el extremo rojo del espectro.
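El corrimiento relativo de frecuencia que se sigue de (72) es α/2 _r_ = _KM_ /( _c_ ² _r_ ). Para la superficie del Sol, con valores modernos supuestos de la constante de gravitación y de la masa y el radio solares, el siguiente esbozo da un corrimiento del orden de 2 · 10⁻⁶:

```python
G = 6.674e-8      # constante de gravitación en cgs (valor moderno, supuesto)
M_sol = 1.989e33  # masa del Sol en g (valor supuesto)
R_sol = 6.957e10  # radio del Sol en cm (valor supuesto)
c = 2.998e10      # velocidad de la luz en cm/s

# Corrimiento relativo hacia el rojo: alpha / 2r = G M / (c^2 r)
z = G * M_sol / (c**2 * R_sol)
print(z)  # ≈ 2,1e-6
```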
Examinemos ahora el curso de los rayos luminosos en el campo gravitatorio estático. Por la teoría de la relatividad especial la velocidad de la luz viene dada por la ecuación
$$-dx_1^2 - dx_2^2 - dx_3^2 + dx_4^2 = 0$$
y, por lo tanto, por la teoría de la relatividad general viene dada por la ecuación
$$ds^2 = g_{\mu\nu}\,dx_\mu\,dx_\nu = 0. \tag{73}$$
Si la dirección, i. e. la razón _dx_ 1 : _dx_ 2 : _dx_ 3, es dada, la ecuación (73) da las cantidades

$$\frac{dx_1}{dx_4},\quad \frac{dx_2}{dx_4},\quad \frac{dx_3}{dx_4}$$

y en consecuencia la velocidad

$$\gamma = \sqrt{\left(\frac{dx_1}{dx_4}\right)^2 + \left(\frac{dx_2}{dx_4}\right)^2 + \left(\frac{dx_3}{dx_4}\right)^2},$$

definida en el sentido de la geometría euclidiana. Reconocemos fácilmente que el curso de los rayos luminosos debe estar curvado con respecto al sistema de coordenadas, si las _g_ µν no son constantes. Si _n_ es una dirección perpendicular a la propagación de la luz, el principio de Huyghens muestra que el rayo luminoso, considerado en el plano (γ, _n_ ), tiene la curvatura −∂γ/∂ _n_.
Examinemos la curvatura sufrida por un rayo luminoso que pasa frente a una masa M a una distancia ∆. Si escogemos el sistema de coordenadas de acuerdo con el diagrama que se acompaña, la curvatura total del rayo (calculada positivamente si es cóncava hacia el origen) viene dada con suficiente aproximación por

$$B = \int_{-\infty}^{+\infty} \frac{\partial \gamma}{\partial x_1}\, dx_2,$$
mientras que (73) y (70) dan
Haciendo el cálculo, esto da
$$B = \frac{2\alpha}{\Delta} = \frac{\kappa M}{2\pi \Delta} \tag{74}$$
Según esto, un rayo luminoso que pasa cerca del Sol sufre una desviación de 1,7''; y un rayo que pasa cerca del planeta Júpiter sufre una desviación de aproximadamente 0,02''.
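La cifra de 1,7'' para el Sol puede reproducirse numéricamente a partir de (74), puesto que, por (69) y (70a), κM/(2π∆) = 4 _KM_ /( _c_ ²∆). Un esbozo con valores modernos supuestos para la masa y el radio solares:

```python
import math

G = 6.674e-8      # constante de gravitación en cgs (valor moderno, supuesto)
M_sol = 1.989e33  # masa del Sol en g (valor supuesto)
R_sol = 6.957e10  # rayo rasante: Delta = radio solar, en cm (valor supuesto)
c = 2.998e10      # velocidad de la luz en cm/s

# Ecuación (74): B = kappa M / (2 pi Delta) = 4 G M / (c^2 Delta)
B_rad = 4 * G * M_sol / (c**2 * R_sol)
B_seg = B_rad * 180 / math.pi * 3600  # de radianes a segundos de arco
print(round(B_seg, 2))  # ≈ 1,75
```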
If we calculate the gravitational field to a higher degree of approximation, and likewise with corresponding accuracy the orbital motion of a material point of relatively infinitely small mass, we find a deviation of the following kind from the Kepler-Newton laws of planetary motion. The orbital ellipse of a planet undergoes a slow rotation, in the direction of motion, of amount
ε = 24π³ _a_ ² / [T² _c_ ² (1 − _e_ ²)]
(75)
per revolution. In this formula _a_ denotes the semi-major axis, _c_ the velocity of light in the usual measurement, _e_ the eccentricity, T the time of revolution in seconds.
Calculation gives for the planet Mercury a rotation of the orbit of 43'' per century, corresponding exactly to astronomical observation (Leverrier); for the astronomers have discovered in the motion of the perihelion of this planet, after allowing for disturbances by the other planets, an unexplained remainder of this magnitude.
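Formula (75) can be checked numerically for Mercury (a sketch; the orbital elements below are assumed modern values, not given in the text):

```python
import math

# Perihelion advance per revolution, formula (75):
# eps = 24*pi^3*a^2 / (T^2 * c^2 * (1 - e^2)), in radians.
# Orbital data for Mercury (assumed values):
a = 5.791e10        # semi-major axis, m
T = 87.969 * 86400  # orbital period, s
e = 0.2056          # eccentricity
c = 2.998e8         # speed of light, m/s

eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))
revs_per_century = 36525 * 86400 / T
arcsec = eps * revs_per_century * (180 / math.pi) * 3600
print(f"perihelion advance: {arcsec:.1f} arcsec per century")
```

Accumulated over the roughly 415 revolutions Mercury completes in a century, the advance comes out very close to the 43'' quoted in the text.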
### 5
### HAMILTON'S PRINCIPLE AND THE GENERAL THEORY OF RELATIVITY*
The general theory of relativity has recently been given a particularly clear form by H. A. Lorentz and D. Hilbert, who have deduced its equations from a single variational principle. The same thing will be done in the present paper. But my purpose here is to present the fundamental connexions in as perspicuous a manner as possible, and in as general terms as is permissible from the standpoint of the general theory of relativity. In especial we shall make as few specializing assumptions as possible, in marked contrast to Hilbert's treatment of the subject. On the other hand, in antithesis to my own most recent treatment of the subject, there is to be complete liberty in the choice of the system of co-ordinates.
#### 1. THE PRINCIPLE OF VARIATION AND THE FIELD EQUATIONS OF GRAVITATION AND MATTER
Let the gravitational field be described, as usual, by the tensor of the _g_ µν (or the _g_ µν); and matter, including the electromagnetic field, by any number of space-time functions _q_ (ρ). How these functions may be characterized in the theory of invariants does not concern us. Further, let there be given a function of the
The principle of variation
(1)
then gives us as many differential equations as there are functions _g_ µν and _q_ (ρ) to be determined, if the _g_ µν and _q_ (ρ) are varied independently of one another, and in such a way that at the limits of integration the δ _q_ (ρ), δ _g_ µν and vanish.
We will now assume that is linear in the _g_ στ, and that the coefficients of the _g_ µνστ depend only on the _g_ µν. We may then replace the principle of variation (1) by one more convenient for us. For by appropriate partial integration we obtain
(2)
where F denotes an integral over the boundary of the domain in question, and depends only on the _g_ µν, _g_ σµν, _q_ (ρ), _q_ (ρ)α, and no longer on the _g_ µνστ. From (2) we obtain, for such variations as are of interest to us,
(3)
so that we may replace our principle of variation (1) by the more convenient form
(1a)
By carrying out the variations of the _g_ µν and the _q_ (ρ) we obtain, as field equations of gravitation and matter, the equations
(4)
(5)
#### 2. SEPARATE EXISTENCE OF THE GRAVITATIONAL FIELD
If we make no restrictive assumption as to the manner in which depends on the _g_ µν, _g_ σµν, _q_ (ρ), _q_ (ρ)σ, the energy-components cannot be divided into two parts, one belonging to the gravitational field, the other to matter. To ensure this feature of the theory, we make the following assumption
(6)
where is to depend only on the _g_ µν, _g_ σµν, and only on the _g_ µν, _q_ (ρ), _q_ (ρ)α. Equations (4) and (5) then assume the form
(7)
(8)
Here stands in the same relation to as to .
It is to be carefully noted that equations (8) or (5) would have to give way to others, if we were to assume or to be also dependent on derivatives of the _q_ (ρ) of order higher than the first. Likewise it might be imagined that the _q_ (ρ) would have to be taken, not as mutually independent, but as connected by conditional equations. All this is of no importance for the following developments, as these are based solely on the equations (7), which have been found by varying our integral with respect to the _g_ µν.
#### 3. PROPERTIES OF THE FIELD EQUATIONS OF GRAVITATION CONDITIONED BY THE THEORY OF INVARIANTS
We now introduce the assumption that
_ds_ 2 = _g_ _µν_ _dx_ _µ_ _dx_ _ν_
(9)
is an invariant. This determines the transformational character of the _g_ µν. As to the transformational character of the _q_ (ρ), which describe matter, we make no supposition. On the other hand, let the functions , as well as and , be invariants in relation to any substitutions of space-time co-ordinates. From these assumptions follows the general covariance of equations (7) and (8), deduced from the Riemann tensor of curvature; for there is no other invariant with the properties required for G. Thereby is also perfectly determined, and consequently so is the left-hand side of field equation (7). We now subject the co-ordinates to the infinitesimal transformation
_x_ ′ν = _x_ ν \+ ∆ _x_ ν,
(10)
where the ∆ _x_ ν are arbitrary, infinitely small functions of the co-ordinates, and _x_ ′ν are the co-ordinates, in the new system, of the world-point whose co-ordinates in the original system are _x_ ν. As for the co-ordinates, so for any other magnitude ψ a law of transformation holds of the type
ψ′ = ψ \+ ∆ψ,
where ∆ψ must always be expressible by means of the ∆ _x_ ν. From the covariant property of the _g_ µν we easily deduce, for the _g_ µν and the _g_ σµν, the laws of transformation
(11)
(12)
Since depends only on the _g_ µν and the _g_ σµν, it is possible, with the help of (11) and (12), to calculate . We thus obtain the equation
(13)
where for brevity we have set
(14)
From these two equations we draw two inferences which are important for what follows. We know that is an invariant with respect to any substitution, but we do not know this of . It is easy to demonstrate, however, that the latter quantity is an invariant with respect to any _linear_ substitution of the co-ordinates. Hence it follows that the right-hand side of (13) must always vanish if all the vanish. Consequently must satisfy the identity
(15)
If, further, we choose the ∆ _x_ ν so that they differ from zero only in the interior of a given domain, but vanish in infinitesimal proximity to the boundary, then, under the transformation in question, the value of the boundary integral occurring in equation (2) does not change. Therefore ∆F = 0, and, in consequence,
But the left-hand side of the equation must vanish, since and are invariants. Consequently the right-hand side vanishes as well. Thus, taking (14) and (15) into account, we obtain, in the first place, the equation
(16)
Transforming this equation by two partial integrations, and having regard to the liberty of choice of the ∆ _x_ σ, we obtain the identity
(17)
Now we have to draw conclusions from the two identities (16) and (17), which result from the invariance of , and therefore from the postulate of general relativity.
We first transform the field equations (7) of gravitation by mixed multiplication by _g_ µσ. We then obtain (interchanging the indices σ and ν), as equivalents of the field equations (7), the equations
(18)
where we have set
(19)
(20)
The last equation for _t_ νµ is justified by (14) and (15). On differentiating (18) with respect to _x_ ν and summing for ν, there follows, in view of (17),
(21)
Equation (21) expresses the conservation of momentum and energy. We call the energy-components of matter, and _t_ σν the energy-components of the gravitational field.
Having regard to (20), there follows from the field equations (7) of gravitation, by multiplication by _g_ σµν and summation with respect to µ and ν,
or, in view of (19) and (21),
(22)
where denotes the quantities _g_ νσ . These are four equations which the energy-components of matter have to satisfy.
It is to be emphasized that the (generally covariant) conservation laws (21) and (22) are deduced from the field equations (7) of gravitation, in combination with the postulate of general covariance (relativity) _alone_ , without using the field equations (8) for material phenomena.
### 6
### COSMOLOGICAL CONSIDERATIONS ON THE GENERAL THEORY OF RELATIVITY*
It is well known that Poisson's equation
∇²φ = 4πκ _ρ_
(1)
in combination with the equations of motion of a material point is not as yet a perfect substitute for Newton's theory of action at a distance. There is still to be taken into account the condition that at spatial infinity the potential φ tends toward a fixed limiting value. There is an analogous state of things in the theory of gravitation in general relativity. Here, too, we must supplement the differential equations by limiting conditions at spatial infinity, if we really have to regard the universe as being of infinite spatial extent.
In my treatment of the planetary problem I chose these limiting conditions in the form of the following assumption: it is possible to select a system of reference so that at spatial infinity all the gravitational potentials gµν become constant. But it is by no means evident a priori that we may lay down the same limiting conditions when we wish to take in larger portions of the physical universe. In the following pages the reflexions will be given which, up to the present, I have made on this fundamentally important question.
#### 1. THE NEWTONIAN THEORY
It is well known that Newton's limiting condition of the constant limit for φ at spatial infinity leads to the view that the density of matter becomes zero at infinity. For let us imagine that there may be a place in universal space round about which the gravitational field of matter, viewed on a large scale, possesses spherical symmetry. It then follows from Poisson's equation that, in order that φ may tend to a limit at infinity, the mean density ρ must decrease toward zero more rapidly than 1/ _r_ ² as the distance _r_ from the centre increases. In this sense, therefore, the universe according to Newton is finite, although it may possess an infinitely great total mass.
From this it follows in the first place that the radiation emitted by the heavenly bodies will, in part, leave the Newtonian system of the universe, passing radially outwards, to become ineffective and lost in the infinite. May the same thing not happen with the heavenly bodies themselves? It is hardly possible to give a negative answer to this question. For it follows from the assumption of a finite limit for φ at spatial infinity that a heavenly body with finite kinetic energy is able to reach spatial infinity by overcoming the Newtonian forces of attraction. By statistical mechanics this case must occur from time to time, as long as the total energy of the stellar system, transferred to one single star, is great enough to send that star on its journey to infinity, whence it can never return.
We might try to avoid this peculiar difficulty by assuming a very high value for the limiting potential at infinity. That would be a possible way, if the value of the gravitational potential were not itself necessarily conditioned by the heavenly bodies. The truth is that we are compelled to regard the occurrence of any great differences of potential of the gravitational field as contradicting the facts. These differences must really be of so low an order of magnitude that the stellar velocities generated by them do not exceed the velocities actually observed.
If we apply Boltzmann's law of distribution for gas molecules to the stars, by comparing the stellar system with a gas in thermal equilibrium, we find that the Newtonian stellar system cannot exist at all. For a finite difference of potential between the centre and spatial infinity corresponds to a finite ratio of densities. A vanishing of the density at infinity thus implies a vanishing of the density at the centre.
It seems hardly possible to surmount these difficulties on the basis of the Newtonian theory. We may ask ourselves whether they can be removed by a modification of the Newtonian theory. First of all we will indicate a method which does not in itself claim to be taken seriously; it merely serves as a foil for what is to follow. In place of Poisson's equation we write
∇² _φ_ − λ _φ_ = 4π _κρ_ ,
(2)
where λ denotes a universal constant. If ρ0 be the uniform density of a distribution of mass, then
(3)
is a solution of equation (2). This solution would correspond to the case in which the matter of the fixed stars was distributed uniformly through space, if the density ρ0 is equal to the actual mean density of the matter in the universe. The solution then corresponds to an infinite extension of the central space, filled uniformly with matter. If, without making any change in the mean density, we imagine matter to be non-uniformly distributed locally, there will be, over and above the φ with the constant value of equation (3), an additional φ, which in the neighbourhood of denser masses will so much the more resemble the Newtonian field as λφ is smaller in comparison with 4πκρ.
A universe so constituted would have, with respect to its gravitational field, no centre. A decrease of density at spatial infinity would not have to be assumed; rather, both the mean potential and the mean density would remain constant out to infinity. The conflict with statistical mechanics which we found in the case of the Newtonian theory is not repeated here. With a definite but extremely small density, matter is in equilibrium, without any internal material forces (pressures) being required to maintain equilibrium.
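That the constant potential of (3) really solves (2) can be seen by direct substitution (a sketch; it assumes (3) has the standard form φ = −4πκρ0/λ): the Laplacian of a constant vanishes, so

```latex
\nabla^{2}\varphi \;-\; \lambda\varphi
  \;=\; 0 \;-\; \lambda\!\left(-\frac{4\pi\kappa\rho_{0}}{\lambda}\right)
  \;=\; 4\pi\kappa\rho_{0}.
```

This is exactly the mechanism that lets a uniform density sit in equilibrium: the new λφ term, not the Laplacian, balances the source.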
#### 2. THE BOUNDARY CONDITIONS ACCORDING TO THE GENERAL THEORY OF RELATIVITY
In the present section I shall conduct the reader over the road that I have myself travelled, a rather rough and winding road, because otherwise I cannot hope that he will take much interest in the result at the end of the journey. The conclusion I shall arrive at is that the field equations of gravitation which I have championed hitherto still need a slight modification, so that on the basis of the general theory of relativity those fundamental difficulties may be avoided which have been set forth in § 1 as confronting the Newtonian theory. This modification corresponds perfectly to the transition from Poisson's equation (1) to equation (2) of § 1. We finally infer that boundary conditions at spatial infinity fall away altogether, because the universal continuum, in respect of its spatial dimensions, is to be viewed as a self-contained continuum of finite spatial (three-dimensional) volume.
The opinion which I entertained until recently, as to the limiting conditions to be laid down at spatial infinity, took its stand on the following considerations. In a consistent theory of relativity there can be no inertia _relatively to "space"_ , but only an inertia of _masses relatively to one another_. If, therefore, I have a mass at a sufficient distance from all other masses in the universe, its inertia must fall to zero. We will try to formulate this condition mathematically.
According to the general theory of relativity the negative momentum is given by the first three components, the energy by the last component, of the covariant tensor multiplied by √− _g_ ,
(4)
where, as always, we set
_ds_ 2 = – _g_ _µν_ _dx_ _µ_ _dx_ _ν_.
(5)
In the particularly perspicuous case of the possibility of choosing the system of co-ordinates so that the gravitational field at every point is spatially isotropic, we have, more simply,
_ds_ ² = − A ( _dx_ 1² \+ _dx_ 2² \+ _dx_ 3²) + B _dx_ 4².
If, moreover, at the same time
we obtain from (4), to a first approximation for small velocities,
for the components of momentum, and for the energy (in the static case)
_m_ √B.
From the expressions for the momentum, it follows that plays the part of the rest mass. As _m_ is a constant peculiar to the point of mass, independently of its position, this expression, if we retain the condition √− _g_ = 1 at spatial infinity, can vanish only when A diminishes to zero, while B increases to infinity. It seems, therefore, that such a degeneration of the coefficients _g_ µν is required by the postulate of relativity of all inertia. This requirement implies that the potential energy _m_ √B becomes infinitely great at infinity. Thus a point of mass can never leave the system; and a more detailed investigation shows that the same thing applies to light-rays. A system of the universe with such behaviour of the gravitational potentials at infinity would not therefore run the risk of wasting away which was mooted above in connexion with the Newtonian theory.
I wish to point out that the simplifying assumptions as to the gravitational potentials on which this reasoning is based have been introduced merely for the sake of lucidity. It is possible to find general formulations for the behaviour of the _g_ µν at infinity which express the essentials of the question without further restrictive assumptions.
At this stage, with the kind assistance of the mathematician J. Grommer, I investigated centrally symmetrical, static gravitational fields, degenerating at infinity in the way mentioned. The gravitational potentials _g_ µν were applied, and from them the energy-tensor Tµν of matter was calculated on the basis of the field equations of gravitation. But here it proved that for the system of the fixed stars no boundary conditions of this kind can come into question at all, as was also recently remarked by the astronomer de Sitter.
The contravariant energy-tensor Tµν of ponderable matter is given by
where ρ is the density of matter in natural measure. With an appropriate choice of the system of co-ordinates the stellar velocities are very small in comparison with that of light. We may, therefore, substitute √ _g_ 44 _dx_ 4 for _ds_. This shows us that all components of Tµν must be very small in comparison with the last component T44. But it was quite impossible to reconcile this condition with the chosen boundary conditions. In retrospect this result does not appear astonishing. The fact of the small velocities of the stars allows the conclusion that wherever there are fixed stars, the gravitational potential (in our case √B) can never be much greater than here on Earth. This follows from statistical reasoning, exactly as in the case of the Newtonian theory. At any rate, our calculations have convinced me that such conditions of degeneration for the _g_ µν at spatial infinity may not be postulated.
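The displayed expression for this tensor is missing in this copy; in the standard printed text it reads (a reconstruction, with ρ and _ds_ as defined here):

```latex
T^{\mu\nu} \;=\; \rho\,\frac{dx_{\mu}}{ds}\,\frac{dx_{\nu}}{ds}.
```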
After the failure of this attempt, two possibilities next present themselves.
( _a_ ) We may require, as in the problem of the planets, that, with a suitable choice of the system of reference, the _g_ µν at spatial infinity approximate to the values
-1 | 0 | 0 | 0
---|---|---|---
0 | -1 | 0 | 0
0 | 0 | -1 | 0
0 | 0 | 0 | 1
( _b_ ) We may refrain entirely from laying down boundary conditions for spatial infinity claiming general validity; but at the spatial limit of the domain under consideration we have to give the _g_ µν separately in each individual case, as hitherto we were accustomed to give the initial conditions for time separately.
The possibility ( _b_ ) holds out no hope of solving the problem, but amounts to giving it up. This is an incontestable position, which is taken up at the present time by de Sitter. But I must confess that such a complete resignation in this fundamental question is for me a difficult thing. I shall not make up my mind to it until every effort to make headway toward a satisfactory view has proved to be vain.
The possibility ( _a_ ) is unsatisfactory in more than one respect. In the first place those boundary conditions presuppose a definite choice of the system of reference, which is contrary to the spirit of the relativity principle. Secondly, if we adopt this view, we fail to comply with the requirement of the relativity of inertia. For the inertia of a material point of mass _m_ (in natural measure) depends upon the _g_ µν; but these differ but little from their postulated values, as given above, for spatial infinity. Thus inertia would indeed be _influenced_ , but would not be _conditioned_ , by matter (present in finite space). If only one single point of mass were present, according to this view, it would possess inertia, and in fact an inertia almost as great as when it is surrounded by the other masses of the actual universe. Finally, against this view must be raised the statistical objections which were mentioned in respect of the Newtonian theory.
From what has now been said it will be seen that I have not succeeded in formulating boundary conditions for spatial infinity. Nevertheless, there is still a possible way out, without resigning as suggested under ( _b_ ). For if it were possible to regard the universe as a continuum which is _finite (closed) with respect to its spatial dimensions_ , we should have no need of any such boundary conditions at all. We shall proceed to show that both the general postulate of relativity and the fact of the small stellar velocities are compatible with the hypothesis of a spatially finite universe; though certainly, in order to carry through this idea, we need a generalizing modification of the field equations of gravitation.
#### 3. THE SPATIALLY FINITE UNIVERSE WITH A UNIFORM DISTRIBUTION OF MATTER
According to the general theory of relativity the metrical character (curvature) of the four-dimensional space-time continuum is defined at every point by the matter at that point and the state of that matter. Therefore, on account of the lack of uniformity in the distribution of matter, the metrical structure of this continuum must necessarily be extremely complicated. But if we are concerned with the structure only on a large scale, we may represent matter to ourselves as being uniformly distributed over enormous spaces, so that its density of distribution is a variable function which varies extremely slowly. Thus our procedure will somewhat resemble that of the geodesists who, by means of an ellipsoid, approximate to the shape of the earth's surface, which on a small scale is extremely complicated.
The most important fact that we draw from experience as to the distribution of matter is that the relative velocities of the stars are very small as compared with the velocity of light. So I think that for the present we may base our reasoning upon the following approximative assumption. There is a system of reference relatively to which matter may be looked upon as being permanently at rest. With respect to this system, therefore, the contravariant energy-tensor Tµν of matter is, by reason of (5), of the simple form
(6)
The scalar ρ of the (mean) density of distribution may a priori be a function of the space co-ordinates. But if we assume the universe to be spatially finite, we are prompted to the hypothesis that ρ is to be independent of locality. On this hypothesis we base the following considerations.
As concerns the gravitational field, it follows from the equation of motion of the material point
that a material point in a static gravitational field can remain at rest only when _g_ 44 is independent of locality. Since, further, we presuppose independence of the time co-ordinate _x_ 4 for all magnitudes, we may demand for the required solution that, for all _x_ ν,
_g_ 44 = 1.
(7)
Further, as always with static problems, we shall have to set
_g_ 14 = _g_ 24 = _g_ 34 = 0.
(8)
It now remains to determine those components of the gravitational potential which define the purely spatial-geometrical relations of our continuum ( _g_ 11, _g_ 12, ... _g_ 33). From our assumption as to the uniformity of distribution of the masses generating the field, it follows that the curvature of the required space must be constant. With this distribution of mass, therefore, the required finite continuum of the _x_ 1, _x_ 2, _x_ 3, with constant _x_ 4, will be a spherical space.
We arrive at such a space, for example, in the following way. We start from a Euclidean space of four dimensions, ξ1, ξ2, ξ3, ξ4, with a linear element _d_ σ; let, therefore,
_d_ σ² = _d_ ξ1² \+ _d_ ξ2² \+ _d_ ξ3² \+ _d_ ξ4².
(9)
In this space we consider the hyper-surface
ξ1² \+ ξ2² \+ ξ3² \+ ξ4² = R²,
(10)
where R denotes a constant. The points of this hyper-surface form a three-dimensional continuum, a spherical space of radius of curvature R.
The four-dimensional Euclidean space with which we started serves only for a convenient definition of our hyper-surface. Only those points of the hyper-surface are of interest to us which have metrical properties in agreement with those of physical space with a uniform distribution of matter. For the description of this three-dimensional continuum we may employ the co-ordinates ξ1, ξ2, ξ3 (the projection upon the hyper-plane ξ4 = 0), since, by reason of (10), ξ4 can be expressed in terms of ξ1, ξ2, ξ3. Eliminating ξ4 from (9), we obtain for the linear element of the spherical space the expression
(11)
where δµν = 1 if µ = ν, δµν = 0 if µ ≠ ν, and ρ² = ξ1² \+ ξ2² \+ ξ3². The co-ordinates chosen are convenient when it is a question of examining the environment of one of the two points ξ1 = ξ2 = ξ3 = 0.
Now the linear element of the required four-dimensional space-time universe is also given us. For the potential _g_ µν, both indices of which differ from 4, we have to set
(12)
which equation, in combination with (7) and (8), perfectly defines the behaviour of measuring-rods, clocks, and light-rays.
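The displayed formulas (11) and (12), lost in this copy, can be reconstructed from the standard printed text (a reconstruction; ρ² = ξ1² + ξ2² + ξ3² as above, and signs follow the usual conventions):

```latex
% (11): line element of the spherical space, after eliminating \xi_4
d\sigma^{2} = \left(\delta_{\mu\nu}
    + \frac{\xi_{\mu}\xi_{\nu}}{R^{2}-\rho^{2}}\right) d\xi_{\mu}\,d\xi_{\nu}.

% (12): spatial components of the gravitational potential
g_{\mu\nu} = -\left(\delta_{\mu\nu}
    + \frac{x_{\mu} x_{\nu}}{R^{2}-\rho^{2}}\right).
```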
#### 4. ON AN ADDITIONAL TERM FOR THE FIELD EQUATIONS OF GRAVITATION
My proposed field equations of gravitation for any chosen system of co-ordinates run as follows:
(13)
The system of equations (13) is by no means satisfied when we insert for the _g_ µν the values given in (7), (8), and (12), and for the (contravariant) energy-tensor of matter the values indicated in (6). It will be shown in the next section how this calculation may conveniently be made. So that, if it were certain that the field equations (13) which I have hitherto employed were the only ones compatible with the postulate of general relativity, we should probably have to conclude that the theory of relativity does not admit the hypothesis of a spatially finite universe.
However, the system of equations (13) allows a readily suggested extension which is compatible with the relativity postulate, and is perfectly analogous to the extension of Poisson's equation given by equation (2). For on the left-hand side of field equation (13) we may add the fundamental tensor _g_ µν, multiplied by a universal constant, −λ, at present unknown, without destroying the general covariance. In place of field equation (13) we write
(13a)
This field equation, with λ sufficiently small, is in any case also compatible with the facts of experience derived from the solar system. It also satisfies laws of conservation of momentum and energy, because we arrive at (13a), in place of (13), by introducing into Hamilton's principle, instead of the scalar of Riemann's tensor, this scalar increased by a universal constant; and Hamilton's principle, of course, guarantees the validity of conservation laws. It will be shown in § 5 that field equation (13a) is compatible with our conjectures on field and matter.
#### 5. CALCULATION AND RESULT
Since all points of our continuum are on an equal footing, it is sufficient to carry through the calculation for one point, e.g. for one of the two points with the co-ordinates
_x_ 1 = _x_ 2 = _x_ 3 = _x_ 4 = 0.
Then for the _g_ µν in (13a) we have to insert the values
-1 | 0 | 0 | 0
---|---|---|---
0 | -1 | 0 | 0
0 | 0 | -1 | 0
0 | 0 | 0 | 1
wherever they appear differentiated only once or not at all. We thus obtain in the first place
From this we readily discover, taking (7), (8), and (13) into account, that all of equations (13a) are satisfied if the two relations
or
λ = κρ/2 = 1/R²
(14)
are fulfilled. Thus the newly introduced universal constant λ defines both the mean density of distribution ρ which can remain in equilibrium, and also the radius R and the volume 2π²R³ of spherical space. The total mass M of the universe, according to our view, is finite, and is in fact
M = ρ · 2π²R³ = 4π² R/κ = π² √(32/κ³ρ).
(15)
Thus the theoretical view of the actual universe, if it is in correspondence with our reasoning, is the following. The curvature of space is variable in time and place, according to the distribution of matter, but we may roughly approximate it by means of a spherical space. At any rate, this view is logically consistent, and from the standpoint of the general theory of relativity it lies nearest at hand; whether, from the standpoint of present astronomical knowledge, it is tenable, will not here be discussed. In order to arrive at this consistent view, we admittedly had to introduce an extension of the field equations of gravitation which is not justified by our actual knowledge of gravitation. It has, however, to be emphasized that a positive curvature of space is given by our results even if the supplementary term is not introduced. That term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars.
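The mutual consistency of the relations λ = κρ/2 = 1/R² and M = ρ·2π²R³ = 4π²R/κ can be verified numerically (a sketch; the radius R below is an arbitrary assumed value, and κ is taken here as 8πG/c², which is one common convention, not a figure from the text):

```python
import math

# Check that rho from lambda = kappa*rho/2 = 1/R^2 reproduces
# M = rho * 2*pi^2 * R^3 = 4*pi^2 * R / kappa.
G = 6.674e-11                  # m^3 kg^-1 s^-2
c = 2.998e8                    # m/s
kappa = 8 * math.pi * G / c**2 # Einstein's constant (assumed convention)

R = 1.0e26                     # assumed radius of curvature, m
lam = 1 / R**2                 # lambda from relation (14)
rho = 2 * lam / kappa          # equilibrium density from relation (14)

M1 = rho * 2 * math.pi**2 * R**3   # mass via density times volume
M2 = 4 * math.pi**2 * R / kappa    # mass via the closed-form expression
print(f"rho = {rho:.3e} kg/m^3, M = {M1:.3e} kg")
```

Whatever R one assumes, the two routes to M agree, which is just relation (15) restated.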
### 7
### DO GRAVITATIONAL FIELDS PLAY AN ESSENTIAL PART IN THE STRUCTURE OF THE ELEMENTARY PARTICLES OF MATTER?*
Neither the Newtonian nor the relativistic theory of gravitation has so far led to any advance in the theory of the constitution of matter. In view of this fact it will be shown in the following pages that there are reasons for thinking that the elementary formations which go to make up the atom are held together by gravitational forces.
#### 1. DEFECTS OF THE PRESENT VIEW
Great pains have been taken to elaborate a theory which will account for the equilibrium of the electricity constituting the electron. G. Mie, in particular, has devoted deep researches to this question. His theory, which has found considerable support among theoretical physicists, is based mainly on the introduction into the energy-tensor of supplementary terms depending on the components of the electro-dynamic potential, in addition to the energy terms of the Maxwell-Lorentz theory. These new terms, which in outside space are unimportant, are nevertheless effective in the interior of the electrons in maintaining equilibrium against the electric forces of repulsion. In spite of the beauty of the formal structure of this theory, as erected by Mie, Hilbert, and Weyl, its physical results have hitherto been unsatisfactory. On the one hand the multiplicity of possibilities is discouraging, and on the other hand those additional terms have not as yet been framed in so simple a form that the solution could be satisfactory.
So far the general theory of relativity has made no change in this state of the question. If for the moment we disregard the additional cosmological term, the field equations take the form
Gµν – ½ _g_ µνG = – κTµν (1)
donde Gµν denota el tensor de curvatura de Riemann contraído, G el escalar de curvatura formado por contracción repetida, y Tµν el tensor de energía de «materia». La hipótesis de que las Tµν _no_ dependen de las derivadas de las _g_ µν está en consonancia con el desarrollo histórico de estas ecuaciones. Pues estas cantidades son, por supuesto, las componentes de energía en el sentido de la teoría de la relatividad especial, en donde no aparecen variables _g_ µν. El segundo término del primer miembro de la ecuación se escoge de modo que la divergencia del primer miembro de (1) se anule idénticamente, de modo que al tomar la divergencia de (1), obtenemos la ecuación
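En notación moderna, la anulación idéntica de esa divergencia puede esbozarse así (es la identidad de Bianchi contraída, que no aparece escrita en el texto):

```latex
% Identidad de Bianchi contraída:
\nabla^{\mu}R_{\mu\nu}=\tfrac{1}{2}\,\partial_{\nu}R
\;\;\Longrightarrow\;\;
\nabla^{\mu}\!\bigl(R_{\mu\nu}-\tfrac{1}{2}\,g_{\mu\nu}R\bigr)=0
\;\;\Longrightarrow\;\;
\nabla^{\mu}T_{\mu\nu}=0 .
```

La última implicación es justamente la ecuación (2): tomando la divergencia de (1), el primer miembro se anula idénticamente y el tensor de energía resulta conservado.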
(2)
que en el caso límite de la teoría de la relatividad especial da las ecuaciones completas de conservación ∂Tµν / ∂xν = 0.
En ello reside el fundamento físico del segundo término del primer miembro de (1). No está en absoluto establecido a priori que un paso al límite de este tipo tenga algún posible significado. Pues si los campos gravitatorios desempeñan una parte esencial en la estructura de las partículas de materia, la transición al caso límite de _g_ µν constante habría perdido, para ellas, su justificación, pues en realidad, con _g_ µν constante, no podría haber partículas de materia. De modo que si deseamos contemplar la posibilidad de que la gravitación pueda tomar parte en la estructura de los campos que constituyen los corpúsculos, no podemos considerar confirmada la ecuación (1).
Colocando en (1) las componentes de energía de Maxwell-Lorentz del campo electromagnético φµν,
Tµν = ¼ _g_ µν φστφστ – φµσφντ _g_ στ (3)
obtenemos para (2), tomando la divergencia, y tras alguna reducción,
φσν Jν = 0, (4)
donde, por brevedad, hemos hecho
Jν = ∂φµν / ∂xµ. (5)
En los cálculos hemos empleado el segundo sistema de ecuaciones de Maxwell
∂φµν / ∂xσ + ∂φνσ / ∂xµ + ∂φσµ / ∂xν = 0. (6)
Vemos de (4) que la densidad de corriente debe anularse en todas partes. Por lo tanto, por la ecuación (1), no podemos llegar a una teoría del electrón restringiéndonos a las componentes electromagnéticas de la teoría de Maxwell-Lorentz, como se ha sabido desde hace tiempo. Así, si mantenemos (1) nos vemos llevados al camino de la teoría de Mie.
No sólo el problema de la materia, sino también el problema cosmológico, lleva a dudar de la ecuación (1). Como he demostrado en el artículo anterior, la teoría de la relatividad general requiere que el universo sea espacialmente finito. Pero esta visión del universo necesitaba una extensión de las ecuaciones (1), con la introducción de una nueva constante universal λ, que está en una relación fija con la masa total del universo (o, respectivamente, con la densidad de equilibrio de la materia). Esto va en grave detrimento de la belleza formal de la teoría.
#### 2. LAS ECUACIONES DE CAMPO LIBERADAS DE ESCALARES
Las dificultades planteadas más arriba se eliminan si en lugar de las ecuaciones de campo (1) establecemos las ecuaciones de campo
Gµν – ¼ _g_ µνG = – κTµν, (1a)
donde Tµν denota el tensor de energía del campo electromagnético dado por (3).
La justificación formal para el factor –¼ en el segundo término de esta ecuación reside en que hace que el escalar del primer miembro, _g_ µν (Gµν – ¼ _g_ µνG),
se haga idénticamente nulo, como lo hace el escalar _g_ νµTµν del segundo miembro debido a (3). Si hubiésemos razonado sobre la base de las ecuaciones (1) en lugar de las (1a) habríamos obtenido, por el contrario, la condición G = 0, que hubiese sido válida en todas partes para las _g_ µν, independientemente del campo eléctrico. Es evidente que el sistema de ecuaciones [(1a),(3)] es una consecuencia del sistema [(1),(3)], pero no a la inversa.
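El papel del factor –¼ se comprueba tomando la traza con _g_ µν y recordando que en cuatro dimensiones _g_ µν _g_ µν = 4:

```latex
g^{\mu\nu}\bigl(G_{\mu\nu}-\tfrac{1}{4}\,g_{\mu\nu}G\bigr)
 = G-\tfrac{1}{4}\cdot 4\cdot G = 0 ,
\qquad
g^{\mu\nu}\bigl(G_{\mu\nu}-\tfrac{1}{2}\,g_{\mu\nu}G\bigr)
 = G-2G = -G .
```

Con el factor –½ de (1), al ser nula la traza del tensor electromagnético, la traza de la ecuación obligaría a G = 0 en todas partes; con –¼ el escalar de curvatura queda sin determinar por las ecuaciones.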
A primera vista podríamos sentir dudas acerca de si (1a) junto con (6) son suficientes para definir todo el campo. En una teoría relativista general necesitamos _n_ – 4 ecuaciones diferenciales, mutuamente independientes, para la definición de _n_ variables independientes, puesto que, debido a la libertad de elección de las coordenadas, en la solución deben aparecer de forma natural cuatro funciones completamente arbitrarias de todas las coordenadas. Así, para determinar las dieciséis cantidades independientes _g_ µν y φµν necesitamos doce ecuaciones, todas ellas mutuamente independientes. Pero resulta que nueve de las ecuaciones (1a) y tres de las ecuaciones (6) son mutuamente independientes.
Formando la divergencia de (1a) y teniendo en cuenta que la divergencia de Gµν – ½ _g_ µνG se anula, obtenemos
¼ ∂G / ∂xσ + κ φσν Jν = 0. (4a)
A partir de esto reconocemos antes de nada que el escalar de curvatura G es constante en los dominios tetradimensionales en donde se anula la densidad de electricidad. Si suponemos que todas estas partes del espacio están conectadas, y por consiguiente que la densidad de electricidad difiere de cero sólo en «líneas de universo» separadas, entonces el escalar de curvatura, en todo lugar fuera de dichas líneas de universo, posee un valor constante G0. Pero la ecuación (4a) también permite una importante conclusión respecto al comportamiento de G dentro de los dominios que tienen una densidad de electricidad distinta de cero. Si, como es costumbre, consideramos la electricidad como una densidad de carga en movimiento, haciendo
Jσ = ρ ( _dx_ σ / _ds_ ), (7)
obtenemos de (4a) por un producto interno por Jσ, y debido a la antisimetría de φµν, la relación
Jσ ∂G / ∂xσ = ρ _dG_ / _ds_ = 0. (8)
Así pues, el escalar de curvatura es constante en toda línea de universo del movimiento de la electricidad. La ecuación (4a) puede interpretarse de una manera gráfica por el enunciado: El escalar de curvatura desempeña el papel de una presión negativa que, fuera de los corpúsculos eléctricos, tiene un valor constante G0. En el interior de cada corpúsculo subsiste una presión negativa (G – G0 positivo) cuya caída mantiene la fuerza electrodinámica en equilibrio. El mínimo de la presión o, respectivamente, el máximo del escalar de curvatura, no cambia con el tiempo en el interior del corpúsculo.
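El paso de (4a) a (8) puede esbozarse en notación moderna; sólo se usan la antisimetría de φµν y la definición (7):

```latex
% Producto interno de (4a) con J^{\sigma}; el término con
% \varphi_{\sigma\nu}J^{\nu}J^{\sigma} se anula por antisimetría:
J^{\sigma}\,\frac{\partial G}{\partial x_{\sigma}}
 \;\propto\; \varphi_{\sigma\nu}\,J^{\nu}J^{\sigma}=0 ,
\qquad
J^{\sigma}=\rho\,\frac{dx_{\sigma}}{ds}
\;\;\Longrightarrow\;\;
\rho\,\frac{dG}{ds}=0 .
```

Es decir, dondequiera que haya densidad de carga, la derivada de G a lo largo de la línea de universo de la carga se anula.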
Ahora escribimos las ecuaciones de campo (1a) en la forma
Gµν – ½ _g_ µνG = – κTµν – ¼ _g_ µνG. (9)
Por otra parte, transformamos las ecuaciones completadas con el término cosmológico como ya se han dado
Restando la ecuación escalar multiplicada por ½, obtenemos a continuación
Gµν – ½ _g_ µνG – λ _g_ µν = – κTµν.
Ahora, en las regiones donde sólo hay presentes campos eléctricos o gravitatorios el segundo miembro de esta ecuación se anula. Para tales regiones obtenemos, formando el escalar,
G + 4λ = 0.
En tales regiones, por lo tanto, el escalar de curvatura es constante, de modo que λ puede reemplazarse por – ¼G0. Así, podemos escribir la ecuación de campo anterior (1) en la forma
Gµν – ½ _g_ µνG = – κTµν – ¼ _g_ µνG0. (10)
Comparando (9) con (10) vemos que no hay diferencia entre las nuevas ecuaciones de campo y las anteriores, excepto que en lugar de Tµν como tensor de «masa gravitatoria» ahora aparece Tµν – ¼ _g_ µνT, que es independiente del escalar de curvatura. Pero la nueva formulación tiene esta gran ventaja, que la cantidad λ aparece en las ecuaciones fundamentales como una constante de integración, y ya no como una constante universal característica de la ley fundamental.
#### 3. SOBRE LA CUESTIÓN COSMOLÓGICA
El último resultado permite ya suponer que con nuestra nueva formulación el universo puede considerarse espacialmente finito, sin necesidad de ninguna hipótesis adicional. Como en el artículo precedente, demostraré una vez más que, con una distribución uniforme de materia, un mundo esférico es compatible con las ecuaciones.
En primer lugar hacemos
_ds_ ² = – γ _ik_ _dxi dxk_ + _dx_ 4² ( _i_ , _k_ = 1, 2, 3). (11)
Entonces si Pik y P son, respectivamente, el tensor de curvatura de segundo rango y el escalar de curvatura en el espacio tridimensional, tenemos
G _ik_ = P _ik_ ( _i_ , _k_ = 1, 2, 3)
G _i_ 4 = G4 _i_ = G44 = 0
G = –P
– _g_ = γ.
Se sigue por lo tanto para nuestro caso que
A partir de este momento continuamos nuestras reflexiones de dos maneras. En primer lugar, con el apoyo de la ecuación (1a). Aquí Tµν denota el tensor-energía del campo electromagnético, que aparece por las partículas eléctricas que constituyen la materia. Para este campo tenemos en todo lugar
Las componentes individuales son cantidades que varían rápidamente con la posición; pero para nuestro objetivo podemos reemplazarlas sin duda por sus valores medios. Por lo tanto tenemos que escoger
(12)
y por consiguiente
Considerando lo que se ha mostrado hasta ahora, obtenemos en lugar de (1a)
(13)
(14)
El escalar de la ecuación (13) coincide con (14). Por esto es por lo que nuestras ecuaciones fundamentales permiten la idea de un universo esférico. Pues de (13) y (14) se sigue
(15)
y es sabido que este sistema es satisfecho por un universo esférico (tridimensional).
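Como comprobación puede recordarse la curvatura de una esfera tridimensional de radio _a_ , valores estándar de la geometría riemanniana:

```latex
P_{ik}=\frac{2}{a^{2}}\,\gamma_{ik},
\qquad
P=\gamma^{ik}P_{ik}=\frac{6}{a^{2}} ,
```

de modo que un sistema de la forma P _ik_ = const · γ _ik_ , con constante positiva, queda satisfecho eligiendo adecuadamente el radio _a_.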
Pero también podemos basar nuestras reflexiones en las ecuaciones (9). En el segundo miembro de (9) están aquellos términos que, desde el punto de vista fenomenológico, deben ser sustituidos por el tensor-energía de materia; es decir, deben ser reemplazados por
0 | 0 | 0 | 0
0 | 0 | 0 | 0
0 | 0 | 0 | 0
0 | 0 | 0 | _ρ_
donde ρ denota la densidad media de materia que se supone en reposo. Obtenemos así las ecuaciones
(16)
(17)
A partir del escalar de la ecuación (16) y de (17) obtenemos
(18)
y en consecuencia a partir de (16)
(19)
ecuación que, salvo la expresión para los coeficientes, coincide con (15). Por comparación obtenemos
(20)
Esta ecuación significa que tres cuartos de la energía que constituye la materia deben atribuirse al campo electromagnético, y un cuarto al campo gravitatorio.
#### 4. COMENTARIOS FINALES
Las reflexiones anteriores muestran la posibilidad de una construcción teórica de la materia a partir del campo gravitatorio y el campo electromagnético solamente, sin la introducción de hipotéticos términos suplementarios en la línea de la teoría de Mie. Esta posibilidad se presenta particularmente prometedora en cuanto que nos libera de la necesidad de introducir una constante especial λ para la solución del problema cosmológico. Por otra parte, existe una dificultad peculiar. En efecto, si particularizamos (1) para el caso estático con simetría esférica obtenemos una ecuación menos de las necesarias para definir los _g_ µν y φµν, con el resultado de que _cualquier distribución esféricamente simétrica_ de electricidad parece capaz de permanecer en equilibrio. Así, el problema de la constitución de los cuantos elementales no puede ser aún resuelto sobre la base inmediata de las ecuaciones de campo dadas.
# II
## RELATIVIDAD: LA TEORÍA ESPECIAL Y GENERAL
La Tierra es una esfera ligeramente aplastada, y a pesar de ello, desde su superficie parece plana. Y de hecho, se creyó plana durante bastantes miles de años. De la misma manera, nuestro universo nos parece «plano» en cuanto que los axiomas de Euclides nos parecen obviamente ciertos, siendo el más famoso el que establece que dos líneas rectas o haces de luz pueden cruzarse como mucho una vez. Este escenario «plano» del espacio es el más simple, y también el escenario aceptado por todos los físicos anteriores a Einstein.
Einstein no echó por la borda de forma inmediata el modelo de universo plano, sino que simplemente añadió otra dimensión a la altura, la anchura y la largura: el tiempo. En «Relatividad: la Teoría Especial y General», Einstein describió la física en un espacio plano, el dominio de la relatividad especial. Sus postulados fueron bastante simples: primero, las leyes de la física son las mismas para todos los observadores que se desplazan a velocidad constante; y segundo, todos ellos miden la misma velocidad de la luz. Ciertamente, sir Isaac Newton habría asentido al primer punto, pero habría tenido que rechazar el segundo. Einstein lo consiguió al exigir que las leyes de la física fueran invariantes no sólo bajo rotaciones entre direcciones del espacio, sino también bajo «rotaciones» entre espacio y tiempo.
Einstein se dio cuenta de que la teoría no incluía la gravedad, y por tanto necesariamente estaba incompleta. Para remediarlo, como se discute en la Parte II, Einstein argumentaba que el universo debería también ser curvo. La curvatura de espacio y tiempo conlleva un número de implicaciones profundas: la luz no viaja en línea recta, sino que más bien se curva alrededor de cuerpos masivos; relojes cercanos a cuerpos masivos van más despacio respecto a relojes lejanos. En otras palabras, Einstein notó que no sólo el espacio se curva, también el tiempo lo hace. Con un simple conjunto de «ecuaciones de campo», Einstein derivó no sólo las leyes de movimiento y gravedad expuestas por Newton, sino que también allanó el camino para entender diversos fenómenos hasta entonces inexplicables.
Casi inmediatamente después de que Einstein publicara su teoría general de la relatividad en 1915, Karl Schwarzschild demostró que las ecuaciones de campo de Einstein podían ser resueltas para el caso de un único cuerpo masivo. Aunque entonces no se dieran cuenta, y nunca llegara a ser admitido por Einstein, esta solución describe objetos compactos de los cuales ni siquiera la luz puede escapar: lo que ahora llamamos «agujeros negros». Hoy creemos que algunas estrellas finalizan sus vidas como agujeros negros, y que en el centro de la mayoría de las galaxias, si no en todas, subyacen agujeros negros muy masivos. De hecho, en nuestra propia galaxia Vía Láctea, evidencias recientes sugieren que existe un agujero negro de aproximadamente tres millones de veces la masa del sol.
Dado que la luz se tuerce alrededor de objetos masivos, las imágenes de galaxias distantes pueden estar distorsionadas, o incluso multiplicadas en su camino hacia los observadores aquí en la Tierra. Este efecto, bautizado como «lentes gravitacionales», no es muy distinto al de la curvatura de una pieza de cristal. Una de las primeras observaciones que confirmaron la relatividad general fue un efecto de lente gravitacional por sir Arthur Eddington durante un eclipse solar en 1919. Eddington notó que la posición de una estrella parecía desplazarse en el firmamento respecto a su posición normal. El desplazamiento era consistente con el resultado predicho por Einstein teniendo en cuenta la masa solar.
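La desviación que midió Eddington responde a la fórmula de Einstein para un rayo de luz que pasa a distancia _b_ del centro de una masa _M_ :

```latex
\delta=\frac{4GM}{c^{2}b}\approx 1{,}75''
\quad\text{para un rayo rasante al borde del Sol.}
```

El valor newtoniano (tratando la luz como partículas) es exactamente la mitad, por lo que la medición de 1919 pudo discriminar entre ambas teorías.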
La curvatura del espacio no es necesariamente local. La mayoría de los astrofísicos modernos se ocupa de la «forma» global del universo: si resulta ser «plano», «cerrado» como una esfera (y por tanto, finito), o «abierto» como una silla de montar (y por tanto, infinito). Observaciones recientes del satélite Wilkinson Microwave Anisotropy Probe (WMAP) sugieren que el universo es plano, o bien tan extenso que todavía no puede distinguirse de uno perfectamente plano.
Al proponer Einstein su relatividad general, reconoció que su teoría predecía que el universo como un todo podía no ser estático como siempre se había asumido: la atracción gravitatoria implicaba que el universo debía de estar o bien expandiéndose, o bien contrayéndose. Por tanto, añadió una «constante cosmológica» para equilibrar la atracción de la gravedad y mantener el universo estático. En 1929, el astrónomo Edwin Hubble observó la expansión del universo, una expansión totalmente consistente con la teoría original de Einstein, pero no con su valor para la constante cosmológica. En el Apéndice 4 de este trabajo, Einstein responde a los descubrimientos recientes, y en otras notas añade que su introducción ad hoc de una constante cosmológica fue su «mayor metedura de pata». Como epílogo interesante, diremos que a mediados de los años noventa se realizaron mediciones de explosiones de supernovas distantes que indican que, aunque no con el valor propuesto por Einstein, puede que después de todo exista una constante cosmológica.
### PRÓLOGO
_El presente librito * pretende dar una idea lo más exacta posible de la teoría de la relatividad, pensando en aquellos que, sin dominar el aparato matemático de la física teórica, tienen interés en la teoría desde el punto de vista científico o filosófico general. La lectura exige una formación de bachillerato aproximadamente y —pese a su brevedad— no poca paciencia y voluntad por parte del lector. El autor ha puesto todo su empeño en resaltar con la máxima claridad y sencillez las ideas principales, respetando por lo general el orden y el contexto en que realmente surgieron. En aras de la claridad me pareció inevitable repetirme a menudo, sin reparar lo más mínimo en la elegancia expositiva; me atuve obstinadamente al precepto del genial teórico L. Boltzmann, de dejar la elegancia para los sastres y zapateros. Las dificultades que radican en la teoría propiamente dicha creo no habérselas ocultado al lector, mientras que las bases físicas empíricas de la teoría las he tratado deliberadamente con cierta negligencia, para que al lector alejado de la física no le ocurra lo que al caminante, a quien los árboles no le dejan ver el bosque. Espero que el librito depare a más de uno algunas horas de alegre entretenimiento_.
A. EINSTEIN
Diciembre de 1916
### Primera parte
### SOBRE LA TEORÍA DE LA RELATIVIDAD ESPECIAL
#### 1. EL CONTENIDO FÍSICO DE LOS TEOREMAS GEOMÉTRICOS
Seguro que también tú, querido lector, entablaste de niño conocimiento con el soberbio edificio de la geometría de Euclides y recuerdas, quizá con más respeto que amor, la imponente construcción por cuyas altas escalinatas te pasearon durante horas sin cuento los meticulosos profesores de la asignatura. Y seguro que, en virtud de ese tu pasado, castigarías con el desprecio a cualquiera que declarase falso incluso el más recóndito teoremita de esta ciencia. Pero es muy posible que este sentimiento de orgullosa seguridad te abandonara de inmediato si alguien te preguntara: «¿Qué entiendes tú al afirmar que estos teoremas son verdaderos?». Detengámonos un rato en esta cuestión.
La geometría parte de ciertos conceptos básicos, como el de plano, punto, recta, a los que estamos en condiciones de asociar representaciones más o menos claras, así como de ciertas proposiciones simples (axiomas) que, sobre la base de aquellas representaciones, nos inclinamos a dar por «verdaderas». Todos los demás teoremas son entonces referidos a aquellos axiomas (es decir, son demostrados) sobre la base de un método lógico cuya justificación nos sentimos obligados a reconocer. Un teorema es correcto, o «verdadero», cuando se deriva de los axiomas a través de ese método reconocido. La cuestión de la «verdad» de los distintos teoremas geométricos remite, pues, a la de la «verdad» de los axiomas. Sin embargo, se sabe desde hace mucho que esta última cuestión no sólo no es resoluble con los métodos de la geometría, sino que ni siquiera tiene sentido en sí. No se puede preguntar si es verdad o no que por dos puntos sólo pasa _una_ recta. Únicamente cabe decir que la geometría euclidiana trata de figuras a las que llama «rectas» y a las cuales asigna la propiedad de quedar unívocamente determinadas por dos de sus puntos. El concepto de «verdadero» no se aplica a las proposiciones de la geometría pura, porque con la palabra «verdadero» solemos designar siempre, en última instancia, la coincidencia con un objeto «real»; la geometría, sin embargo, no se ocupa de la relación de sus conceptos con los objetos de la experiencia, sino sólo de la relación lógica que guardan estos conceptos entre sí.
El que, a pesar de todo, nos sintamos inclinados a calificar de «verdaderos» los teoremas de la geometría tiene fácil explicación. Los conceptos geométricos se corresponden más o menos exactamente con objetos en la naturaleza, que son, sin ningún género de dudas, la única causa de su formación. Aunque la geometría se distancie de esto para dar a su edificio el máximo rigor lógico, lo cierto es que la costumbre, por ejemplo, de ver un segmento como dos lugares marcados en un cuerpo prácticamente rígido está muy afincada en nuestros hábitos de pensamiento. Y también estamos acostumbrados a percibir tres lugares como situados sobre una recta cuando, mediante adecuada elección del punto de observación, podemos hacer coincidir sus imágenes al mirar con un solo ojo.
Si, dejándonos llevar por los hábitos de pensamiento, añadimos ahora a los teoremas de la geometría euclidiana un único teorema más, el de que a dos puntos de un cuerpo prácticamente rígido les corresponde siempre la misma distancia (segmento), independientemente de las variaciones de posición a que sometamos el cuerpo, entonces los teoremas de la geometría euclidiana se convierten en teoremas referentes a las posibles posiciones relativas de cuerpos prácticamente rígidos. La geometría así ampliada hay que contemplarla como una rama de la física. Ahora sí cabe preguntarse por la «verdad» de los teoremas geométricos así interpretados, porque es posible preguntar si son válidos o no para aquellos objetos reales que hemos asignado a los conceptos geométricos. Aunque con cierta imprecisión, podemos decir, pues, que por «verdad» de un teorema geométrico entendemos en este sentido su validez en una construcción con regla y compás.
Naturalmente, la convicción de que los teoremas geométricos son «verdaderos» en este sentido descansa exclusivamente en experiencias harto incompletas. De entrada daremos por supuesta esa verdad de los teoremas geométricos, para luego, en la última parte de la exposición (la teoría de la relatividad general), ver que esa verdad tiene sus límites y precisar cuáles son éstos.
#### 2. EL SISTEMA DE COORDENADAS
Basándonos en la interpretación física de la distancia que acabamos de señalar estamos también en condiciones de determinar la distancia entre dos puntos de un cuerpo rígido por medio de mediciones. Para ello necesitamos un segmento (regla S) que podamos utilizar de una vez para siempre y que sirva de escala unidad. Si _A_ y _B_ son dos puntos de un cuerpo rígido, su recta de unión es entonces construible según las leyes de la geometría; sobre esta recta de unión, y a partir de _A_ , llevamos el segmento S tantas veces como sea necesario para llegar a _B_. El número de repeticiones de esta operación es la medida del segmento _AB_. Sobre esto descansa toda medición de longitudes.
Cualquier descripción espacial del lugar de un suceso o de un objeto consiste en especificar el punto de un cuerpo rígido (cuerpo de referencia) con el cual coincide el suceso, y esto vale no sólo para la descripción científica, sino también para la vida cotidiana. Si analizo la especificación de lugar «en Berlín, en la Plaza de Potsdam», veo que significa lo siguiente. El suelo terrestre es el cuerpo rígido al que se refiere la especificación de lugar; sobre él, «Plaza de Potsdam en Berlín» es un punto marcado, provisto de nombre, con el cual coincide espacialmente el suceso.
Este primitivo modo de localización sólo atiende a lugares situados en la superficie de cuerpos rígidos y depende de la existencia de puntos distinguibles sobre aquélla. Veamos cómo el ingenio humano se libera de estas dos limitaciones sin que la esencia del método de localización sufra modificación alguna. Si sobre la Plaza de Potsdam flota por ejemplo una nube, su posición, referida a la superficie terrestre, cabrá fijarla sin más que erigir en la plaza un mástil vertical que llegue hasta la nube. La longitud del mástil medida con la regla unidad, junto con la especificación del lugar que ocupa el pie del mástil, constituyen entonces una localización completa. El ejemplo nos muestra de qué manera se fue refinando el concepto de lugar:
a) Se prolonga el cuerpo rígido al que se refiere la localización, de modo que el cuerpo rígido ampliado llegue hasta el objeto a localizar.
b) Para la caracterización del lugar se utilizan _números_ , y no la nomenclatura de puntos notables (en el caso anterior, la longitud del mástil medida con la regla).
c) Se sigue hablando de la altura de la nube aun cuando no se erija un mástil que llegue hasta ella. En nuestro caso, se determina —mediante fotografías de la nube desde diversos puntos del suelo y teniendo en cuenta las propiedades de propagación de la luz— qué longitud habría que dar al mástil para llegar a la nube.
De estas consideraciones se echa de ver que para la descripción de lugares es ventajoso independizarse de la existencia de puntos notables, provistos de nombres y situados sobre el cuerpo rígido al que se refiere la localización, y utilizar en lugar de ello números. La física experimental cubre este objetivo empleando el sistema de coordenadas cartesianas.
Este sistema consta de tres paredes rígidas, planas, perpendiculares entre sí y ligadas a un cuerpo rígido. El lugar de cualquier suceso, referido al sistema de coordenadas, viene descrito (en esencia) por la especificación de la longitud de las tres verticales o coordenadas ( _x_ , _y_ , _z_ ) [cf. figura p. 214] que pueden trazarse desde el suceso hasta esas tres paredes. Las longitudes de estas tres perpendiculares pueden determinarse mediante una sucesión de manipulaciones con reglas rígidas, manipulaciones que vienen prescritas por las leyes y métodos de la geometría euclidiana.
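Con las coordenadas así definidas, la distancia entre dos puntos se calcula, según la geometría euclidiana, mediante el teorema de Pitágoras:

```latex
s=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}+(z_{2}-z_{1})^{2}} .
```

Sobre esta fórmula descansa, en última instancia, la afirmación posterior de que los «segmentos» se rigen por las leyes de la geometría euclidiana.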
En las aplicaciones no suelen construirse realmente esas paredes rígidas que forman el sistema de coordenadas; y las coordenadas tampoco se determinan realmente por medio de construcciones con reglas rígidas, sino indirectamente. Pero el sentido físico de las localizaciones debe buscarse siempre en concordancia con las consideraciones anteriores, so pena de que los resultados de la física y la astronomía se diluyan en la falta de claridad.
La conclusión es, por tanto, la siguiente: toda descripción espacial de sucesos se sirve de un cuerpo rígido al que hay que referirlos espacialmente. Esa referencia presupone que los «segmentos» se rigen por las leyes de la geometría euclidiana, viniendo representados físicamente por dos marcas sobre un cuerpo rígido.
#### 3. ESPACIO Y TIEMPO EN LA MECÁNICA CLÁSICA
Si formulo el objetivo de la mecánica diciendo que «la mecánica debe describir cómo varía con el tiempo la posición de los cuerpos en el espacio», sin añadir grandes reservas y prolijas explicaciones, cargaría sobre mi conciencia algunos pecados capitales contra el sagrado espíritu de la claridad. Indiquemos antes que nada estos pecados.
No está claro qué debe entenderse aquí por «posición» y «espacio». Supongamos que estoy asomado a la ventanilla de un vagón de ferrocarril que lleva una marcha uniforme, y dejo caer una piedra a la vía, sin darle ningún impulso. Entonces veo (prescindiendo de la influencia de la resistencia del aire) que la piedra cae en línea recta. Un peatón que asista a la fechoría desde el terraplén observa que la piedra cae a tierra según un arco de parábola. Yo pregunto ahora: las «posiciones» que recorre la piedra ¿están «realmente» sobre una recta o sobre una parábola? Por otro lado, ¿qué significa aquí movimiento en el «espacio»? La respuesta es evidente después de lo dicho en el epígrafe 2. Dejemos de momento a un lado la oscura palabra «espacio», que, para ser sinceros, no nos dice absolutamente nada; en lugar de ella ponemos «movimiento respecto a un cuerpo de referencia prácticamente rígido». Las posiciones con relación al cuerpo de referencia (vagón del tren o vías) han sido ya definidas explícitamente en el epígrafe anterior. Introduciendo en lugar de «cuerpo de referencia» el concepto de «sistema de coordenadas», que es útil para la descripción matemática, podemos decir: la piedra describe, con relación a un sistema de coordenadas rígidamente unido al vagón, una recta; con relación a un sistema de coordenadas rígidamente ligado a las vías, una parábola. En este ejemplo se ve claramente que en rigor no existe la trayectoria, sino sólo una trayectoria con relación a un cuerpo de referencia determinado.
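El ejemplo de la piedra puede ponerse en fórmulas; como esbozo, tomando el eje _x_ en la dirección de marcha, _v_ la velocidad del vagón, _h_ la altura inicial y despreciando la resistencia del aire:

```latex
% Respecto al vagón (K'): caída vertical, una recta
x'=0,\qquad z'=h-\tfrac{1}{2}\,g\,t^{2} .
% Respecto a la vía (K), con x=x'+v\,t, es decir x=v\,t:
z=h-\frac{g}{2v^{2}}\,x^{2}
\quad\text{(una parábola).}
```

El mismo movimiento es, pues, una recta en un sistema de coordenadas y una parábola en el otro; sólo cambia el cuerpo de referencia.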
Ahora bien, la descripción _completa_ del movimiento no se obtiene sino al especificar cómo varía la posición del cuerpo _con el tiempo_ , o lo que es lo mismo, para cada punto de la trayectoria hay que indicar en qué momento se encuentra allí el cuerpo. Estos datos hay que completarlos con una definición del tiempo en virtud de la cual podamos considerar estos valores temporales como magnitudes esencialmente observables (resultados de mediciones). Nosotros, sobre el suelo de la mecánica clásica, satisfacemos esta condición —con relación al ejemplo anterior— de la siguiente manera. Imaginemos dos relojes exactamente iguales; uno de ellos lo tiene el hombre en la ventanilla del vagón de tren; el otro, el hombre que está de pie en el terraplén. Cada uno de ellos verifica en qué lugar del correspondiente cuerpo de referencia se encuentra la piedra en cada instante marcado por el reloj que tiene en la mano. Nos abstenemos de entrar aquí en la imprecisión introducida por el carácter finito de la velocidad de propagación de la luz. Sobre este extremo, y sobre una segunda dificultad que se presenta aquí, hablaremos detenidamente más adelante.
#### 4. EL SISTEMA DE COORDENADAS DE GALILEO
Como es sabido, la ley fundamental de la mecánica de Galileo y Newton, conocida por la ley de inercia, dice: un cuerpo suficientemente alejado de otros cuerpos persiste en su estado de reposo o de movimiento rectilíneo uniforme. Este principio se pronuncia no sólo sobre el movimiento de los cuerpos, sino también sobre qué cuerpos de referencia o sistemas de coordenadas son permisibles en la mecánica y pueden utilizarse en las descripciones mecánicas. Algunos de los cuerpos a los que sin duda cabe aplicar con gran aproximación la ley de inercia son las estrellas fijas. Ahora bien, si utilizamos un sistema de coordenadas solidario con la Tierra, cada estrella fija describe, con relación a él y a lo largo de un día (astronómico), una circunferencia de radio enorme, en contradicción con el enunciado de la ley de inercia. Así pues, si uno se atiene a esta ley, entonces los movimientos sólo cabe referirlos a sistemas de coordenadas con relación a los cuales las estrellas fijas no ejecutan movimientos circulares. Un sistema de coordenadas cuyo estado de movimiento es tal que con relación a él es válida la ley de inercia lo llamamos «sistema de coordenadas de Galileo». Las leyes de la mecánica de Galileo-Newton sólo tienen validez para sistemas de coordenadas de Galileo.
#### 5. EL PRINCIPIO DE LA RELATIVIDAD (EN SENTIDO RESTRINGIDO)
Para conseguir la mayor claridad posible, volvamos al ejemplo del vagón de tren que lleva una marcha uniforme. Su movimiento decimos que es una traslación uniforme («uniforme», porque es de velocidad y dirección constantes; «traslación», porque aunque la posición del vagón varía con respecto a la vía, no ejecuta ningún giro). Supongamos que por los aires vuela un cuervo en línea recta y uniformemente (respecto a la vía). No hay duda de que el movimiento del cuervo es —respecto al vagón en marcha— un movimiento de distinta velocidad y diferente dirección, pero sigue siendo rectilíneo y uniforme. Expresado de modo abstracto: si una masa _m_ se mueve en línea recta y uniformemente respecto a un sistema de coordenadas _K_ , entonces también se mueve en línea recta y uniformemente respecto a un segundo sistema de coordenadas _K'_ , siempre que éste ejecute respecto a _K_ un movimiento de traslación uniforme. Teniendo en cuenta lo dicho en el párrafo anterior, se desprende de aquí lo siguiente:
Si _K_ es un sistema de coordenadas de Galileo, entonces también lo es cualquier otro sistema de coordenadas _K'_ que respecto a _K_ se halle en un estado de traslación uniforme. Las leyes de la mecánica de Galileo-Newton valen tanto respecto a _K'_ como respecto a _K_.
Demos un paso más en la generalización y enunciemos el siguiente principio: Si _K'_ es un sistema de coordenadas que se mueve uniformemente y sin rotación respecto a _K_ , entonces los fenómenos naturales transcurren con respecto a _K'_ según idénticas leyes generales que con respecto a _K_. Esta proposición es lo que llamaremos el «principio de relatividad» (en sentido restringido).
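En fórmulas: la transformación de Galileo deja las aceleraciones, y con ellas la ley de Newton, inalteradas, que es el contenido del principio de relatividad en la mecánica clásica:

```latex
x'=x-v\,t,\quad t'=t
\;\;\Longrightarrow\;\;
\frac{d^{2}x'}{dt'^{2}}=\frac{d^{2}x}{dt^{2}}
\;\;\Longrightarrow\;\;
m\,\frac{d^{2}x'}{dt'^{2}}=F .
```

Por eso una masa que se mueve en línea recta y uniformemente respecto a _K_ (aceleración nula) lo hace también respecto a cualquier _K'_ en traslación uniforme.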
Mientras se mantuvo la creencia de que todos los fenómenos naturales se podían representar con ayuda de la mecánica clásica, no se podía dudar de la validez de este principio de relatividad. Sin embargo, los recientes adelantos de la electrodinámica y de la óptica hicieron ver cada vez más claramente que la mecánica clásica, como base de toda descripción física de la naturaleza, no era suficiente. La cuestión de la validez del principio de relatividad se tornó así perfectamente discutible, sin excluir la posibilidad de que la solución fuese en sentido negativo.
Existen, con todo, dos hechos generales que de entrada hablan muy a favor de la validez del principio de relatividad. En efecto, aunque la mecánica clásica no proporciona una base suficientemente ancha para representar teóricamente _todos_ los fenómenos físicos, tiene que poseer un contenido de verdad muy importante, pues da con admirable precisión los movimientos reales de los cuerpos celestes. De ahí que en el campo de la mecánica tenga que ser válido con gran exactitud el principio de relatividad. Y que un principio de generalidad tan grande y que es válido, con tanta exactitud, en un determinado campo de fenómenos fracase en otro campo es, a priori, poco probable.
El segundo argumento, sobre el que volveremos más adelante, es el siguiente. Si el principio de relatividad (en sentido restringido) no es válido, entonces los sistemas de coordenadas de Galileo _K_ , _K'_ , _K"_ , etc., que se mueven uniformemente unos respecto a los otros, no serán equivalentes para la descripción de los fenómenos naturales. En ese caso no tendríamos más remedio que pensar que las leyes de la naturaleza sólo pueden formularse con especial sencillez y naturalidad si de entre todos los sistemas de coordenadas de Galileo eligiésemos como cuerpo de referencia _uno_ ( _K_ 0) que tuviera un estado de movimiento determinado. A éste lo calificaríamos, y con razón (por sus ventajas para la descripción de la naturaleza), de «absolutamente en reposo», mientras que de los demás sistemas galileanos _K_ diríamos que son «móviles». Si la vía fuese el sistema _K_ 0, pongamos por caso, entonces nuestro vagón de ferrocarril sería un sistema _K_ respecto al cual regirían leyes menos sencillas que respecto a _K_ 0. Esta menor simplicidad habría que atribuirla a que el vagón _K_ se mueve respecto a _K_ 0 (es decir, «realmente»). En estas leyes generales de la naturaleza formuladas respecto a _K_ tendrían que desempeñar un papel el módulo y la dirección de la velocidad del vagón. Sería de esperar, por ejemplo, que el tono de un tubo de órgano fuese distinto cuando su eje fuera paralelo a la dirección de marcha que cuando estuviese perpendicular. Ahora bien, la Tierra, debido a su movimiento orbital alrededor del Sol, es equiparable a un vagón que viajara a unos 30 km por segundo. 
Por consiguiente, caso de no ser válido el principio de relatividad, sería de esperar que la dirección instantánea del movimiento terrestre interviniera en las leyes de la naturaleza y que, por lo tanto, el comportamiento de los sistemas físicos dependiera de su orientación espacial respecto a la Tierra; porque, como la velocidad del movimiento de rotación terrestre varía de dirección en el transcurso del año, la Tierra no puede estar todo el año en reposo respecto al hipotético sistema _K_ 0. Pese al esmero que se ha puesto en detectar una tal anisotropía del espacio físico terrestre, es decir, una no equivalencia de las distintas direcciones, jamás ha podido ser observada. Lo cual es un argumento de peso a favor del principio de la relatividad.
#### 6. THE THEOREM OF THE ADDITION OF VELOCITIES ACCORDING TO CLASSICAL MECHANICS
Suppose our oft-mentioned railway carriage travels along the line with constant velocity _v_, and that a man walks through its interior in the direction of travel with velocity _w_. With what velocity _W_ does the man advance relative to the track as he walks? The only possible answer seems to follow from this consideration:

If the man stood still for one second, he would advance relative to the track by a distance _v_, equal to the velocity of the carriage. In that second, however, he also covers, relative to the carriage, and hence also relative to the track, a distance _w_, equal to his walking speed. Consequently, in that second he advances in total the distance

_W_ = _v_ + _w_

relative to the track. We shall see later that this reasoning, which expresses the theorem of the addition of velocities according to classical mechanics, is untenable, and that the law we have just written down does not in fact hold. For the time being, however, we shall build on its validity.
#### 7. THE APPARENT INCOMPATIBILITY OF THE LAW OF PROPAGATION OF LIGHT WITH THE PRINCIPLE OF RELATIVITY
There is hardly a simpler law in physics than that of the propagation of light in empty space. Every schoolchild knows (or believes they know) that this propagation takes place in straight lines with a velocity _c_ = 300,000 km/s. In any case, we know with great exactness that this velocity is the same for all colours; for if it were not, the minimum of emission during the eclipse of a fixed star by its dark companion would not be observed simultaneously for the different colours. By a similar line of reasoning applied to observations of double stars, the Dutch astronomer De Sitter was also able to show that the velocity of propagation of light cannot depend on the velocity of motion of the emitting body. The assumption that this velocity of propagation depends on the direction "in space" is in itself improbable. In short, let us assume that the schoolchild is justified in believing in the simple law of the constancy of the velocity of light _c_ (in vacuo). Who would imagine that this simple law has plunged the most conscientious physicists into the greatest conceptual difficulties? The problems arise as follows.

Naturally, the process of the propagation of light, like every other process, must be referred to a rigid body of reference (coordinate system). We again choose the railway track for this purpose, and imagine that the air above it has been pumped away. Suppose a ray of light is emitted along the embankment, its tip propagating, according to the above, with velocity _c_ relative to the embankment. Our railway carriage is again travelling with velocity _v_, in the same direction in which the light ray propagates, but of course much more slowly. What we wish to find is the velocity of propagation of the ray of light relative to the carriage. It is easy to see that the reasoning of the previous section applies here, for the man walking relative to the carriage plays the part of the ray of light. In place of his velocity _W_ relative to the embankment we here have the velocity of light relative to it; the velocity _w_ we seek, that of the light relative to the carriage, is therefore equal to:

_w_ = _c_ – _v_.

The velocity of propagation of the ray of light relative to the carriage thus turns out to be smaller than _c_.

Now this result conflicts with the principle of relativity set forth in Section 5. For, according to that principle, the law of the propagation of light in vacuo, like every other general law of nature, should be the same whether we take the carriage or the track as body of reference, which seems impossible by our reasoning. If every ray of light propagates relative to the embankment with velocity _c_, the law of propagation relative to the carriage appears, for that very reason, to be a different one... in contradiction with the principle of relativity.

In view of this dilemma it seems unavoidable to abandon either the principle of relativity or the simple law of the propagation of light in vacuo. The reader who has followed the preceding considerations attentively will surely expect the principle of relativity, which by its naturalness and simplicity forces itself on the mind as almost inescapable, to be the one that survives, the law of the propagation of light in vacuo being replaced instead by a more complicated law compatible with the principle of relativity. The development of theoretical physics showed, however, that this path was impracticable. The groundbreaking theoretical investigations of H. A. Lorentz on electrodynamic and optical processes in moving bodies showed that experience in these fields leads with compelling necessity to a theory of electromagnetic processes which has as an irrefutable consequence the law of the constancy of the velocity of light in vacuo. Leading theorists were therefore more inclined to give up the principle of relativity, despite being unable to find a single experimental fact that contradicted it.

This is where the theory of relativity entered. By means of an analysis of the concepts of space and time it became apparent that _in reality there is no incompatibility whatever between the principle of relativity and the law of propagation of light_, and that, by adhering systematically to both of these laws, one arrives at a logically impeccable theory. This theory, which we call the "special theory of relativity" to distinguish it from its extension (discussed later), is the one whose fundamental ideas we shall now present.
#### 8. ON THE CONCEPT OF TIME IN PHYSICS
Lightning has struck two places _A_ and _B_ on the railway track, far apart from each other. I add the assertion that both strokes occurred _simultaneously_. If I now ask you, dear reader, whether this assertion has any meaning, you will answer with a decided "yes". But if I then press you to explain that meaning more precisely, you will notice after some reflection that the answer is not as simple as it first appears.

After a while the following answer may occur to you: "The meaning of the statement is clear in itself and needs no further explanation; I should, however, have to reflect a little if I were required to determine, by observations, whether in a concrete case the two events are simultaneous or not." I cannot be satisfied with this answer, for the following reason. Supposing that an expert meteorologist had found, by the most acute reasoning, that lightning must always strike the places _A_ and _B_ simultaneously, the problem would arise of checking whether this theoretical result corresponds to reality or not. Something analogous applies to all physical statements in which the concept of "simultaneous" plays a part. For the physicist, the concept does not exist until the possibility is given of ascertaining in a concrete case whether it holds or not. What is needed, therefore, is a definition of simultaneity that provides the method for deciding experimentally, in the case at hand, whether the two lightning bolts struck simultaneously or not. As long as this requirement is not satisfied, I am deceiving myself as a physicist (and as a non-physicist too!) in believing that I can give meaning to that statement of simultaneity. (Do not read on, dear reader, until you grant me this with full conviction.)

After some time of reflection you make the following proposal for establishing simultaneity. The connecting line _AB_ is measured along the track, and an observer is placed at its midpoint _M_, equipped with an arrangement (two mirrors at 90° to each other, for example) that allows him to observe both places _A_ and _B_ optically at the same time. If the observer perceives the two flashes simultaneously, then they are simultaneous.

Although the proposal pleases me greatly, I still think the matter is not entirely settled, for I feel compelled to raise the following objection: "Your definition would certainly be right if I already knew that the light by which the observer at _M_ perceives the flashes travels with the same velocity along the segment (_A M_) as along the segment (_B M_). But checking this assumption would be possible only if we already had at our disposal the means of measuring time. It seems, then, that we are moving in a logical circle."

After further reflection you cast a somewhat disdainful glance at me, and rightly so, and declare: "I maintain my previous definition nevertheless, because in reality it presupposes nothing about light. Only one condition must be imposed on the definition of simultaneity, namely that in every real case it allows an empirical decision as to the applicability or non-applicability of the concept to be defined. That my definition fulfils this aim is undeniable. That light requires the same time to traverse the path _A M_ as the path _B M_ is in reality neither a _presupposition nor a hypothesis_ about the physical nature of light, but a _stipulation_ which I can make at my own discretion in order to arrive at a definition of simultaneity."

It is clear that this definition can be used to give an exact meaning to the statement of simultaneity not only of two events, but of an arbitrary number of them, whatever their positions with respect to the body of reference. It thereby also leads to a definition of "time" in physics. Imagine, namely, that at the points _A_, _B_, _C_ of the track (coordinate system) there are clocks of identical construction, set so that the positions of their hands are simultaneously (in the above sense) the same. The "time" of an event is then understood to be the time (position of the hands) indicated by whichever of these clocks is immediately adjacent (spatially) to the event. In this way a time value is assigned to every event, a value that is essentially observable.

This definition involves a further physical hypothesis whose validity, in the absence of empirical reasons to the contrary, cannot be doubted. It is assumed, namely, that all these clocks run "at the same rate" if they are of identical construction. Stated exactly: if two clocks at rest at different places on the body of reference are set so that a particular position of the hands of one is _simultaneous_ (in the above sense) with _the same_ position of the hands of the other, then identical positions of the hands are in general simultaneous (in the sense of the above definition).
#### 9. THE RELATIVITY OF SIMULTANEITY
Up to now we have referred our reasoning to a particular body of reference which we have called "the embankment" or "the track". Suppose a very long train travels along the rails with constant velocity _v_ and in the direction indicated in the figure below. People travelling in this train will find it advantageous to use the train as a rigid body of reference (coordinate system) and to refer all events to the train. Every event that takes place along the line also takes place at a particular point of the train. Even the definition of simultaneity can be given in exactly the same way with respect to the train as with respect to the track. The following question now arises, however:

Are two events (for example, the two lightning strokes _A_ and _B_) that are simultaneous _with respect to the embankment_ also simultaneous _with respect to the train_? We shall show presently that the answer must be negative.

When we say that the lightning strokes _A_ and _B_ are simultaneous with respect to the track, we mean: the rays of light emitted from the places _A_ and _B_ meet at the midpoint _M_ of the stretch of track _A-B_. But the events _A_ and _B_ also correspond to places _A_ and _B_ on the train. Let _M'_ be the midpoint of the segment _A-B_ of the moving train. At the instant the lightning strikes, this point _M'_ does indeed coincide with the point _M_, but, as indicated in the figure, it moves to the right with the velocity _v_ of the train. If an observer sitting at _M'_ in the train did not possess this velocity, he would remain permanently at _M_, and the rays of light from the flashes _A_ and _B_ would reach him simultaneously, i.e. those two rays would meet precisely at his position. In reality, however (judging the situation from the embankment), this observer is hastening towards the ray of light coming from _B_, while fleeing from the one advancing from _A_. He will therefore see the light coming from _B_ earlier than that coming from _A_. In short, observers who use the train as their body of reference must come to the conclusion that the lightning flash _B_ occurred before the flash _A_. We thus arrive at an important result:

Events that are simultaneous with respect to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every body of reference (coordinate system) has its own particular time; a statement of time has meaning only when the body of reference to which it refers is indicated.

Before the theory of relativity, physics always tacitly assumed that the meaning of time data was absolute, i.e. independent of the state of motion of the body of reference. But we have just seen that this assumption is incompatible with the natural definition of simultaneity; if we discard it, the conflict between the law of the propagation of light and the principle of relativity, set out in Section 7, disappears.

Indeed, the conflict stems from the reasoning of Section 6, which now proves untenable. We inferred there that the man walking through the carriage, who covers the distance _w in one second_, also covers that same distance _in one second_ relative to the track. But since, in virtue of the above reflections, the time a process requires with respect to the carriage cannot be equated with the duration of the same process as judged from the body of reference of the embankment, it cannot be asserted that the man, in walking relative to the track, covers the distance _w_ in a time which, judged from the embankment, equals one second.

Let us note in passing that the reasoning of Section 6 rests on yet a second assumption which, in the light of rigorous reflection, proves arbitrary, even though it was always accepted (implicitly) before the theory of relativity was established.
#### 10. ON THE RELATIVITY OF THE CONCEPT OF SPATIAL DISTANCE
Let us consider two particular places on the train travelling along the line with velocity _v_, and ask what distance lies between them. We already know that measuring a distance requires a body of reference relative to which the measurement is made. The simplest procedure is to use the train itself as body of reference (coordinate system). An observer travelling in the train measures the distance by laying a measuring rod along the floor of the carriages in a straight line, say, until he gets from one of the marked points to the other. The number that tells how many times he laid down the rod is then the distance sought.

It is a different matter if the distance is to be measured from the track. Here the following method offers itself. Let _A'_ and _B'_ be the two points of the train whose distance is in question; these two points move with velocity _v_ along the track. We first ask for the points _A_ and _B_ of the track past which _A'_ and _B'_ are passing at a particular moment _t_ (judged from the track). By virtue of the definition of time given in Section 8, these points _A_ and _B_ of the track can be determined. The distance between _A_ and _B_ is then measured by laying the measuring rod along the track repeatedly.

A priori it is by no means certain that this second measurement must yield the same result as the first. The length of the train, measured from the track, may differ from its length measured from the train itself. This circumstance gives rise to a second objection to the apparently crystal-clear reasoning of Section 6. For if the man in the carriage covers, in a unit of time, the distance _w as measured from the train_, this distance, _measured from the track_, need not be equal to _w_.
#### 11. THE LORENTZ TRANSFORMATION
The considerations of the last three sections show us that the apparent incompatibility of the law of propagation of light with the principle of relativity in Section 7 was derived by a line of reasoning that borrowed two unjustified hypotheses from classical mechanics; these hypotheses are:

1. The time interval between two events is independent of the state of motion of the body of reference.

2. The spatial interval between two points of a rigid body is independent of the state of motion of the body of reference.
If we drop these two hypotheses, the dilemma of Section 7 disappears, because the theorem of the addition of velocities derived in Section 6 loses its validity. The possibility opens up before us that the law of the propagation of light in vacuo may be compatible with the principle of relativity. We thus arrive at the question: how must the reasoning of Section 6 be modified in order to remove the apparent contradiction between these two fundamental results of experience? This question leads to one of a general kind. In the reasoning of Section 6, places and times occur in relation to the train and in relation to the track. How are the place and time of an event in relation to the train found when the place and time of the event with respect to the track are known? Is there an answer to this question such that the law of propagation in vacuo does not contradict the principle of relativity? In other words: can a relation be found between the positions and times of the various events in relation to the two bodies of reference, such that every ray of light has the velocity of propagation _c_ relative to the track and relative to the train? This question leads to a quite definite, affirmative answer: to a perfectly precise transformation law for the space-time magnitudes of an event when passing from one body of reference to another.

Before we go into this, let us interpose the following consideration. Up to now we have spoken only of events taking place along the track, which played the mathematical role of a straight line. But, following what was indicated in Section 2, we can imagine this body of reference supplemented laterally and vertically by means of a framework of rods, so that any event, wherever it occurs, can be localised with respect to this framework. Similarly, we can imagine the train travelling with velocity _v_ continued across the whole of space, so that every event, however distant, can also be localised with respect to the second framework. Without committing any theoretical error, we may disregard the fact that in reality these frameworks would continually smash into each other owing to the impenetrability of solid bodies. In each of these frameworks we imagine three mutually perpendicular surfaces erected, which we call "coordinate planes" ("coordinate system"). A coordinate system _K_ then corresponds to the embankment, and a coordinate system _K'_ to the train. Any event, wherever it takes place, is fixed spatially with respect to _K_ by the three perpendiculars _x_, _y_, _z_ dropped onto the coordinate planes, and temporally by a time value _t_. _The same event_ is fixed in space and time with respect to _K'_ by corresponding values _x'_, _y'_, _z'_, _t'_, which of course do not coincide with _x_, _y_, _z_, _t_. We have already explained in detail how these magnitudes are to be interpreted as results of physical measurements.
Evidently, our problem can be formulated exactly as follows: Given the quantities _x_, _y_, _z_, _t_ of an event with respect to _K_, what are the values _x'_, _y'_, _z'_, _t'_ of the same event with respect to _K'_? The relations must be chosen so that the law of the propagation of light in vacuo is satisfied for one and the same ray of light (and indeed for every ray of light) with respect to _K_ and _K'_. For the relative spatial orientation shown in the diagram of the figure on page 214, the problem is solved by the equations:

_x'_ = (_x_ – _vt_) / √(1 – _v_²/_c_²)

_y'_ = _y_

_z'_ = _z_

_t'_ = (_t_ – _vx_/_c_²) / √(1 – _v_²/_c_²)

This system of equations is known as the "Lorentz transformation".
If, instead of the law of the propagation of light, we had based ourselves on the assumptions implicit in the old mechanics about the absolute character of times and lengths, then instead of the above transformation equations we should have obtained these:

_x'_ = _x_ – _vt_

_y'_ = _y_

_z'_ = _z_

_t'_ = _t_,

a system often referred to as the "Galilei transformation". The Galilei transformation is obtained from the Lorentz transformation by setting the velocity of light _c_ in the latter equal to an infinitely large value.
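This passage to the limit can be checked numerically. The sketch below is my own illustration (the function names are assumptions, not from the text): as _c_ is made larger and larger with _x_, _t_, _v_ fixed, the Lorentz formulas approach the Galilei ones.

```python
import math

def lorentz(x, t, v, c):
    # First and fourth Lorentz equations, for motion along the x axis.
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

def galilei(x, t, v):
    # Galilei transformation: shifted position, absolute time.
    return x - v * t, t

x, t, v = 100.0, 2.0, 30.0        # metres, seconds, metres per second
for c in (3.0e2, 3.0e5, 3.0e8):   # let c grow towards "infinity"
    print(c, lorentz(x, t, v, c))
print(galilei(x, t, v))           # limiting values: (40.0, 2.0)
```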
The following example shows clearly that, by the Lorentz transformation, the law of the propagation of light in vacuo is satisfied with respect to the body of reference _K_ as well as with respect to the body of reference _K'_. Suppose a light signal is sent along the positive _x_ axis, the light stimulus propagating according to the equation

_x_ = _ct_,

i.e. with velocity _c_. According to the equations of the Lorentz transformation, this simple relation between _x_ and _t_ determines a relation between _x'_ and _t'_. Indeed, substituting the value _ct_ for _x_ in the first and fourth equations of the Lorentz transformation, we obtain:

_x'_ = (_c_ – _v_)_t_ / √(1 – _v_²/_c_²)

_t'_ = (1 – _v_/_c_)_t_ / √(1 – _v_²/_c_²)

from which, by division, it follows immediately that

_x'_ = _ct'_

The propagation of light, referred to the system _K'_, takes place according to this equation. It is thus confirmed that the velocity of propagation is also equal to _c_ with respect to the body of reference _K'_; and similarly for rays of light propagating in any other direction. This, of course, is not surprising, since the equations of the Lorentz transformation were derived with precisely this requirement in view.
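The check just made by hand is easy to mechanise. The sketch below (my own illustration; the names are assumptions, not the book's) transforms several events lying on the light ray _x_ = _ct_ and verifies that each of them satisfies _x'_ = _ct'_:

```python
import math

C = 299_792_458.0  # velocity of light in vacuo, m/s

def lorentz(x, t, v, c=C):
    # First and fourth Lorentz equations, for motion along the x axis.
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

v = 0.6 * C                 # velocity of the train relative to the track
for t in (1e-9, 1.0, 2.5):  # arbitrary instants
    x = C * t               # an event on the light signal: x = ct
    xp, tp = lorentz(x, t, v)
    assert math.isclose(xp, C * tp)  # x' = ct' holds in K' as well
```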
#### 12. THE BEHAVIOUR OF MOVING MEASURING RODS AND CLOCKS
I place a metre rod on the _x'_ axis of _K'_ in such a way that one end coincides with the point _x'_ = 0 and the other with the point _x'_ = 1. What is the length of the rod with respect to the system _K_? To find out, we can determine the positions of both ends with respect to _K_ at a particular instant _t_. From the first equation of the Lorentz transformation, for _t_ = 0, we obtain for these two points:

_x_ (beginning of rod) = 0

_x_ (end of rod) = √(1 – _v_²/_c_²)

so the two points are a distance √(1 – _v_²/_c_²) apart. Now, the metre rod moves relative to _K_ with velocity _v_, from which it follows that the length of a rigid metre rod moving with velocity _v_ in the direction of its length is √(1 – _v_²/_c_²) of a metre. The rigid rod in motion is therefore shorter than the same rod at rest, and the faster it moves the shorter it is. For the velocity _v_ = _c_ we should have √(1 – _v_²/_c_²) = 0, and for still greater velocities the square root would become imaginary. From this we infer that in the theory of relativity the velocity _c_ plays the part of a limiting velocity which can neither be reached nor exceeded by any real body.

Let us add that this role of the velocity _c_ as a limiting velocity follows from the very equations of the Lorentz transformation, for these lose all meaning if _v_ is chosen greater than _c_.

If, conversely, we had considered a metre rod at rest on the _x_ axis with respect to _K_, we should have found that its length in relation to _K'_ is √(1 – _v_²/_c_²); this is fully in accordance with the principle of relativity, on which our considerations are based.

A priori it is quite clear that the transformation equations must have something to say about the physical behaviour of measuring rods and clocks, for the quantities _x_, _y_, _z_, _t_ are nothing other than results of measurements obtainable with clocks and rods. If we had based ourselves on the Galilei transformation, we should not have obtained a shortening of lengths as a consequence of motion.
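As a numerical illustration of the contraction factor (the speeds below are arbitrary choices of mine, not values from the text):

```python
import math

C = 299_792_458.0  # velocity of light in vacuo, m/s

def contracted_length(rest_length, v, c=C):
    # Length of a rod moving lengthwise with speed v, as judged from K.
    return rest_length * math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99):
    L = contracted_length(1.0, frac * C)
    print(f"v = {frac:4.2f} c  ->  {L:.4f} m")
```

This prints 0.9950, 0.8660, 0.4359 and 0.1411 m respectively: the metre rod shrinks slowly at first, then drastically as _v_ approaches _c_.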
Let us now consider a seconds-clock permanently at rest at the origin (_x'_ = 0) of _K'_. Let _t'_ = 0 and _t'_ = 1 be two successive ticks of this clock. For these two ticks, the first and fourth equations of the Lorentz transformation give:

_t_ = 0

and

_t_ = 1 / √(1 – _v_²/_c_²)

Judged from _K_, the clock moves with velocity _v_; with respect to this body of reference, between two of its ticks there elapses not one second but 1/√(1 – _v_²/_c_²) seconds, i.e. a somewhat longer time. As a consequence of its motion, the clock runs somewhat more slowly than in a state of rest. Here too the velocity of light _c_ plays the part of an unattainable limiting velocity.
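The same factor governs the moving clock; a brief sketch, with speeds of my own choosing:

```python
import math

C = 299_792_458.0  # velocity of light in vacuo, m/s

def dilated_interval(proper_interval, v, c=C):
    # Time between two ticks of a moving clock, as judged from K.
    return proper_interval / math.sqrt(1.0 - (v / c) ** 2)

# One second on the moving clock, judged from the embankment:
print(dilated_interval(1.0, 0.6 * C))  # 1.25 s
print(dilated_interval(1.0, 0.8 * C))  # about 1.667 s
```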
#### 13. THEOREM OF THE ADDITION OF VELOCITIES. THE EXPERIMENT OF FIZEAU
Since the velocities at which we can in practice move clocks and measuring rods are small compared with the velocity of light _c_, we can hardly compare the results of the previous section with reality directly. And since, on the other hand, those results will strike the reader as most peculiar, I shall draw from the theory another consequence that is very easily deduced from what has gone before and that experiment confirms brilliantly.

In Section 6 we derived the theorem of the addition of velocities in the same direction, as it results from the hypotheses of classical mechanics. The same can easily be deduced from the Galilei transformation (Section 11). In place of the man walking through the carriage we introduce a point moving relative to the coordinate system _K'_ according to the equation

_x'_ = _wt'_.

By means of the first and fourth equations of the Galilei transformation, _x'_ and _t'_ can be expressed in terms of _x_ and _t_, giving

_x_ = (_v_ + _w_)_t_.

This equation expresses nothing other than the law of motion of the point with respect to the system _K_ (of the man with respect to the embankment); we call its velocity _W_, so that, as in Section 6, we obtain:

_W_ = _v_ + _w_.    (A)
But we can carry out this reasoning just as well on the basis of the theory of relativity. What we must then do is express _x'_ and _t'_ in the equation

_x'_ = _wt'_

in terms of _x_ and _t_, using the first and fourth equations of the Lorentz transformation. In place of equation (A) we then obtain this one:

_W_ = (_v_ + _w_) / (1 + _vw_/_c_²)    (B)
which corresponds to the theorem of the addition of velocities in one direction according to the theory of relativity. The question is which of these two theorems stands the test of experience. On this point we are enlightened by an extremely important experiment, carried out more than half a century ago by the brilliant physicist Fizeau and since repeated by some of the best experimental physicists, so that its result is beyond dispute. The experiment concerns the following question. Suppose light propagates in a certain liquid at rest with a particular velocity _w_. With what velocity does it propagate in the tube _R_ of the figure, in the direction of the arrow, when the liquid flows through the tube with velocity _v_?

In any case, faithful to the principle of relativity, we must assume that, _relative to the liquid_, the propagation of light always takes place with the same velocity _w_, whether or not the liquid is in motion relative to other bodies. The velocity of light relative to the liquid and the velocity of the liquid relative to the tube are thus known, and what is sought is the velocity of light relative to the tube.

It is clear that the problem is once again that of Section 6. The tube plays the part of the track or of the coordinate system _K_; the liquid, that of the carriage or of the coordinate system _K'_; the light, that of the man walking through the carriage or of the moving point mentioned in this section. Thus, if we call _W_ the velocity of light relative to the tube, it is given by equation (A) or by equation (B), according to whether it is the Galilei transformation or the Lorentz transformation that corresponds to reality.

The experiment decides in favour of equation (B), deduced from the theory of relativity, and indeed with great exactness. According to the latest and most excellent measurements by Zeeman, the influence of the flow velocity _v_ on the propagation of light is represented by formula (B) to within an accuracy of better than 1 per cent.

It must be emphasised, however, that long before the theory of relativity was established, H. A. Lorentz had already given a theory of this phenomenon by purely electrodynamic means, using certain hypotheses about the electromagnetic structure of matter. But this circumstance in no way diminishes the conclusiveness of the experiment as an _experimentum crucis_ in favour of the theory of relativity. For the Maxwell-Lorentz electrodynamics, on which the original theory rested, in no way contradicts the theory of relativity. Rather, the latter grew out of electrodynamics as an astonishingly simple summary and generalisation of the formerly mutually independent hypotheses on which electrodynamics was built.
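To see concretely why the experiment can discriminate between (A) and (B), one may compare the two predictions for water. In the sketch below, the refractive index n = 1.33 and the flow speed are illustrative values of my own, not data from the text:

```python
# Classical (A) versus relativistic (B) velocity addition, Fizeau setup.
C = 299_792_458.0  # velocity of light in vacuo, m/s

def add_classical(v, w):
    return v + w                           # equation (A)

def add_relativistic(v, w, c=C):
    return (v + w) / (1 + v * w / c ** 2)  # equation (B)

n = 1.33      # refractive index of water (illustrative)
w = C / n     # velocity of light in the liquid at rest
v = 10.0      # flow velocity of the liquid, m/s

extra_A = add_classical(v, w) - w      # classical: the full 10 m/s added
extra_B = add_relativistic(v, w) - w   # relativistic: only part of v added
print(extra_A, extra_B)
# To first order in v, (B) yields w + v*(1 - 1/n**2): the Fresnel
# "dragging coefficient" that Fizeau's experiment actually measured.
print(v * (1 - 1 / n ** 2))
```

With these numbers, (B) adds only about 4.3 of the 10 m/s of flow velocity to the speed of the light, and it is this partial "dragging" that the measurements confirm.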
#### 14. THE HEURISTIC VALUE OF THE THEORY OF RELATIVITY
The chain of reasoning presented so far can be summarised briefly as follows. Experience has led to the conviction that, on the one hand, the principle of relativity (in the restricted sense) holds, and, on the other, that the velocity of propagation of light in vacuo is equal to a constant _c_. By uniting these two postulates we obtained the law of transformation for the rectangular coordinates _x_, _y_, _z_ and the time _t_ of the events constituting natural phenomena; and what we obtained was not the Galilei transformation, but (in disagreement with classical mechanics) the Lorentz transformation.

In this reasoning the law of the propagation of light played an important part, the acceptance of which is justified by our present knowledge. Once in possession of the Lorentz transformation, however, we can combine it with the principle of relativity and summarise the theory in the following statement:

Every general law of nature must be so constituted that it is transformed into a law of exactly the same structure when, in place of the space-time variables _x_, _y_, _z_, _t_ of the original coordinate system _K_, we introduce new space-time variables _x'_, _y'_, _z'_, _t'_ of another coordinate system _K'_, where the mathematical relation between the primed and unprimed quantities is given by the Lorentz transformation. In brief: the general laws of nature are covariant with respect to the Lorentz transformation.
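For reference, the Lorentz transformation connecting the two systems (for relative motion with velocity _v_ along the _x_-axis, as in the earlier sections of the book) reads:

```latex
\begin{align}
x' &= \frac{x - vt}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}, &
y' &= y, &
z' &= z, &
t' &= \frac{t - \dfrac{v}{c^{2}}\,x}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.
\end{align}
```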
This is a definite mathematical condition which the theory of relativity prescribes for natural laws, and it thereby becomes a valuable heuristic aid in the search for general laws of nature. If a general law of nature were found that did not satisfy this condition, at least one of the two fundamental assumptions of the theory would be refuted. Let us now see what general results the theory has so far shown.
#### 15. GENERAL RESULTS OF THE THEORY
It follows from the foregoing considerations that the (special) theory of relativity has grown out of electrodynamics and optics. In these fields it has not much altered the statements of theory, but it has considerably simplified the theoretical structure, i.e. the derivation of the laws, and, what is incomparably more important, it has greatly reduced the number of independent hypotheses on which the theory rests. It has conferred such a degree of evidence on the Maxwell-Lorentz theory that the latter would have won general acceptance among physicists even if experiment had spoken less convincingly in its favour.
Classical mechanics required modification before it could be brought into harmony with the requirement of the special theory of relativity. In essence, however, this modification affects only the laws for rapid motions, in which the velocities _v_ of matter are not too small compared with the velocity of light. Experience shows us such rapid motions only in the case of electrons and ions; in other motions the deviations from the laws of classical mechanics are too small to be detectable in practice. We shall not speak of the motion of the heavenly bodies until we come to the general theory of relativity. According to the theory of relativity, the kinetic energy of a material point of mass _m_ is no longer given by the well-known expression

$$m\frac{v^{2}}{2},$$

but by the expression

$$\frac{mc^{2}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.$$
This expression becomes infinite as the velocity _v_ approaches the velocity of light _c_. Thus, however great the energy expended on acceleration, the velocity must always remain less than _c_. If we expand the expression for the kinetic energy in a series, we obtain:

$$mc^{2} + m\frac{v^{2}}{2} + \frac{3}{8}\,m\frac{v^{4}}{c^{2}} + \cdots$$

The third term is always small compared with the second (the only one considered in classical mechanics) when $v^{2}/c^{2}$ is small compared with 1. The first term _mc_² does not depend on the velocity, and therefore need not be considered when we are dealing with the question of how the energy of a material point depends on the velocity. We shall speak of its theoretical significance later.
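The series mentioned in the text follows from the binomial expansion of the relativistic energy (a short derivation, valid for _v_ < _c_):

```latex
\begin{align}
\frac{mc^{2}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}
&= mc^{2}\left(1 - \frac{v^{2}}{c^{2}}\right)^{-1/2}
 = mc^{2}\left(1 + \frac{1}{2}\frac{v^{2}}{c^{2}}
   + \frac{3}{8}\frac{v^{4}}{c^{4}} + \cdots\right)\\[4pt]
&= mc^{2} + m\frac{v^{2}}{2} + \frac{3}{8}\,m\frac{v^{4}}{c^{2}} + \cdots
\end{align}
```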
The most important result of a general kind to which the special theory of relativity has led concerns the concept of mass. Pre-relativity physics knows two conservation principles of fundamental importance: the conservation of energy and the conservation of mass; these two fundamental principles appear to be entirely independent of each other. The theory of relativity merges them into one. We shall briefly explain how this came about and how this merging is to be interpreted.

The principle of relativity requires that the law of the conservation of energy should hold not only with respect to a coordinate system _K_, but with respect to every coordinate system _K'_ in uniform translational motion relative to _K_ (in short, with respect to every "Galileian" coordinate system). In contrast to classical mechanics, the transition between two such systems is governed by the Lorentz transformation.
From these premises, in conjunction with the fundamental equations of Maxwell's electrodynamics, it can be rigorously deduced, by comparatively simple considerations, that a body moving with velocity _v_, which absorbs an amount of energy _E_₀ in the form of radiation without its velocity being changed thereby, experiences an increase in energy of the amount:

$$\frac{E_0}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.$$

Taking into account the expression given above for the kinetic energy, the energy of the body is then given by:

$$\frac{\left(m + \dfrac{E_0}{c^{2}}\right)c^{2}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.$$

The body then has the same energy as a body of velocity _v_ and mass _m_ + _E_₀/ _c_². We may therefore say: if a body absorbs an amount of energy _E_₀, its inertial mass increases by _E_₀/ _c_²; the inertial mass of a body is not a constant, but varies according to the change in its energy. The inertial mass of a system of bodies may even be regarded as a measure of its energy. The law of the conservation of the mass of a system coincides with the law of the conservation of energy, and holds only so long as the system neither absorbs nor emits energy. Writing the expression for the energy in the form

$$\frac{mc^{2} + E_0}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}},$$

we see that the term _mc_², which already attracted our attention earlier, is nothing other than the energy the body possessed before it absorbed the energy _E_₀.
A direct comparison of this law with experience is for the present impossible, because the changes of energy _E_₀ that we can impart to a system are not large enough to make themselves felt as a change in the inertial mass of the system. _E_₀/ _c_² is too small in comparison with the mass _m_ that was present before the change of energy. It is owing to this circumstance that a law of the conservation of mass could be successfully established as a law of independent validity.
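A rough estimate shows how small the effect is (the numbers here are illustrative round values, not drawn from the text): heating one kilogram of water from 0 °C to 100 °C adds about $E_0 \approx 4.2 \times 10^{5}\,\mathrm{J}$, so the mass increase is

```latex
\Delta m = \frac{E_0}{c^{2}}
\approx \frac{4.2 \times 10^{5}\,\mathrm{J}}{\left(3 \times 10^{8}\,\mathrm{m/s}\right)^{2}}
\approx 4.7 \times 10^{-12}\,\mathrm{kg},
```

far below anything a balance of the period could detect against a mass of one kilogram.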
One final remark of a theoretical nature. The success of the Faraday-Maxwell interpretation of electromagnetic action at a distance through intermediate processes with a finite velocity of propagation led physicists to the conviction that there are no instantaneous, unmediated actions at a distance of the type of Newton's law of gravitation. According to the theory of relativity, action at a distance with the velocity of light always takes the place of instantaneous action at a distance, i.e. of action at a distance with an infinite velocity of propagation; this is connected with the theoretical role which the velocity _c_ plays in this theory. In Part Two it will be shown how this result is modified in the general theory of relativity.
#### 16. THE SPECIAL THEORY OF RELATIVITY AND EXPERIENCE
The question of how far the special theory of relativity is supported by experience is not easy to answer, for a reason already mentioned in connection with Fizeau's fundamental experiment. The special theory of relativity crystallised out of the Maxwell-Lorentz theory of electromagnetic phenomena, so that all the experimental facts which support that electromagnetic theory also support the theory of relativity. As being of particular importance, I mention here that the theory of relativity enables us to derive, in an extremely simple way and in agreement with experience, those influences which the light of the fixed stars experiences owing to the relative motion of the Earth with respect to them. These are the annual displacement of the apparent position of the fixed stars resulting from the Earth's motion round the Sun (aberration), and the influence of the radial component of the relative motions of the fixed stars with respect to the Earth on the colour of the light reaching us; this influence shows itself in a slight shift of the spectral lines of the light transmitted to us from a fixed star, compared with the spectral position of the same lines produced by a terrestrial light source (the Doppler principle). The experimental arguments in favour of the Maxwell-Lorentz theory, which are at the same time arguments in favour of the theory of relativity, are too numerous to be set out here. In fact they restrict the theoretical possibilities to such an extent that no theory other than that of Maxwell and Lorentz has been able to hold its own against experience.

There are, however, two classes of experimental facts hitherto established which the Maxwell-Lorentz theory can accommodate only by recourse to an auxiliary hypothesis which in itself, i.e. without making use of the theory of relativity, appears strange.
It is known that cathode rays and the so-called β-rays emitted by radioactive substances consist of negatively electrified corpuscles (electrons) of very small inertia and great velocity. By examining the deflection of these rays under the influence of electric and magnetic fields, the law of motion of these corpuscles can be studied very exactly.

In the theoretical treatment of these electrons we are confronted with the difficulty that electrodynamics alone is not able to account for their nature. For since electrical masses of like sign repel one another, the negative electrical masses constituting the electron would necessarily be scattered under the influence of their mutual interaction, were it not for the action of other forces whose nature is still obscure to us. If we now assume that the relative distances between the electrical masses constituting the electron remain unchanged during the motion of the electron (a rigid connection in the sense of classical mechanics), we arrive at a law of motion of the electron which does not agree with experience. Guided by purely formal considerations, H. A. Lorentz was the first to introduce the hypothesis that the body of the electron undergoes, in virtue of its motion, a contraction in the direction of motion proportional to the expression $\sqrt{1 - v^{2}/c^{2}}$. This hypothesis, which is in no way justifiable electrodynamically, supplies that law of motion which has been confirmed with great precision by experience in recent years.

The theory of relativity yields the same law of motion without requiring any special hypotheses whatsoever about the structure and behaviour of the electron. Something analogous occurred, as we saw in Section 13, with Fizeau's experiment, whose result the theory of relativity explained without having to make any hypothesis about the physical nature of the liquid.
The second class of facts to which we have alluded concerns the question whether or not the motion of the Earth in space can be detected in experiments performed on the Earth. We have already remarked in Section 5 that all attempts of this kind led to a negative result. Before the theory of relativity was put forward, science could not easily account for this negative result, for the situation was as follows. The old prejudices about space and time allowed no doubt that the Galilei transformation was the one governing the transition from one reference body to another. If we then suppose that the Maxwell-Lorentz equations hold for a reference body _K_, we find that they do not hold for another reference body _K'_ moving uniformly with respect to _K_, if we assume that the relations of the Galilei transformation hold between the coordinates of _K_ and _K'_. It thus appears that, of all Galileian coordinate systems, one ( _K_ ) is physically singled out as possessing a particular state of motion. Physically this was interpreted by saying that _K_ is at rest relative to a hypothetical luminiferous ether, while all coordinate systems _K'_ in motion relative to _K_ would also be in motion relative to the ether. To this motion of _K'_ relative to the ether (the "ether wind" relative to _K'_ ) were attributed the more complicated laws that were supposed to hold relative to _K'_. To be consistent, such an ether wind had to be postulated relative to the Earth as well, and for a long time physicists devoted their efforts to proving its existence.

Michelson devised for this purpose a method which seemed infallible. Imagine two mirrors mounted on a rigid body, with their reflecting surfaces facing one another. If the whole system is at rest relative to the luminiferous ether, any ray of light requires a perfectly definite time _T_ to pass from one mirror to the other and back. The time (calculated) for this process is, however, slightly different ( _T'_ ) when the body, together with the mirrors, is moving relative to the ether. And what is more: the calculation predicts that, for a given velocity _v_ relative to the ether, this time _T'_ is different according as the body moves perpendicularly to the plane of the mirrors or parallel to it. Small though the calculated difference between these two time intervals is, Michelson and Morley performed an interference experiment in which this difference should have been clearly detectable. The result of the experiment was nevertheless negative, to the great bewilderment of physicists. Lorentz and FitzGerald rescued the theory from this embarrassment by assuming that the motion of the body relative to the ether produces a contraction of the body in the direction of motion, and that this contraction just compensates for the difference in time. Comparison with the discussion in Section 12 shows that this solution was also the correct one from the standpoint of the theory of relativity. But the interpretation of the situation according to the latter is incomparably more satisfactory. According to it, there is no privileged coordinate system that would justify introducing the idea of the ether, and hence there can be no ether wind, nor any experiment to demonstrate it.
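For mirrors a distance _L_ apart, the classical ether calculation (sketched here for completeness) gives for the round-trip times parallel and perpendicular to the motion:

```latex
T_{\parallel} = \frac{L}{c - v} + \frac{L}{c + v}
             = \frac{2L}{c}\,\frac{1}{1 - \dfrac{v^{2}}{c^{2}}},
\qquad
T_{\perp} = \frac{2L}{\sqrt{c^{2} - v^{2}}}
          = \frac{2L}{c}\,\frac{1}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.
```

A Lorentz-FitzGerald contraction of the parallel arm by the factor $\sqrt{1 - v^{2}/c^{2}}$ makes the two times equal, as the null result of the experiment requires.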
Here the contraction of moving bodies follows, without special hypotheses, from the two fundamental principles of the theory; and what is decisive for this contraction is not motion in itself, to which we can attach no meaning, but motion with respect to the reference body chosen in each particular case. Thus the body carrying the mirrors in the Michelson-Morley experiment is not shortened relative to a reference system moving with the Earth, but it is shortened relative to a system at rest with respect to the Sun.
#### 17. MINKOWSKI'S FOUR-DIMENSIONAL SPACE
The non-mathematician is seized by a mysterious shudder on hearing the word "four-dimensional", a feeling not unlike that awakened by the ghost in a play. And yet there is no more commonplace statement than that the world of our everyday life is a four-dimensional space-time continuum.

_Space_ is a three-dimensional continuum. By this we mean that it is possible to describe the position of a point (at rest) by three numbers _x_, _y_, _z_ (coordinates), and that for every point there are arbitrarily "near" points whose position can be described by coordinate values _x_₁, _y_₁, _z_₁ which approach the coordinates _x_, _y_, _z_ of the first point as closely as we please. In virtue of this latter property we speak of a "continuum", and owing to the threefold character of the coordinates, of its being "three-dimensional".

Similarly, the world of physical events, which Minkowski briefly called the "world" or "universe", is naturally four-dimensional in the space-time sense. For it is composed of individual events, each of which can be described by four numbers, namely three space coordinates _x_, _y_, _z_ and a time coordinate, the time value _t_. The "universe" is in this sense also a continuum, for to every event there are arbitrarily "near" events (real or conceivable) whose coordinates _x_₁, _y_₁, _z_₁, _t_₁ differ as little as we please from those of the event under consideration, _x_, _y_, _z_, _t_. That we are not accustomed to regarding the world in this sense as a four-dimensional continuum is due to the fact that in pre-relativity physics time played a different and more independent role as compared with the space coordinates, which is why we have become used to treating time as an independent continuum. In fact, in classical physics time is absolute, i.e. independent of the position and the _state of motion_ of the reference system, as is expressed in the last equation of the Galilei transformation ( _t'_ = _t_ ).
The four-dimensional view of the "world" is handed to us on a platter by the theory of relativity, since according to this theory time is robbed of its independence, as the fourth equation of the Lorentz transformation shows:

$$t' = \frac{t - \dfrac{v}{c^{2}}\,x}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}.$$

Indeed, according to this equation the time difference Δ _t'_ of two events with respect to _K'_ does not in general vanish, even when the time difference Δ _t_ of the same events with respect to _K_ is zero. A purely spatial separation of two events with respect to _K_ results in a temporal separation of the same events with respect to _K'_. But the importance of Minkowski's discovery for the formal development of the theory of relativity does not lie here either; it lies in his recognition that the four-dimensional continuum of the theory of relativity shows, in its principal formal properties, the closest kinship with the three-dimensional continuum of Euclidean geometrical space. In order to bring out this kinship fully, however, we must replace the usual time coordinate _t_ by the imaginary quantity √−1 · _ct_, proportional to it. The laws of nature satisfying the requirements of the (special) theory of relativity then take mathematical forms in which the time coordinate plays exactly the same role as the three space coordinates. Formally, these four coordinates correspond exactly to the three space coordinates of Euclidean geometry. Even the non-mathematician will see that, thanks to this purely formal discovery, the theory was bound to gain an extraordinary degree of clarity.
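The kinship becomes explicit when one writes the quantity that the Lorentz transformation leaves invariant (a sketch, setting $x_1 = x$, $x_2 = y$, $x_3 = z$ and $x_4 = \sqrt{-1}\,ct$):

```latex
x_1^{2} + x_2^{2} + x_3^{2} + x_4^{2}
  = x^{2} + y^{2} + z^{2} - c^{2}t^{2}
  = {x'}^{2} + {y'}^{2} + {z'}^{2} - c^{2}{t'}^{2},
```

which has exactly the form of the Euclidean expression for the square of a distance, now in four dimensions.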
These brief indications give the reader only a vague notion of Minkowski's important ideas, without which the general theory of relativity, developed below in its fundamental outlines, might perhaps never have got beyond its infancy. Since, however, a more exact understanding of this matter, undoubtedly difficult of access for the reader unpractised in mathematics, is not necessary for grasping the fundamental ideas of either the special or the general theory of relativity, we shall leave it at this point and return to it only in the concluding remarks of this little book.
### Part Two
### ON THE GENERAL THEORY OF RELATIVITY
#### 18. THE SPECIAL AND THE GENERAL PRINCIPLE OF RELATIVITY
The fundamental thesis around which all the foregoing considerations revolved was the _special_ principle of relativity, i.e. the principle of the physical relativity of all _uniform_ motion. Let us analyse its content once more carefully.

That every motion must be conceived as a merely _relative_ motion was always self-evident. Returning to our well-worn example of the embankment and the railway carriage, the fact of motion taking place here can be expressed with equal justification in either of the following two forms:

a) the carriage moves relative to the embankment,
b) the embankment moves relative to the carriage.

In case _a_) the embankment serves as reference body; in case _b_), the carriage. When it is merely a question of stating or describing the motion, it is theoretically immaterial to which reference body the motion is referred. This, we repeat, is self-evident and must not be confused with the much more profound proposition which we have called the "principle of relativity" and on which we have based our considerations.
The principle we have used does not merely assert that for the description of any event we may choose the carriage just as well as the embankment as reference body (for that too is self-evident). Our principle asserts rather that if we formulate the general laws of nature, as obtained from experience, by making use of

a) the embankment as reference body,

b) the carriage as reference body,

then in both cases these general laws (e.g. the laws of mechanics or the law of the propagation of light in vacuo) have exactly the same form. This can also be expressed as follows: in the _physical_ description of natural processes, neither of the reference bodies _K_, _K'_ is distinguished from the other. Unlike the first statement, this last one need not hold a priori; it is not contained in the concepts of "motion" and "reference body", nor can it be deduced from them; its truth or falsity depends solely on _experience_.
Up to now, however, we have by no means asserted the equivalence of _all_ reference bodies _K_ with respect to the formulation of natural laws. Our path has rather been the following. We started from the assumption that there exists a reference body _K_ whose state of motion is such that Galileo's fundamental principle holds with respect to it: a material point left to itself and sufficiently far removed from all other bodies moves uniformly in a straight line. With reference to _K_ (a Galileian reference body) the laws of nature were to be as simple as possible. But besides _K_, all those reference bodies _K'_ should be privileged in this sense, and exactly equivalent to _K_ for the formulation of natural laws, which perform a _rectilinear_, _uniform and non-rotary_ motion with respect to _K_: all these reference bodies are regarded as Galileian reference bodies. We assumed the validity of the principle of relativity only for these reference bodies, not for others (moving in other ways). In this sense we speak of the special principle of relativity, or of the special theory of relativity.

In contrast to this, we shall understand by the "general principle of relativity" the following statement: all reference bodies _K_, _K'_, etc., whatever their state of motion, are equivalent for the description of nature (formulation of the general laws of nature). Let us hasten to note, however, that this formulation must be replaced by a more abstract one, for reasons which will become evident later.
Once the introduction of the special principle of relativity has proved successful, it must be tempting, to any mind striving after generalisation, to venture the step towards the general principle of relativity. But a very simple observation, apparently quite reliable, seems at first to condemn the attempt to failure. Let the reader imagine himself in that oft-mentioned railway carriage travelling at uniform speed. As long as the carriage maintains its uniform motion, the occupants notice nothing of the train's movement; which also explains why the occupant can interpret the situation as meaning that the carriage is at rest and the embankment in motion, without thereby feeling that he is doing violence to his intuition. And according to the special principle of relativity, this interpretation is perfectly justified from the physical point of view.

If, however, the motion of the carriage becomes non-uniform, say because the train brakes violently, the traveller experiences a correspondingly strong jerk forwards. The accelerated motion of the carriage manifests itself in the mechanical behaviour of bodies relative to it; this mechanical behaviour is different from that in the case previously considered, and for this reason it would appear to be impossible that the same mechanical laws hold relative to the non-uniformly moving carriage as hold relative to the carriage at rest or in uniform motion. In any case it is clear that Galileo's fundamental principle does not hold relative to the non-uniformly moving carriage. We therefore feel compelled, at first, to attribute, contrary to the general principle of relativity, a kind of absolute physical reality to non-uniform motion. In what follows we shall see, however, that this inference is not correct.
#### 19. THE GRAVITATIONAL FIELD
To the question why a stone, lifted and released in the air, falls to the ground, the usual answer is "because it is attracted by the Earth". Modern physics formulates the answer somewhat differently, for the following reason. Closer study of electromagnetic phenomena has led to the conclusion that there is no such thing as immediate action at a distance. When a magnet attracts a piece of iron, for instance, we cannot be content with the explanation that the magnet acts directly on the iron across the intervening empty space; instead, following Faraday's idea, we imagine that the magnet always creates something physically real in the space around it, which we call a "magnetic field". This magnetic field in turn acts on the piece of iron, which strives to move towards the magnet. We shall not enter here into the justification of this intermediary concept, which in itself is arbitrary. We remark only that with its help electromagnetic phenomena, and in particular the propagation of electromagnetic waves, can be represented theoretically in a much more satisfactory way. The action of gravity is interpreted in an analogous manner.

The influence of the Earth on the stone takes place indirectly. The Earth produces a gravitational field in its surroundings. This field acts on the stone and produces its motion of fall. The intensity of the action on a body diminishes as we recede farther and farther from the Earth, and it diminishes according to a definite law. In our interpretation this means that the law governing the spatial properties of the gravitational field must be a quite definite one, in order to represent correctly the diminution of gravitational action with distance from the acting body. It is assumed, for instance, that the body (the Earth, say) generates the field directly in its immediate neighbourhood; the intensity and direction of the field at greater distances are then determined by the law governing the spatial properties of gravitational fields.

In contrast to electric and magnetic fields, the gravitational field exhibits a most remarkable property, which is of fundamental importance for what follows. Bodies moving under the sole influence of a gravitational field receive an _acceleration which does not in the least depend either on the material or on the physical state of the body_. A piece of lead and a piece of wood, for instance, fall in exactly the same manner in a gravitational field (in the absence of air) when we let them fall without initial velocity or with equal initial velocities. This law, which holds with extreme accuracy, can also be expressed in a different form, on the basis of the following consideration.
According to Newton's law of motion we have

(force) = (inertial mass) × (acceleration),

where the "inertial mass" is a characteristic constant of the accelerated body. If the accelerating force is that of gravity, we have, on the other hand,

(force) = (gravitational mass) × (intensity of the gravitational field).
Now if, for a given gravitational field, the acceleration is always to be the same, independently of the nature and the state of the body, as experience shows, then the ratio of the gravitational mass to the inertial mass must likewise be the same for all bodies. By a suitable choice of units this ratio can be made equal to 1, and the following theorem then holds: the _gravitational_ mass and the _inertial_ mass of a body are equal.
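Combining the two relations above makes the argument explicit (a short derivation, with _g_ denoting the intensity of the gravitational field):

```latex
a \;=\; \frac{\text{(force)}}{m_{\text{inertial}}}
  \;=\; \frac{m_{\text{grav}}}{m_{\text{inertial}}}\; g .
```

The acceleration _a_ is the same for all bodies in the given field precisely when the ratio $m_{\text{grav}}/m_{\text{inertial}}$ is universal; with suitable units it equals 1.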
The older mechanics _recorded_ this important principle, but did not _interpret_ it. A satisfactory interpretation can arise only through the recognition that _the same_ quality of a body manifests itself, according to circumstances, as "inertia" or as "weight". In the following sections we shall show to what extent this is actually the case, and how this question is connected with the general postulate of relativity.

#### 20. THE EQUALITY OF INERTIAL AND GRAVITATIONAL MASS AS AN ARGUMENT FOR THE GENERAL POSTULATE OF RELATIVITY
We imagine a large portion of empty space, so far removed from stars and from large masses that we may say with sufficient exactness that we are faced with the case foreseen in Galileo's fundamental law. It is then possible to choose, for this part of the universe, a Galileian reference body relative to which points at rest remain at rest and points in motion persist permanently in uniform rectilinear motion. As reference body let us imagine a spacious chest resembling a room, with an observer inside who is equipped with apparatus. For him, naturally, there is no gravity. He must fasten himself with strings to the floor, on pain of being flung towards the ceiling at the slightest knock against the floor.

Suppose that in the middle of the lid of the chest, on the outside, there is a hook with a rope attached, and that a being, whose nature is of no concern to us, begins to pull on it with constant force. The chest, together with the observer, then begins to fly "upwards" with uniformly accelerated motion. In time its velocity will reach fantastic values, provided we are judging all this from another reference body which is not being pulled by a rope.

But how does the man in the chest judge the process? The floor of the chest transmits the acceleration to him by pressure against his feet. He must therefore take up this pressure by means of his legs if he does not wish to be laid out full length on the floor. He then stands in the chest exactly as anyone stands in a room of a house on Earth. If he releases a body which he was holding in his hand, the acceleration of the chest ceases to act on that body, which will therefore approach the floor with an accelerated relative motion. The observer will also convince himself that _the acceleration of the body towards the floor is always of the same magnitude_, _whatever the body with which he performs the experiment_.
Apoyándose en sus conocimientos del campo gravitatorio, tal y como los hemos comentado en el último epígrafe, el hombre llegará así a la conclusión de que se halla, junto con el cajón, en el seno de un campo gravitatorio bastante constante. Por un momento se sorprenderá, sin embargo, de que el cajón no caiga en este campo gravitatorio, mas luego descubre el gancho en el centro del techo y la cuerda tensa sujeta a él e infiere correctamente que el cajón cuelga en reposo en dicho campo.
¿Es lícito reírse del hombre y decir que su concepción es un error? Opino que, si queremos ser consecuentes, no podemos hacerlo, debiendo admitir por el contrario que su explicación no atenta ni contra la razón ni contra las leyes mecánicas conocidas. Aun cuando el cajón se halle acelerado respecto al «espacio de Galileo» considerado en primer lugar, cabe contemplarlo como inmóvil. Tenemos, pues, buenas razones para extender el principio de relatividad a cuerpos de referencia que estén acelerados unos respecto a otros, habiendo ganado así un potente argumento a favor de un postulado de relatividad generalizado.
Note carefully that the possibility of this interpretation rests on the fundamental property of the gravitational field of imparting the same acceleration to all bodies — or, what comes to the same thing, on the postulate of the equality of inertial and gravitational mass. If this law of nature did not exist, the man in the accelerated chest could not interpret the behavior of the surrounding bodies by assuming a gravitational field, and no experience would justify him in supposing his reference-body to be "at rest".
Suppose now that the man in the chest fastens a rope to the inside of the lid and attaches a body to its free end. The body will cause the rope to hang "vertically" in a stretched state. Let us ask for the cause of the tension. The man in the chest will say: "The suspended body experiences a downward force in the gravitational field, and is held in equilibrium by the tension of the rope; what determines the magnitude of the tension is the _gravitational_ mass of the suspended body." On the other hand, an observer floating freely in space will judge the situation thus: "The rope is compelled to share in the accelerated motion of the chest and transmits it to the body attached to it. The tension of the rope is just large enough to produce the acceleration of the body. What determines the magnitude of the tension is the _inertial_ mass of the body." In this example we see that the extension of the principle of relativity brings out the _necessity_ of the postulate of the equality of inertial and gravitational mass. With this we have obtained a physical interpretation of that postulate.
The example of the accelerated chest shows that a general theory of relativity must yield important results concerning the laws of gravitation. And indeed, the consistent development of the idea of general relativity has supplied the laws satisfied by the gravitational field. I must, however, warn the reader at this very point against a misconception to which these considerations may give rise. For the man in the chest a gravitational field exists, despite there being none with respect to the coordinate system first chosen. One might therefore say that the existence of a gravitational field is always merely _apparent_. One might think that, whatever gravitational field may exist, another reference-body could always be chosen such that relative to it no gravitational field exists at all. That is by no means true for every gravitational field, but only for those of quite special structure. It is impossible, for example, to choose a reference-body relative to which the gravitational field of the Earth vanishes (in its entirety).
We can now see why the argument brought forward against the general principle of relativity at the end of Section 18 is not conclusive. It is certainly true that the observer in the railway carriage feels a jerk forward as a result of the braking, and that he thereby notices the non-uniformity of the motion. But nobody compels him to attribute the jerk to a "real" acceleration of the carriage. He could equally well interpret the episode thus: "My reference-body (the carriage) remains permanently at rest. With respect to it, however, there exists (during the period of braking) a gravitational field, variable in time and directed forward. Under its influence the embankment, together with the Earth, moves non-uniformly, in such a way that its original backward velocity steadily diminishes. It is this gravitational field, too, that produces the jerk felt by the observer."
#### 21. IN WHAT RESPECTS ARE THE FOUNDATIONS OF CLASSICAL MECHANICS AND OF THE SPECIAL THEORY OF RELATIVITY UNSATISFACTORY?
As we have already said on several occasions, classical mechanics starts from the following principle: material points sufficiently far removed from other material points move uniformly in a straight line, or remain in a state of rest. We have also repeatedly emphasized that this fundamental principle can be valid only for reference-bodies _K_ that are in certain states of motion and in uniform translational motion relative to one another. Relative to other reference-bodies _K'_ the principle does not hold. In classical mechanics, as in the special theory of relativity, we therefore distinguish between reference-bodies _K_ with respect to which the laws of nature are valid and reference-bodies _K'_ with respect to which they are not.
But no person who thinks with a minimum of logic will be satisfied with this state of affairs. He will ask: How is it possible that certain reference-bodies (or their states of motion) should be privileged over others (or over their respective states of motion)? _What is the reason for this privilege?_ In order to show clearly what I mean by this question, I shall make use of a comparison.
I am standing in front of a gas range. On it stand, side by side, two cooking pots so alike that one may be mistaken for the other. Both are half full of water. I notice that steam issues continuously from one of them but not from the other, and this will surprise me even if I have never before set eyes on a gas range or a cooking pot. If I then notice something glowing bluish under the first pot but not under the second, my astonishment will vanish even if I have never before seen a gas flame, for I can now say that this bluish something is the cause, or at least the _possible_ cause, of the emission of steam. But if I notice the bluish something under neither pot, and see that the one steams continuously while the other does not, then I shall remain astonished and dissatisfied until I detect some circumstance which I can hold responsible for the different behavior of the two pots.
Analogously, I search in vain in classical mechanics (or in the special theory of relativity) for a real something to which I can attribute the different behavior of bodies with respect to the systems _K_ and _K'_. Newton saw this objection and attempted in vain to neutralize it. But it was E. Mach who detected it most clearly, and who proposed as a remedy placing mechanics upon new foundations. The objection can be avoided only in a physics that conforms to the principle of general relativity, since the equations of such a theory hold for every reference-body, whatever its state of motion.
#### 22. A FEW INFERENCES FROM THE GENERAL PRINCIPLE OF RELATIVITY
The considerations of Section 20 show that the general principle of relativity puts us in a position to derive properties of the gravitational field in a purely theoretical manner. Suppose, in fact, that we know the space-time course of any natural process whatever, as it takes place in the Galilean domain relative to a Galilean reference-body _K_. It is then possible to find out, by purely theoretical operations — that is, by mere calculation — how this known natural process behaves with respect to a reference-body _K'_ accelerated relative to _K_. And since a gravitational field exists with respect to this new reference-body _K'_, the calculation tells us how the gravitational field influences the process under study.
Thus we discover, for instance, that a body which executes uniform rectilinear motion with respect to _K_ (in accordance with Galileo's law) executes, with respect to the accelerated reference-body _K'_ (the chest), an accelerated motion, generally along a curved path. This acceleration, or this curvature, expresses the influence exerted on the moving body by the gravitational field existing relative to _K'_. That the gravitational field influences the motion of bodies in this way is already known, so the reflection yields nothing fundamentally new.
We do obtain a new result of capital importance, however, when we carry out equivalent considerations for a ray of light. With respect to the Galilean reference-body _K_, it propagates in a straight line with velocity _c_. With respect to the accelerated chest (reference-body _K'_), the path of the same ray of light is no longer a straight line, as can easily be shown. From this we infer that _rays of light are, in general, propagated along curved lines in gravitational fields_. This result is of great importance on two accounts.
In the first place, it can be tested against reality. Although careful reflection shows that the curvature which the general theory of relativity predicts for light rays is minute for the gravitational fields available to us in experience, it must amount to 1.7 seconds of arc for rays of light passing close to the Sun. This effect should manifest itself in the following way: fixed stars situated in the neighborhood of the Sun, which are observable during total solar eclipses, should appear displaced away from the Sun by that amount, compared with the position they occupy for us in the sky when the Sun stands elsewhere on the celestial vault. The verification of the truth or falsity of this result is a task of the highest importance, the solution of which we may hope to receive very soon from the astronomers.
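The 1.7 arc-second figure can be reproduced numerically. In the full general theory (not derived in this booklet) the deflection of a ray grazing a spherical mass works out to δ = 4GM/(c²R); the numerical constants below — the Sun's mass and radius and approximate values of G and c — are assumptions of this illustrative sketch, not data given in the text:

```python
import math

# Approximate physical constants (assumptions of this sketch)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg
R_sun = 6.963e8    # radius of the Sun, m

# Deflection of a light ray grazing the Sun's limb,
# per the standard general-relativistic result delta = 4GM / (c^2 R)
delta_rad = 4 * G * M_sun / (c**2 * R_sun)

# Convert radians to seconds of arc
delta_arcsec = delta_rad * (180.0 / math.pi) * 3600.0
print(f"predicted deflection: {delta_arcsec:.2f} arcsec")  # ≈ 1.75
```

The result, about 1.75 arc seconds, matches the figure quoted in the text; the 1919 eclipse expeditions later confirmed it observationally.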
In the second place, the foregoing consequence shows that, according to the general theory of relativity, the oft-mentioned law of the constancy of the velocity of light in vacuo — which constitutes one of the two basic assumptions of the special theory of relativity — cannot claim unlimited validity, for rays of light can be curved only if the velocity of propagation of light varies with position. One might think that this consequence overthrows the special theory of relativity and, with it, the whole theory of relativity. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim validity over an unlimited domain; its results hold only insofar as we are able to disregard the influence of gravitational fields on the phenomena (those of light, for example).
Since the detractors of the theory of relativity have often asserted that general relativity throws the special theory of relativity overboard, I shall clarify the true state of affairs by means of a comparison. Before electrodynamics was established, the laws of electrostatics were regarded as the laws of electricity in general. Today we know that electrostatics can correctly describe electric fields only in the case — strictly speaking, never realized — in which the electrical masses are rigorously at rest relative to one another and to the coordinate system. Does that mean that Maxwell's electrodynamic field equations have thrown electrostatics overboard? By no means! Electrostatics is contained in electrodynamics as a limiting case; the laws of the latter lead directly to those of the former whenever the fields are invariable in time. No fairer destiny can befall a physical theory than to point the way to a more comprehensive theory, in which it lives on as a limiting case.
In the example just discussed, that of the propagation of light, we have seen that the general principle of relativity enables us to derive theoretically the influence of the gravitational field on the course of phenomena whose laws are already known for the case in which no gravitational field exists. But the most attractive of the problems to which the general theory of relativity supplies the key concerns the determination of the laws satisfied by the gravitational field itself. The situation here is as follows.
We know space-time regions which, given a suitable choice of reference-body, behave (approximately) "in a Galilean fashion" — that is, regions in which no gravitational fields exist. If we refer such a region to an arbitrarily moving reference-body _K'_, then relative to _K'_ there exists a gravitational field variable in space and time. The structure of this field depends, of course, on how we choose the motion of _K'_. According to the general theory of relativity, the general law of the gravitational field must be satisfied by all gravitational fields obtained in this way. Even though by no means all gravitational fields can be generated in this manner, we may hope to be able to deduce the general law of gravitation from such fields of a special kind. And this hope has been most beautifully fulfilled! But between the clear vision of this goal and its actual attainment a serious difficulty had to be overcome, which I must not conceal from the reader, since it is rooted in the very essence of the matter. The question requires that we go more deeply into the concepts of the space-time continuum.
#### 23. THE BEHAVIOR OF CLOCKS AND MEASURING RODS ON A ROTATING REFERENCE-BODY
Up to now I have purposely refrained from speaking of the physical interpretation of spatial and temporal specifications in the case of the general theory of relativity. I have thereby made myself guilty of a certain slovenliness which, as we know from the special theory of relativity, is by no means trivial or pardonable. It is now high time to fill this gap; but I warn the reader in advance that the matter demands no little patience and power of abstraction on his part.
We start once more from quite special, much-used cases. Let us imagine a space-time region in which, relative to a reference-body _K_ in a suitably chosen state of motion, no gravitational field exists; relative to the region considered, _K_ is then a Galilean reference-body, and the results of the special theory of relativity hold with respect to it. Let us imagine the same region referred to a second reference-body _K'_ rotating uniformly with respect to _K_. To fix our ideas, suppose _K'_ is a circular disc rotating uniformly about its center in its own plane. An observer seated off-center on the disc _K'_ feels a force acting radially outward, which another observer at rest with respect to the original reference-body _K_ interprets as an inertial effect (centrifugal force). But suppose the observer seated on the disc regards the disc as a reference-body "at rest", as the principle of relativity entitles him to do. The force acting on him — and in general on bodies at rest relative to the disc — he interprets as the effect of a gravitational field. The spatial distribution of this field would be impossible on Newton's theory of gravitation. But since the observer believes in the general theory of relativity, this detail does not disturb him; he expects, quite rightly, to be able to establish a general law of gravitation which explains correctly not only the motion of the stars but also the field of force he himself perceives.
This observer, installed on his circular disc, performs experiments with clocks and measuring rods, with the intention of obtaining, from his observations, exact definitions for the meaning of time and space data with respect to the circular disc _K'_. What experiences will he have in this attempt?
Let us imagine that the observer first places two clocks of identical constitution, one at the center of the circular disc, the other at its edge, so that both are at rest relative to the disc. We ask first whether these two clocks run at the same rate from the standpoint of the non-rotating Galilean reference-body _K_. Judged from _K_, the clock at the center has no velocity, whereas the clock at the edge is in motion owing to the rotation relative to _K_. According to a result of Section 12, this second clock runs permanently slower — with respect to _K_ — than the clock at the center of the disc. The man on the disc, whom we imagine seated at the center beside the clock there, would evidently have to establish the same thing. Thus on our circular disc, and more generally in every gravitational field, clocks will run faster or slower according to the position in which the clock (at rest) is situated. For this reason no reasonable definition of time can be given with the aid of clocks arranged at rest relative to the reference-body. A similar difficulty arises when we attempt to apply our earlier definition of simultaneity here, a subject into which we shall not go further.
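The rate difference can be made quantitative with the Section 12 result: as judged from _K_, a clock moving with speed v ticks slower by the factor √(1 − v²/c²), and a clock mounted on the disc at radius r has v = ωr. A minimal sketch — the angular velocity and radii are deliberately extreme, purely illustrative values:

```python
import math

c = 2.998e8  # speed of light, m/s (approximate)

def clock_rate(omega, r):
    """Rate of a disc-mounted clock at radius r, relative to the clock
    at the center, as judged from the non-rotating reference-body K.
    Uses the special-relativistic factor sqrt(1 - v^2/c^2) with v = omega * r."""
    v = omega * r
    return math.sqrt(1.0 - (v / c) ** 2)

# Unrealistically fast disc, so the effect is visible at a glance
omega = 1000.0  # rad/s
for r in (0.0, 1e4, 5e4):
    print(f"r = {r:8.0f} m   rate = {clock_rate(omega, r):.6f}")
```

The central clock (r = 0) runs at the full rate 1, and the rate decreases monotonically toward the rim — exactly the position-dependent clock behavior the paragraph describes.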
The definition of the spatial coordinates also presents problems which are in principle insurmountable. For if the observer moving with the disc lays his unit measuring rod (a rod small compared with the radius of the disc) tangentially on the edge of the disc, then, judged from the Galilean system, its length will be less than 1, since according to Section 12 moving bodies suffer a shortening in the direction of the motion. If, on the other hand, he lays the rod in the direction of a radius of the disc, there is no shortening relative to _K_. Hence, if the observer first measures the circumference of the disc, then its diameter, and divides the one by the other, he will obtain as quotient not the familiar number π = 3.14..., but a larger number, whereas for a disc at rest relative to _K_ this operation would, of course, yield exactly π. This already proves that the propositions of Euclidean geometry cannot hold exactly on the rotating disc, nor, in general, in a gravitational field — at least if we ascribe the length 1 to the little rod in every position and every orientation. The concept of a straight line thereby also loses its meaning. We are therefore not in a position to define exactly the coordinates _x_, _y_, _z_ relative to the disc by the method used in the special theory of relativity. And as long as the coordinates and times of events are not defined, the laws of nature in which those coordinates occur have no exact meaning either.
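In the simplest treatment of this thought experiment, the tangential rods contract by √(1 − v²/c²) (v being the rim speed) while the radial rods do not, so the disc observer needs more unit rods to cover the circumference and measures the ratio π/√(1 − v²/c²) > π. The sketch below illustrates that leading-order account; the rim speeds are arbitrary illustrative values, and the deeper subtleties of rigid rotating bodies are left aside:

```python
import math

def measured_ratio(v_over_c):
    """Circumference/diameter ratio as measured with co-rotating rods.
    Tangential rods are shortened by sqrt(1 - v^2/c^2), so more of them
    fit along the rim; radial rods are unaffected, so the diameter
    measurement is unchanged."""
    return math.pi / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.0, 0.1, 0.5, 0.9):
    print(f"rim speed {beta:.1f} c   ratio = {measured_ratio(beta):.5f}")
```

At zero rim speed the ratio is exactly π; at any nonzero speed it exceeds π, which is the failure of Euclidean geometry the paragraph asserts.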
All the considerations we have made above on general relativity thus seem to be called into question. In truth, a subtle detour is needed in order to apply the postulate of general relativity exactly. The following considerations will prepare the reader for this task.
#### 24. EUCLIDEAN AND NON-EUCLIDEAN CONTINUA
Before me lies the surface of a marble table. I can get from any point on it to any other by passing a (large) number of times to a "neighboring" point — in other words, by going from point to point without making any "jumps". The reader (provided he is not too exacting) will no doubt grasp with sufficient precision what is meant here by "neighboring" and by "jumps". We express this by saying that the surface is a continuum.
Now imagine that we make a large number of little rods, small compared with the dimensions of the table, and all equally long. By the latter we mean that the ends of any two of them can be brought into coincidence. We next lay four of these rods on the table surface so that their ends form a quadrilateral whose diagonals are equal (a square). To secure the equality of the diagonals we make use of a testing rod. Adjacent to this square we construct other equal squares, each having one rod in common with it; adjacent to these, others in turn, and so on. Finally the whole table top is covered with squares, in such a way that each interior side belongs to two squares and each interior vertex to four.
That this operation can be carried out without running into the gravest difficulties is a veritable miracle. We need only think of the following. When three squares meet at a vertex, two sides of the fourth are already laid, and this completely determines the placement of its two remaining sides. But now I can no longer adjust the quadrilateral so as to equalize its diagonals. If they are equal of their own accord, it is by a special favor of the table and of the rods, at which I can only be astonished and grateful. And we must witness many similar miracles if the construction is to succeed.
If everything has really gone smoothly, I say that the points of the table top form a Euclidean continuum with respect to the rod used as a unit of length. If I single out one vertex of the lattice as "origin", I can characterize any other vertex, relative to the origin, by two numbers. I need only specify how many rods "to the right" and how many then "upward" I must pass over, starting from the origin, in order to reach the vertex in question. These two numbers are then "the Cartesian coordinates" of that vertex with respect to the "coordinate system" determined by the rods laid out.
The following modification of the thought experiment shows that there are also cases in which this attempt fails. Suppose the rods "expand" with temperature, and the table top is heated at the center but not at the edges. Two of the rods can still be brought into coincidence at any place on the table, but our construction of squares is now irretrievably thrown into disorder, since the rods on the inner part of the table expand while those on the outer part do not.
With respect to our rods — defined as unit lengths — the table is no longer a Euclidean continuum, and we are no longer in a position to define Cartesian coordinates directly with their aid, since the above construction can no longer be carried out. But since there are other things on which the temperature of the table does not act in the same way as on the rods (or on which it does not act at all), it is possible, quite naturally, to maintain nonetheless the view that the table is a "Euclidean continuum", and this can be done satisfactorily by means of a more subtle stipulation about the measurement or comparison of lengths.
But if rods of every kind and every material showed _identical_ thermal behavior on the unevenly heated table, and if we had no other means of perceiving the effect of temperature than the geometrical behavior of the rods in experiments analogous to the one described above, then it might well be expedient to assign the distance 1 to two points of the table whenever the ends of one of our rods could be brought into coincidence with them; for how else could we define length without falling into the grossest arbitrariness? In that case, however, the method of Cartesian coordinates must be abandoned and replaced by another that does not presuppose the validity of Euclidean geometry. The reader will notice that the situation depicted here corresponds to the one brought about by the postulate of general relativity (Section 23).
#### 25. GAUSSIAN COORDINATES
According to Gauss, this combined analytical-geometrical treatment can be obtained in the following way. Imagine drawn on the table top a system of arbitrary curves (see the accompanying figure), which we call _u_-curves, each of them labelled with a number. The curves _u_ = 1, _u_ = 2 and _u_ = 3 are drawn in the figure. But between the curves _u_ = 1 and _u_ = 2 we must imagine infinitely many more drawn, corresponding to all the real numbers lying between 1 and 2. We then have a system of _u_-curves covering the table infinitely densely. No _u_-curve intersects any other; rather, through each point of the table passes one curve and one only. A perfectly definite value of _u_ thus belongs to every point of the table surface. Suppose also that a system of _v_-curves has been drawn on the surface, satisfying the same conditions, labelled in an analogous way by numbers, and likewise of arbitrary shape. To every point of the table there thus corresponds a value of _u_ and a value of _v_, and these two numbers we call the coordinates of the table surface (Gaussian coordinates). The point _P_ of the figure, for example, has the Gaussian coordinates _u_ = 3, _v_ = 1. Two neighboring points _P_ and _P'_ on the surface then correspond to the coordinates
_P_: _u_, _v_
_P'_: _u_ + _du_, _v_ + _dv_,
where _du_ and _dv_ denote very small numbers. Let _ds_ be the likewise very small number representing the distance between _P_ and _P'_ as measured with a little rod. Then, according to Gauss:
_ds_² = _g_₁₁ _du_² + 2 _g_₁₂ _du_ _dv_ + _g_₂₂ _dv_²,
where _g_₁₁, _g_₁₂, _g_₂₂ are quantities that depend in a perfectly definite way on _u_ and _v_. The quantities _g_₁₁, _g_₁₂ and _g_₂₂ determine the behavior of the rods relative to the _u_- and _v_-curves, and hence also relative to the table surface. In the case — and only in the case — where the points of the surface considered form a Euclidean continuum with respect to the little measuring rods, will it be possible to draw the _u_- and _v_-curves and assign numbers to them in such a way that we simply have
_ds_² = _du_² + _dv_².
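Gauss's formula can be checked in a concrete case. Polar coordinates on a flat table (u = r, v = θ) are Gaussian coordinates with g₁₁ = 1, g₁₂ = 0, g₂₂ = u²; for neighboring points, ds computed from these coefficients agrees with the ordinary Cartesian distance. The particular coordinate values below are arbitrary illustrations:

```python
import math

def ds_gauss(u, du, dv):
    """Gauss's ds^2 = g11 du^2 + 2 g12 du dv + g22 dv^2, specialized to
    polar coordinates on a flat plane: g11 = 1, g12 = 0, g22 = u^2."""
    return math.sqrt(du**2 + (u**2) * dv**2)

def ds_cartesian(u, v, du, dv):
    """Straight-line distance between the same two neighboring points,
    computed via Cartesian coordinates x = u cos v, y = u sin v."""
    x1, y1 = u * math.cos(v), u * math.sin(v)
    x2, y2 = (u + du) * math.cos(v + dv), (u + du) * math.sin(v + dv)
    return math.hypot(x2 - x1, y2 - y1)

u, v, du, dv = 2.0, 0.5, 1e-5, 1e-5
print(ds_gauss(u, du, dv), ds_cartesian(u, v, du, dv))
```

The two numbers agree to first order in du and dv, illustrating how the g-coefficients encode the behavior of measuring rods relative to arbitrarily drawn coordinate curves.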
These considerations hold, in the first instance, for a continuum of two dimensions. But the Gaussian method can also be applied to a continuum of three, four or more dimensions. For a continuum of four dimensions, for example, the representation is as follows. To every point of the continuum we arbitrarily assign four numbers _x_₁, _x_₂, _x_₃, _x_₄, which are called "coordinates". Neighboring points correspond to neighboring values of the coordinates. If a distance _ds_, physically well defined and determinable by measurement, is associated with two neighboring points _P_ and _P'_, then the following formula holds:
_ds_² = _g_₁₁ _dx_₁² + 2 _g_₁₂ _dx_₁ _dx_₂ + ... + _g_₄₄ _dx_₄²,
where the quantities _g_₁₁, etc. have values that vary with position in the continuum. Only when the continuum is Euclidean will it be possible to assign the coordinates _x_₁ ... _x_₄ to the points of the continuum in such a way that we simply have
_ds_² = _dx_₁² + _dx_₂² + _dx_₃² + _dx_₄².
The relations that then hold in the four-dimensional continuum are analogous to those that hold in our three-dimensional measurements.
It should be noted that the Gaussian representation for _ds_² just given is not always possible; it is possible only when sufficiently small regions of the continuum in question can be regarded as Euclidean continua. This evidently holds, for example, in the case of the table with locally variable temperature, since over a small portion of the table the temperature is practically constant, and the geometrical behavior of the rods is _almost_ what the rules of Euclidean geometry require. Thus the discrepancies in the square construction of the preceding section do not become clearly apparent until the operation is extended over a considerable part of the table.
In summary we may say: Gauss invented a method for the treatment of any continuum whatever in which measure-relations ("distance" between neighboring points) are defined. To every point of the continuum are assigned as many numbers (Gaussian coordinates) as the continuum has dimensions. The assignment is made in such a way that uniqueness is preserved, and that neighboring points are given numbers (Gaussian coordinates) differing infinitesimally little from one another. The Gaussian coordinate system is a logical generalization of the Cartesian coordinate system. It is applicable to non-Euclidean continua as well, but only when small portions of the continuum considered behave, with respect to the defined measure ("distance"), the more nearly Euclideanly the smaller the portion considered.
#### 26. EL CONTINUO ESPACIO-TEMPORAL DE LA TEORÍA DE LA RELATIVIDAD ESPECIAL COMO CONTINUO EUCLIDIANO
Ahora estamos en condiciones de formular con algo más de precisión las ideas de Minkowski que esbozamos vagamente en el epígrafe 17. Según la teoría de la relatividad especial, en la descripción del continuo espacio-temporal cuadridimensional gozan de privilegio ciertos sistemas de coordenadas que hemos llamado «sistemas de coordenadas de Galileo». Para ellos, las cuatro coordenadas _x_ , _y_ , _z_ , _t_ que determinan un suceso —o expresado de otro modo, un punto del continuo cuadridimensional— vienen definidas físicamente de manera muy simple, como ya se explicó en la primera parte de este librito. Para el paso de un sistema de Galileo a otro que se mueva uniformemente respecto al primero son válidas las ecuaciones de la transformación de Lorentz, que constituyen la base para derivar las consecuencias de la teoría de la relatividad especial y que por su parte no son más que la expresión de la validez universal de la ley de propagación de la luz para todos los sistemas de referencia de Galileo.
Minkowski descubrió que las transformaciones de Lorentz satisfacen las sencillas condiciones siguientes. Consideremos dos sucesos vecinos, cuya posición mutua en el continuo cuadridimensional venga dada por las diferencias de coordenadas espaciales _dx_ , _dy_ , _dz_ y la diferencia temporal _dt_ respecto a un cuerpo de referencia de Galileo _K_. Respecto a un segundo sistema de Galileo, sean _dx'_ , _dy'_ , _dz'_ , _dt'_ las correspondientes diferencias para ambos sucesos. Entre ellas se cumple entonces siempre la condición:
_dx_ 2 \+ _dy_ 2 \+ _dz_ 2 – _c_ 2 _dt_ 2 = _dx'_ 2 \+ _dy'_ 2 \+ _dz'_ 2 – _c_ 2 _dt'_ 2
Esta condición tiene como consecuencia la validez de la transformación de Lorentz. Lo cual podemos expresarlo así: la cantidad
_ds_² = _dx_² + _dy_² + _dz_² − _c_²_dt_²,
correspondiente a dos puntos vecinos del continuo espacio-temporal cuadridimensional, tiene el mismo valor para todos los cuerpos de referencia privilegiados (de Galileo). Si se sustituye _x_ , _y_ , _z_ , √−1 _ct_ , por _x_ 1, _x_ 2, _x_ 3, _x_ 4, se obtiene el resultado de que
_ds_² = _dx_₁² + _dx_₂² + _dx_₃² + _dx_₄²
es independiente de la elección del cuerpo de referencia. A la cantidad _ds_ la llamamos «distancia» de los dos sucesos o puntos cuadridimensionales.
Así pues, si se elige la variable imaginaria √−1 _ct_ , en lugar de la _t_ real como variable temporal, cabe interpretar el continuo espacio-temporal de la teoría de la relatividad especial como un continuo cuadridimensional «euclidiano», como se desprende de las consideraciones del último epígrafe.
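El paso de la forma minkowskiana a la forma «euclidiana» del intervalo puede escribirse explícitamente (cálculo esquemático, con la sustitución imaginaria que menciona el texto):

```latex
% Con x_1 = x,\; x_2 = y,\; x_3 = z,\; x_4 = \sqrt{-1}\,ct
% se tiene dx_4^2 = -c^2\,dt^2, de modo que
ds^2 = dx^2 + dy^2 + dz^2 - c^2\,dt^2
     = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2 ,
% que es formalmente la "distancia" de un continuo euclidiano
% de cuatro dimensiones.
```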
#### 27. EL CONTINUO ESPACIO-TEMPORAL DE LA TEORÍA DE LA RELATIVIDAD NO ES UN CONTINUO EUCLIDIANO
En la primera parte de este opúsculo nos hemos podido servir de coordenadas espacio-temporales que permitían una interpretación física directa y simple y que, según el epígrafe 26, podían interpretarse como coordenadas cartesianas cuadridimensionales. Esto fue posible en virtud de la ley de la constancia de la velocidad de la luz, ley que, sin embargo, según el epígrafe 21, la teoría de la relatividad general no puede mantener; llegamos, por el contrario, al resultado de que según aquélla la velocidad de la luz depende siempre de las coordenadas cuando existe un campo gravitatorio. En el epígrafe 23 constatamos además, en un ejemplo especial, que la existencia de un campo gravitatorio hace imposible esa definición de las coordenadas y del tiempo que nos condujo a la meta en la teoría de la relatividad especial.
Teniendo en cuenta estos resultados de la reflexión, llegamos al convencimiento de que, según el principio de la relatividad general, no cabe interpretar el continuo espacio-temporal como un continuo euclidiano, sino que nos hallamos aquí ante el caso que vimos para el continuo bidimensional de la mesa con temperatura localmente variable. Así como era imposible construir allí un sistema de coordenadas cartesiano con varillas iguales, ahora es también imposible construir, con ayuda de cuerpos rígidos y relojes, un sistema (cuerpo de referencia) de manera que escalas y relojes que sean fijos respecto a otros indiquen directamente la posición y el tiempo. Ésta es en esencia la dificultad con que tropezamos en el epígrafe 23.
Sin embargo, las consideraciones de los epígrafes 25 y 26 señalan el camino que hay que seguir para superarla. Referimos de manera arbitraria el continuo espacio-temporal cuadridimensional a coordenadas gaussianas. A cada punto del continuo (suceso) le asignamos cuatro números _x_ 1, _x_ 2, _x_ 3, _x_ 4 (coordenadas) que no poseen ningún significado físico inmediato, sino que sólo sirven para enumerar los puntos de una manera determinada, aunque arbitraria. Esta correspondencia no tiene ni siquiera que ser de tal carácter que obligue a interpretar _x_ 1, _x_ 2, _x_ 3 como coordenadas «espaciales» y _x_ 4 como coordenada «temporal».
El lector quizá piense que semejante descripción del mundo es absolutamente insatisfactoria. ¿Qué significa asignar a un suceso unas determinadas coordenadas _x_ 1, _x_ 2, _x_ 3, _x_ 4 que en sí no significan nada? Una reflexión más detenida demuestra, sin embargo, que la preocupación es infundada. Contemplemos, por ejemplo, un punto material de movimiento arbitrario. Si este punto tuviera sólo una existencia momentánea, sin duración, entonces vendría descrito espacio-temporalmente a través de un sistema de valores único _x_ 1, _x_ 2, _x_ 3, _x_ 4. Su existencia permanente viene, por tanto, caracterizada por un número infinitamente grande de semejantes sistemas de valores, en donde las coordenadas se encadenan ininterrumpidamente; al punto material le corresponde, por consiguiente, una línea (unidimensional) en el continuo cuadridimensional. Y a una multitud de puntos móviles les corresponden otras tantas líneas en nuestro continuo. De todos los enunciados que atañen a estos puntos, los únicos que pueden aspirar a realidad física son aquellos que versan sobre encuentros de estos puntos. En el marco de nuestra representación matemática, un encuentro de esta especie se traduce en el hecho de que las dos líneas que representan los correspondientes movimientos de los puntos tienen en común un determinado sistema _x_ 1, _x_ 2, _x_ 3, _x_ 4 de valores de las coordenadas. Que semejantes encuentros son en realidad las únicas constataciones reales de carácter espacio-temporal que encontramos en las proposiciones físicas es algo que el lector admitirá sin duda tras pausada reflexión.
Cuando antes describíamos el movimiento de un punto material respecto a un cuerpo de referencia, no especificábamos otra cosa que los encuentros de este punto con determinados puntos del cuerpo de referencia. Incluso las correspondientes especificaciones temporales se reducen a constatar encuentros del cuerpo con relojes, junto con la constatación del encuentro de las manillas del reloj con determinados puntos de la esfera. Y lo mismo ocurre con las mediciones espaciales con ayuda de escalas, como se verá a poco que se reflexione.
En general, se cumple lo siguiente: toda descripción física se reduce a una serie de proposiciones, cada una de las cuales se refiere a la coincidencia espacio-temporal de dos sucesos _A_ y _B_. Cada una de estas proposiciones se expresa en coordenadas gaussianas mediante la coincidencia de las cuatro coordenadas _x_ 1, _x_ 2, _x_ 3, _x_ 4. Por tanto, es cierto que la descripción del continuo espacio-temporal a través de coordenadas gaussianas sustituye totalmente a la descripción con ayuda de un cuerpo de referencia, sin adolecer de los defectos de este último método, pues no está ligado al carácter euclidiano del continuo a representar.
#### 28. FORMULACIÓN EXACTA DEL PRINCIPIO DE LA RELATIVIDAD GENERAL
Ahora estamos en condiciones de sustituir la formulación provisional del principio de la relatividad general que dimos en el epígrafe 18 por otra que es exacta. La versión de entonces —«Todos los cuerpos de referencia _K_ , _K'_ , etc., son equivalentes para la descripción de la naturaleza (formulación de las leyes generales de la naturaleza), sea cual fuere su estado de movimiento»— es insostenible, porque en general no es posible utilizar cuerpos de referencia rígidos en la descripción espacio-temporal en el sentido del método seguido en la teoría de la relatividad especial. En lugar del cuerpo de referencia tiene que aparecer el sistema de coordenadas gaussianas. La idea fundamental del principio de la relatividad general responde al enunciado: _Todos los sistemas de coordenadas gaussianas son esencialmente equivalentes para la formulación de las leyes generales de la naturaleza_.
Este principio de la relatividad general cabe enunciarlo en otra forma que permite reconocerlo aún más claramente como una extensión natural del principio de la relatividad especial. Según la teoría de la relatividad especial, al sustituir las variables espacio-temporales _x_ , _y_ , _z_ , _t_ de un cuerpo de referencia _K_ (de Galileo) por las variables espacio-temporales _x'_ , _y'_ , _z'_ , _t'_ de un nuevo cuerpo de referencia _K'_ utilizando la transformación de Lorentz, las ecuaciones que expresan las leyes generales de la naturaleza se convierten en otras de la misma forma. Por el contrario, según la teoría de la relatividad general, las ecuaciones tienen que transformarse en otras de la misma forma al hacer _cualesquiera sustituciones_ de las variables gaussianas _x_ 1, _x_ 2, _x_ 3, _x_ 4; pues toda sustitución (y no sólo la de la transformación de Lorentz) corresponde al paso de un sistema de coordenadas gaussianas a otro.
Si no se quiere renunciar a la habitual representación tridimensional, podemos caracterizar como sigue la evolución que vemos experimentar a la idea fundamental de la teoría de la relatividad general: la teoría de la relatividad especial se refiere a regiones de Galileo, es decir, aquellas en las que no existe ningún campo gravitatorio. Como cuerpo de referencia actúa aquí un cuerpo de referencia de Galileo, es decir, un cuerpo rígido cuyo estado de movimiento es tal que respecto a él es válido el principio de Galileo del movimiento rectilíneo y uniforme de puntos materiales «aislados».
Ciertas consideraciones sugieren referir esas mismas regiones de Galileo a cuerpos de referencia no galileanos también. Respecto a éstos existe entonces un campo gravitatorio de tipo especial (epígrafes 20 y 23).
Sin embargo, en los campos gravitatorios no existen cuerpos rígidos con propiedades euclidianas; la ficción del cuerpo de referencia rígido fracasa, pues, en la teoría de la relatividad general. Y los campos gravitatorios también influyen en la marcha de los relojes, hasta el punto de que una definición física del tiempo con la ayuda directa de relojes no posee ni mucho menos el grado de evidencia que tiene en la teoría de la relatividad especial.
Por esa razón se utilizan cuerpos de referencia no rígidos que, vistos como un todo, no sólo tienen un movimiento arbitrario, sino que durante su movimiento sufren alteraciones arbitrarias en su forma. Para la definición del tiempo sirven relojes cuya marcha obedezca a una ley arbitraria y todo lo irregular que se quiera; cada uno de estos relojes hay que imaginárselo fijo en un punto del cuerpo de referencia no rígido, y cumplen una sola condición: la de que los datos simultáneamente perceptibles en relojes espacialmente vecinos difieran infinitamente poco entre sí. Este cuerpo de referencia no rígido, que no sin razón cabría llamarlo «molusco de referencia», equivale en esencia a un sistema de coordenadas gaussianas, cuadridimensional y arbitrario. Lo que le confiere al «molusco» un cierto atractivo frente al sistema de coordenadas gaussianas es la conservación formal (en realidad injustificada) de la peculiar existencia de las coordenadas espaciales frente a la coordenada temporal. Todo punto del molusco es tratado como un punto espacial; todo punto material que esté en reposo respecto a él será tratado como en reposo, a secas, mientras se utilice el molusco como cuerpo de referencia. El principio de la relatividad general exige que todos estos moluscos se puedan emplear, con igual derecho y éxito parejo, como cuerpos de referencia en la formulación de las leyes generales de la naturaleza; estas leyes deben ser totalmente independientes de la elección del molusco.
En la profunda restricción que se impone con ello a las leyes de la naturaleza reside la sagacidad que le es inherente al principio de la relatividad general.
#### 29. LA SOLUCIÓN DEL PROBLEMA DE LA GRAVITACIÓN SOBRE LA BASE DEL PRINCIPIO DE LA RELATIVIDAD GENERAL
Si el lector ha seguido todos los razonamientos anteriores, no tendrá ya dificultad ninguna para comprender los métodos que conducen a la solución del problema de la gravitación.
Partimos de la contemplación de una región de Galileo, es decir, de una región en la que no existe ningún campo gravitatorio respecto a un cuerpo de referencia de Galileo _K_. El comportamiento de escalas y relojes respecto a _K_ es ya conocido por la teoría de la relatividad especial, lo mismo que el comportamiento de puntos materiales «aislados»; estos últimos se mueven en línea recta y uniformemente.
Referimos ahora esta región a un sistema de coordenadas gaussiano arbitrario, o bien a un «molusco», como cuerpo de referencia _K'_. Respecto a _K'_ existe entonces un campo gravitatorio _G_ (de clase especial). Por simple conversión se obtiene así el comportamiento de reglas y relojes, así como de puntos materiales libremente móviles, respecto a _K'_. Este comportamiento se interpreta como el comportamiento de reglas, relojes y puntos materiales bajo la acción del campo gravitatorio _G_. Se introduce entonces la hipótesis de que la acción del campo gravitatorio sobre escalas, relojes y puntos materiales libremente móviles se produce según las mismas leyes aun en el caso de que el campo gravitatorio reinante no se pueda derivar del caso especial galileano por mera transformación de coordenadas.
A continuación se investiga el comportamiento espacio-temporal del campo gravitatorio _G_ derivado del caso especial galileano por simple transformación de coordenadas y se formula este comportamiento mediante una ley que es válida independientemente de cómo se elija el cuerpo de referencia (molusco) utilizado para la descripción.
Esta ley no es todavía la ley _general_ del campo gravitatorio, porque el campo gravitatorio _G_ estudiado es de una clase especial. Para hallar la ley general del campo gravitatorio hace falta generalizar además la ley así obtenida; no obstante, cabe encontrarla, sin ningún género de arbitrariedad, si se tienen en cuenta los siguientes requisitos:
a) La generalización buscada debe satisfacer también el postulado de la relatividad general.
b) Si existe materia en la región considerada, entonces lo único que determina su acción generadora de un campo es su masa inercial, es decir, según el epígrafe 15, su energía únicamente.
c) Campo gravitatorio y materia deben satisfacer juntos la ley de conservación de la energía (y del impulso).
El principio de la relatividad general nos permite por fin determinar la influencia del campo gravitatorio sobre la evolución de todos aquellos procesos que en ausencia de campo gravitatorio discurren según leyes conocidas, es decir, que están incluidos ya en el marco de la teoría de la relatividad especial. Aquí se procede esencialmente por el método que antes analizamos para reglas, relojes y puntos materiales libremente móviles.
La teoría de la gravitación derivada así del postulado de la relatividad general no sólo sobresale por su belleza, no sólo elimina el defecto indicado en el epígrafe 21 y del cual adolece la mecánica clásica, no sólo interpreta la ley empírica de la igualdad entre masa inercial y masa gravitatoria, sino que ya ha explicado también dos resultados experimentales de la astronomía, esencialmente muy distintos, frente a los cuales fracasa la mecánica clásica. El segundo de estos resultados, la curvatura de los rayos luminosos en el campo gravitatorio del Sol, ya lo hemos mencionado; el primero tiene que ver con la órbita del planeta Mercurio.
En efecto, si se particularizan las ecuaciones de la teoría de la relatividad general al caso de que los campos gravitatorios sean débiles y de que todas las masas se muevan respecto al sistema de coordenadas con velocidades pequeñas comparadas con la de la luz, entonces se obtiene la teoría de Newton como primera aproximación; así pues, esta teoría resulta aquí sin necesidad de sentar ninguna hipótesis especial, mientras que Newton tuvo que introducir como hipótesis la fuerza de atracción inversamente proporcional al cuadrado de la distancia entre los puntos materiales que interactúan. Si se aumenta la exactitud del cálculo, aparecen desviaciones respecto a la teoría de Newton, casi todas las cuales son, sin embargo, todavía demasiado pequeñas para ser observables.
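El paso al límite newtoniano puede esbozarse en notación moderna (las fórmulas siguientes no figuran en el texto; usamos la notación habitual de la métrica y del potencial gravitatorio φ, con las convenciones de signo usuales):

```latex
% Para campos débiles y velocidades pequeñas, la componente temporal
% de la métrica difiere poco de su valor galileano:
g_{44} \approx -\left(1 + \frac{2\varphi}{c^2}\right),
% y la ecuación de las geodésicas se reduce, en primera aproximación,
% a la ley de movimiento newtoniana con la ecuación de Poisson:
\frac{d^2 \mathbf{x}}{dt^2} = -\nabla \varphi ,
\qquad \nabla^2 \varphi = 4\pi G \rho .
```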
Una de estas desviaciones debemos examinarla aquí con especial detenimiento. Según la teoría newtoniana, los planetas se mueven en torno al Sol según una elipse que conservaría eternamente su posición respecto a las estrellas fijas si se pudiera prescindir de la influencia de los demás planetas sobre el planeta considerado, así como del movimiento propio de las estrellas fijas. Fuera de estas dos influencias, la órbita del planeta debería ser una elipse inmutable respecto a las estrellas fijas, siempre que la teoría de Newton fuese exactamente correcta. En todos los planetas, menos en Mercurio, el más próximo al Sol, se ha confirmado esta consecuencia —que se puede comprobar con eminente precisión— hasta el límite de exactitud que permiten los métodos de observación actuales. Ahora bien, del planeta Mercurio sabemos desde Leverrier que la elipse de su órbita respecto a las estrellas fijas, una vez corregida en el sentido anterior, no es fija, sino que rota —aunque lentísimamente— en el plano orbital y en el sentido de su revolución. Para este movimiento de rotación de la elipse orbital se obtuvo un valor de 43 segundos de arco por siglo, valor que es seguro con una imprecisión de pocos segundos de arco. La explicación de este fenómeno dentro de la mecánica clásica sólo es posible mediante la utilización de hipótesis poco verosímiles, inventadas exclusivamente con este propósito.
Según la teoría de la relatividad general resulta que toda elipse planetaria alrededor del Sol debe necesariamente rotar en el sentido indicado anteriormente, que esta rotación es en todos los planetas, menos en Mercurio, demasiado pequeña para poder detectarla con la exactitud de observación hoy día alcanzable, pero que en el caso de Mercurio debe ascender a 43 segundos de arco por siglo, exactamente como se había comprobado en las observaciones.
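El valor de 43 segundos de arco por siglo se obtiene de la fórmula de precesión que proporciona la teoría (la transcribimos como referencia; es el resultado estándar, que el texto no desarrolla):

```latex
% Avance del perihelio por revolución orbital:
\varepsilon = \frac{24\,\pi^3 a^2}{T^2 c^2 \left(1 - e^2\right)} ,
% con a = semieje mayor de la órbita, e = excentricidad,
% T = período de revolución y c = velocidad de la luz.
% Para Mercurio, la expresión da unos 43 segundos de arco por siglo.
```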
Al margen de esto, sólo se ha podido extraer de la teoría otra consecuencia accesible a la contrastación experimental, y es un corrimiento espectral de la luz que nos envían las grandes estrellas respecto a la luz generada de manera equivalente (es decir, por la misma clase de moléculas) en la Tierra. No me cabe ninguna duda de que también esta consecuencia de la teoría hallará pronto confirmación.
### Tercera parte
### CONSIDERACIONES ACERCA DEL UNIVERSO COMO UN TODO
#### 30. DIFICULTADES COSMOLÓGICAS DE LA TEORÍA NEWTONIANA
Aparte del problema expuesto en el epígrafe 21, la mecánica celeste clásica adolece de una segunda dificultad teórica que, según mis conocimientos, fue examinada detenidamente por primera vez por el astrónomo Seeliger. Si uno reflexiona sobre la pregunta de cómo imaginar el mundo como un todo, la respuesta inmediata será seguramente la siguiente. El universo es espacialmente (y temporalmente) infinito. Existen estrellas por doquier, de manera que la densidad de materia será en puntos concretos muy diversa, pero en todas partes la misma por término medio. Expresado de otro modo: por mucho que se viaje por el universo, en todas partes se hallará un enjambre suelto de estrellas fijas de aproximadamente la misma especie e igual densidad.
Esta concepción resulta del todo irreconciliable con la teoría newtoniana. Esta última exige más bien que el universo tenga una especie de centro en el cual la densidad de estrellas sea máxima, y que la densidad de estrellas disminuya de allí hacia afuera, de manera que acabe dando paso, más allá todavía, a un vacío infinito. El mundo estelar debería formar una isla finita en medio del infinito océano del espacio.
Esta representación es de por sí poco satisfactoria. Pero lo es aún menos porque de este modo se llega a la consecuencia de que la luz emitida por las estrellas, así como algunas de las estrellas mismas del sistema estelar, emigran ininterrumpidamente hacia el infinito, sin que jamás regresen ni vuelvan a entrar en interacción con otros objetos de la naturaleza. El mundo de la materia, apelotonada en un espacio finito, iría empobreciéndose entonces paulatinamente.
Para eludir estas consecuencias Seeliger modificó la _ley_ newtoniana en el sentido de suponer que a distancias grandes la atracción de dos masas disminuye más deprisa que según la ley de 1/_r_². Con ello se consigue que la densidad media de la materia sea constante en todas partes hasta el infinito, sin que surjan campos gravitatorios infinitamente grandes, con lo cual se deshace uno de la antipática idea de que el mundo material posee una especie de punto medio. Sin embargo, el precio que se paga por liberarse de los problemas teóricos descritos es una modificación y complicación de la ley de Newton que no se justifican ni experimental ni teóricamente. Cabe imaginar un número arbitrario de leyes que cumplan el mismo propósito, sin que se pueda dar ninguna razón para que una de ellas prime sobre las demás; porque cualquiera de ellas está tan poco fundada en principios teóricos más generales como la ley de Newton.
#### 31. LA POSIBILIDAD DE UN UNIVERSO FINITO Y SIN EMBARGO NO LIMITADO
Las especulaciones en torno a la estructura del universo se movieron también en otra dirección muy distinta. En efecto, el desarrollo de la geometría no euclidiana hizo ver que es posible dudar de la _infinitud_ de nuestro espacio sin entrar en colisión con las leyes del pensamiento ni con la experiencia (Riemann, Helmholtz). Estas cuestiones las han aclarado ya con todo detalle Helmholtz y Poincaré, mientras que aquí yo no puedo hacer más que tocarlas fugazmente.
Imaginemos en primer lugar un suceso bidimensional. Supongamos que unos seres planos, provistos de herramientas planas —en particular pequeñas reglas planas y rígidas— se pueden mover libremente en un _plano_. Fuera de él no existe nada para ellos; el acontecer en su plano, que ellos observan en sí mismos y en sus objetos, es un acontecer causalmente cerrado. En particular son realizables las construcciones de la geometría euclidiana plana con varillas, por ejemplo la construcción reticular sobre la mesa que contemplamos en el epígrafe 24. El mundo de estos seres es, en contraposición al nuestro, espacialmente bidimensional, pero, al igual que el nuestro, de extensión infinita. En él tienen cabida infinitos cuadrados iguales construidos con varillas, es decir, su volumen (superficie) es infinito. Si estos seres dicen que su mundo es «plano», no dejará de tener sentido su afirmación, a saber, el sentido de que con sus varillas se pueden realizar las construcciones de la geometría euclidiana del plano, representando cada varilla siempre el mismo segmento, independientemente de su posición.
Volvamos ahora a imaginarnos un suceso bidimensional, pero no en un plano, sino en una superficie esférica. Los seres planos, junto con sus reglas de medida y demás objetos, yacen exactamente en esta superficie y no pueden abandonarla; todo su mundo perceptivo se extiende única y exclusivamente a la superficie esférica. Estos seres ¿podrán decir que la geometría de su mundo es una geometría euclidiana bidimensional y considerar que sus varillas son una realización del «segmento»? No pueden, porque al intentar materializar una recta obtendrán una curva, que nosotros, seres «tridimensionales», llamamos círculo máximo, es decir, una línea cerrada de determinada longitud finita que se puede medir con una varilla. Este mundo tiene asimismo una superficie finita que se puede comparar con la de un cuadrado construido con varillas. El gran encanto que depara el sumergirse en esta reflexión reside en percatarse de lo siguiente: _el mundo de estos seres es finito y sin embargo no tiene límites_.
Ahora bien, los seres esféricos no necesitan emprender un viaje por el mundo para advertir que no habitan en un mundo euclidiano, de lo cual pueden convencerse en cualquier trozo no demasiado pequeño de la esfera. Basta con que, desde un punto, tracen «segmentos rectos» (arcos de circunferencia, si lo juzgamos tridimensionalmente) de igual longitud en todas direcciones. La unión de los extremos libres de estos segmentos la llamarán «circunferencia». La razón entre el perímetro de la circunferencia, medido con una varilla, y el diámetro medido con la misma varilla es igual, según la geometría euclidiana del plano, a una constante π que es independiente del diámetro de la circunferencia. Sobre la superficie esférica, nuestros seres hallarían para esta razón el valor

π · sen(_r_/_R_) / (_r_/_R_),
es decir, un valor que es menor que π, y tanto menor cuanto mayor sea el radio de la circunferencia en comparación con el radio _R_ del «mundo esférico». A partir de esta relación pueden determinar los seres esféricos el radio _R_ de su mundo, aunque sólo tengan a su disposición una parte relativamente pequeña de la esfera para hacer sus mediciones. Pero si esa parte es demasiado reducida, ya no podrán constatar que se hallan sobre un mundo esférico y no sobre un plano euclidiano, porque un trozo pequeño de una superficie esférica difiere poco de un trozo de plano de igual tamaño.
Así pues, si nuestros seres esféricos habitan en un planeta cuyo sistema solar ocupa sólo una parte ínfima del universo esférico, no tendrán posibilidad de decidir si viven en un mundo finito o infinito, porque el trozo de mundo que es accesible a su experiencia es en ambos casos prácticamente plano o euclidiano. Esta reflexión muestra directamente que para nuestros seres esféricos el perímetro de la circunferencia crece al principio con el radio hasta alcanzar el «perímetro del universo», para luego, al seguir creciendo el radio, disminuir paulatinamente hasta cero. La superficie del círculo crece continuamente, hasta hacerse finalmente igual a la superficie total del mundo esférico entero.
Al lector quizá le extrañe que hayamos colocado a nuestros seres precisamente sobre una esfera y no sobre otra superficie cerrada. Pero tiene su justificación, porque la superficie esférica se caracteriza, frente a todas las demás superficies cerradas, por la propiedad de que todos sus puntos son equivalentes. Es cierto que la relación entre el perímetro _p_ de una circunferencia y su radio _r_ depende de _r_ ; pero, dado _r_ , es igual para todos los puntos del mundo esférico. El mundo esférico es una «superficie de curvatura constante».
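Visto desde las tres dimensiones, la relación entre el perímetro _p_ y el radio geodésico _r_ sobre la esfera puede hacerse explícita (esbozo; _R_ es el radio del mundo esférico):

```latex
p = 2\pi R \sin\!\left(\frac{r}{R}\right),
\qquad
\frac{p}{2r} = \pi\,\frac{\sin(r/R)}{r/R} < \pi .
% p crece con r hasta el valor máximo 2\pi R (en r = \pi R/2,
% el "perímetro del universo") y decrece luego hasta anularse
% en r = \pi R, el punto opuesto.
```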
Este mundo esférico bidimensional tiene su homólogo en tres dimensiones, el espacio esférico tridimensional, que fue descubierto por Riemann. Sus puntos son también equivalentes. Posee un volumen finito, que viene determinado por su «radio» _R_ (2π²_R_³). ¿Puede uno imaginarse un espacio esférico? Imaginarse un espacio no quiere decir otra cosa que imaginarse un modelo de experiencias «espaciales», es decir, de experiencias que se pueden tener con el movimiento de cuerpos «rígidos». En este sentido sí que cabe imaginar un espacio esférico.
Desde un punto trazamos rectas (tensamos cuerdas) en todas direcciones y marcamos en cada una el segmento _r_ con ayuda de la regla de medir. Todos los extremos libres de estos segmentos yacen sobre una superficie esférica. Su área ( _A_ ) podemos medirla con un cuadrado hecho con reglas. Si el mundo es euclidiano, tendremos que _A_ = 4π _r_ 2; si el mundo es esférico, entonces _A_ será siempre menor que 4π _r_ 2. _A_ aumenta con _r_ desde cero hasta un máximo que viene determinado por el «radio del universo», para luego disminuir otra vez hasta cero al seguir creciendo el radio de la esfera _r_. Las rectas radiales que salen del punto origen se alejan al principio cada vez más unas de otras, vuelven a acercarse luego y convergen otra vez en el punto opuesto al origen; habrán recorrido entonces todo el espacio esférico. Es fácil comprobar que el espacio esférico tridimensional es totalmente análogo al bidimensional (superficie esférica). Es finito (es decir, de volumen finito) y no tiene límites.
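Las afirmaciones anteriores sobre _A_ admiten la siguiente forma cuantitativa (esbozo no incluido en el texto):

```latex
% Superficie de la esfera de radio geodésico r en el espacio esférico
% tridimensional de radio R:
A = 4\pi R^2 \sin^2\!\left(\frac{r}{R}\right) < 4\pi r^2 ,
% máxima (A = 4\pi R^2) en r = \pi R/2 y nula en r = \pi R.
% Volumen total del espacio esférico:
V = 2\pi^2 R^3 .
```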
Señalemos que existe también una subespecie del espacio esférico: el «espacio elíptico». Cabe concebirlo como un espacio esférico en el que los «puntos opuestos» son idénticos (no distinguibles). Así pues, un mundo elíptico cabe contemplarlo, en cierto modo, como un mundo esférico centralmente simétrico.
De lo dicho se desprende que es posible imaginar espacios cerrados que no tengan límites. Entre ellos destaca por su simplicidad el espacio esférico (o el elíptico), cuyos puntos son todos equivalentes. Según todo lo anterior, se les plantea a los astrónomos y a los físicos un problema altamente interesante, el de si el mundo en que vivimos es infinito o, al estilo del mundo esférico, finito. Nuestra experiencia no basta ni de lejos para contestar a esta pregunta. La teoría de la relatividad general permite, sin embargo, responder con bastante seguridad y resolver de paso la dificultad explicada en el epígrafe 30.
#### 32. LA ESTRUCTURA DEL ESPACIO SEGÚN LA TEORÍA DE LA RELATIVIDAD GENERAL
Según la teoría de la relatividad general, las propiedades geométricas del espacio no son independientes, sino que vienen condicionadas por la materia. Por eso no es posible inferir nada sobre la estructura geométrica del mundo a menos que la reflexión se funde en el conocimiento del estado de la materia. Sabemos, por la experiencia, que con una elección conveniente del sistema de coordenadas las velocidades de las estrellas son pequeñas frente a la velocidad de propagación de la luz. Así pues, si suponemos que la materia está en reposo, podremos conocer la estructura del universo en una primera y tosquísima aproximación.
Por anteriores consideraciones sabemos ya que el comportamiento de reglas de medir y relojes viene influido por los campos de gravitación, es decir, por la distribución de la materia. De aquí se sigue ya que la validez exacta de la geometría euclidiana en nuestro mundo es algo que no entra ni siquiera en consideración. Pero en sí es concebible que nuestro mundo difiera poco de un mundo euclidiano, idea que viene abonada por el hecho de que, según los cálculos, incluso masas de la magnitud de nuestro Sol influyen mínimamente en la métrica del espacio circundante. Cabría imaginar que nuestro mundo se comporta en el aspecto geométrico como una superficie que está irregularmente curvada pero que en ningún punto se aparta significativamente de un plano, lo mismo que ocurre, por ejemplo, con la superficie de un lago rizado por débiles olas. A un mundo de esta especie podríamos llamarlo con propiedad cuasi-euclidiano, y sería espacialmente infinito. Los cálculos indican, sin embargo, que en un mundo cuasi-euclidiano la densidad media de materia tendría que ser nula. Por consiguiente, un mundo semejante no podría estar poblado de materia por doquier; ofrecería el cuadro insatisfactorio que dibujamos en el epígrafe 30.
Si la densidad media de materia en el mundo no es nula (aunque se acerque mucho a cero), entonces el mundo no es cuasi-euclidiano. Los cálculos demuestran más bien que, con una distribución uniforme de materia, debería ser necesariamente esférico (o elíptico). Dado que la materia está distribuida de manera localmente no uniforme, el mundo real diferirá localmente del comportamiento esférico, es decir, será cuasi-esférico. Pero necesariamente tendrá que ser finito. La teoría proporciona incluso una sencilla relación entre la extensión espacial del mundo y la densidad media de materia en él.
### Apéndice 1
### UNA DERIVACIÓN SENCILLA DE LA TRANSFORMACIÓN DE LORENTZ
### (Anexo al epígrafe 11)
Con la orientación relativa de los sistemas de coordenadas indicada en la figura de la página 214, los ejes de abscisas de los dos sistemas coinciden constantemente. Aquí podemos desglosar el problema y considerar primero únicamente sucesos que estén localizados en el eje de las _X_. Un suceso semejante viene dado, respecto al sistema de coordenadas _K_ , por la abscisa _x_ y el tiempo _t_ , y respecto a _K'_ por la abscisa _x'_ y el tiempo _t'_. Se trata de hallar _x'_ y _t'_ cuando se conocen _x_ y _t_.
Una señal luminosa que avanza a lo largo del eje _X_ positivo se propaga según la ecuación
_x_ = _ct_
o bien
_x_ – _ct_ = 0.
(1)
Dado que la misma señal luminosa debe propagarse, también respecto a _K'_ , con la velocidad _c_ , la propagación respecto a _K'_ vendrá descrita por la fórmula análoga
_x'_ – _ct'_ = 0.
(2)
Aquellos puntos del espacio-tiempo (sucesos) que cumplen (1) tienen que cumplir también (2), lo cual será el caso cuando se cumpla en general la relación
( _x'_ – _ct'_ ) = λ( _x_ – _ct_ )
(3)
donde λ es una constante; pues, según (3), la anulación de _x_ – _ct_ conlleva la de _x'_ – _ct'_.
Un razonamiento totalmente análogo, aplicado a rayos de luz que se propaguen a lo largo del eje _X_ negativo, proporciona la condición:
_x'_ \+ _ct'_ = µ( _x_ \+ _ct_ ).
(4)
Si se suman y restan, respectivamente, las ecuaciones (3) y (4), introduciendo por razones de comodidad las constantes

_a_ = (λ + µ)/2  y  _b_ = (λ − µ)/2

en lugar de las constantes λ y µ, se obtiene

_x'_ = _ax_ − _bct_ ,

_ct'_ = _act_ − _bx_ .
(5)
Con ello quedaría resuelto el problema, siempre que conozcamos las constantes _a_ y _b_ ; éstas resultan de las siguientes consideraciones.
Para el origen de _K'_ se cumple constantemente _x'_ = 0, de manera que, por la primera de las ecuaciones (5):
Por tanto, si llamamos _v_ a la velocidad con que se mueve el origen de _K'_ respecto a _K_ , tenemos que
(6)
El mismo valor de _v_ se obtiene a partir de (5), al calcular la velocidad de otro punto de _K'_ respecto a _K_ o la velocidad (dirigida hacia el eje _X_ negativo) de un punto _K_ respecto a _K'_. Por tanto, es posible decir en resumen que _v_ es la velocidad relativa de ambos sistemas.
Además, por el principio de la relatividad, está claro que la longitud, juzgada desde _K_ , de una regla de medir unitaria que se halla en reposo respecto a _K'_ tiene que ser exactamente la misma que la longitud, juzgada desde _K'_ de una regla unidad que se halla en reposo respecto a _K_. Para ver qué aspecto tienen los puntos del eje _X'_ vistos desde _K_ basta con tomar una «fotografía instantánea» de _K'_ desde _K_ ; lo cual significa dar a _t_ (tiempo de _K_ ) un valor determinado, por ejemplo _t_ = 0. De la primera de las ecuaciones (5) se obtiene:
_x'_ – _ax_.
Así pues, dos puntos del eje _X'_ que medidos en _K'_ distan entre sí _x'_ = 1, tienen en nuestra instantánea la separación:
(7)
Pero si se toma la fotografía desde _K'_ ( _t'_ = 0), se obtiene a partir de (5), por eliminación de _t_ y teniendo en cuenta (6):
De aquí se deduce que dos puntos del eje _X_ que distan 1 (respecto a _K_ ) tienen en nuestra instantánea la separación
(7a)
Teniendo en cuenta que, por lo que llevamos dicho, las dos fotografías deben ser iguales, ∆ _x_ en (7) tiene que ser igual a ∆ _x'_ en (7a), de modo que se obtiene:
(7b)
Las ecuaciones (6) y (7b) determinan las constantes _a_ y _b_. Sustituyendo en (5) se obtienen las ecuaciones cuarta y quinta de la que dimos en el epígrafe 11.
(8)
Con ello hemos obtenido la transformación de Lorentz para sucesos localizados en el eje _X_ ; dicha transformación satisface la condición
_x'_ – _c_ 2 _t'_ 2 = _x_ 2 – _c_ 2 _t'_ 2
(8a)
La extensión de este resultado a sucesos que ocurren fuera del eje _X_ se obtiene reteniendo las ecuaciones (8) y añadiendo las relaciones
(9)
Veamos ahora que con ello se satisface el postulado de la constancia de la velocidad de la luz para rayos luminosos de dirección arbitraria, tanto para el sistema _K_ como también para el _K'_.
Supongamos que en el instante _t_ = 0 se emite una señal luminosa desde el origen de _K_. Su propagación obedece a la ecuación:
o bien, elevando el cuadrado,
_x_ 2 \+ _y_ 2 \+ _z_ 2 – _c_ 2 _t_ 2 = 0.
(10)
La ley de propagación de la luz, en conjunción con el postulado de la relatividad, exige que la propagación de esa misma señal, pero juzgada desde _K'_ , ocurra según la fórmula correspondiente
_r'_ = _ct'_
o bien
_x'_ 2 \+ y'2 \+ _z'_ 2 – _c_ 2 _t'_ 2 = 0.
(10a)
Para que la ecuación (10a) sea una consecuencia de (10), tiene que cumplirse que:
_x'_ 2 \+ _y'_ 2 \+ _z'_ 2 – _c_ 2 _t'_ 2 = _σ_ ( _x_ 2 \+ _y_ 2 \+ _z_ 2 – _c_ 2 _t_ 2).
(11)
Puesto que la ecuación (8a) tiene que cumplirse para los puntos situados sobre el eje _X_ , ha de ser _σ_ = 1. Es fácil ver que la transformación de Lorentz cumple realmente la ecuación (11) con _σ_ = 1, pues (11) es una consecuencia de (8a) y (9), y por tanto también de (8) y (9). Con ello queda derivada la transformación de Lorentz.
Es preciso ahora generalizar esta transformación de Lorentz, representada por (8) y (9). Evidentemente es inesencial que los ejes de _K'_ se elijan espacialmente paralelos a los de _K_. Tampoco es esencial que la velocidad de traslación de _K'_ respecto a _K_ tenga la dirección del eje _X_. La transformación de Lorentz, en este sentido general, cabe desglosarla —como muestra un simple razonamiento— en dos transformaciones, a saber: transformaciones de Lorentz en sentido especial y transformaciones puramente espaciales que equivalen a la sustitución del sistema de coordenadas rectangulares por otro con ejes dirigidos en direcciones distintas.
Matemáticamente se puede caracterizar la transformación de Lorentz generalizada de la siguiente manera:
Dicha transformación expresa _x'_ , _y'_ , _z'_ , _t'_ mediante unas funciones homogéneas y lineales de _x_ , _y_ , _z_ , _t_ que hacen que la relación
_x'_ 2 \+ _y'_ 2 \+ _z'_ 2 – _c_ 2 _t'_ 2 = _x_ 2 \+ _y_ 2 \+ _z_ 2 – _c_ 2 _t_ 2
(11a)
se cumpla idénticamente. Lo cual quiere decir: si se sustituye a la izquierda _x'_ , etc. por sus expresiones en _x_ , _y_ , _z_ , _t_ , entonces el miembro izquierdo de (11a) es igual al derecho.
### Apéndice 2
### EL MUNDO CUADRIDIMENSIONAL DE MINKOWSKI
### (Anexo al epígrafe 17)
La transformación de Lorentz generalizada puede caracterizarse de un modo aún más sencillo si en lugar de _t_ se introduce como variable temporal la variable imaginaria √−1 _ct_. Si de acuerdo con esto ponemos
_x_ 1 = _x_
_x_ 2 = _y_
_x_ 3 = _z_
_x_ 4 = √−1 _ct_ ,
y análogamente para el sistema con primas _K'_ , entonces la condición que satisface idénticamente la transformación será:
_x_ '12 + _x_ '22 + _x_ '32 + _x_ '42 = _x_ 12 + _x_ 22 + _x_ 32 + _x_ 42
(12)
Con la elección de «coordenadas» que acabamos de indicar, la ecuación (11a) se convierte en la (12).
De (12) se desprende que la coordenada temporal imaginaria _x_ 4 entra en la condición de transformación en pie de igualdad con las coordenadas espaciales _x_ 1, _x_ 2, _x_ 3. A eso responde el que, según la teoría de la relatividad, el «tiempo» _x_ 4 intervenga en las leyes de la naturaleza en la misma forma que las coordenadas espaciales _x_ 1, _x_ 2, _x_ 3.
Minkowski llamó «universo» o «mundo» al continuo cuadridimensional descrito por las «coordenadas» _x_ 1, _x 2_, _x 3_, _x_ 4, y «punto del universo» o «punto del mundo» al suceso puntual. La física deja de ser un _suceder_ en el espacio tridimensional para convertirse en cierto modo en un _ser_ en el «mundo» cuadridimensional.
Este «mundo» cuadridimensional guarda un profundo parecido con el «espacio» tridimensional de la geometría analítica (euclidiana). Pues si en este último se introduce un nuevo sistema de coordenadas cartesianas ( _x_ '1, _x_ '2, _x_ '3) con el mismo origen, entonces _x'_ 1, _x'_ 2, _x'_ 3 son funciones homogéneas y lineales de _x_ 1, _x_ 2, _x_ 3 que cumplen idénticamente la ecuación
_x_ '12 + _x_ '22 + _x_ '32 = _x_ 12 + _x_ 22 + _x_ 32
La analogía con (12) es completa. El mundo de Minkowski cabe contemplarlo formalmente como un espacio euclidiano cuadridimensional (con coordenada temporal imaginaria); la transformación de Lorentz se corresponde con una «rotación» del sistema de coordenadas en el «universo» cuadridimensional.
### Apéndice 3
### SOBRE LA CONFIRMACIÓN DE LA TEORÍA DE LA RELATIVIDAD GENERAL POR LA EXPERIENCIA
Bajo una óptica epistemológica esquemática, el proceso de crecimiento de una ciencia experimental aparece como un continuo proceso de inducción. Las teorías emergen como resúmenes de una cantidad grande de experiencias individuales en leyes empíricas, a partir de las cuales se determinan por comparación las leyes generales. Desde este punto de vista, la evolución de la ciencia parece análoga a una obra de catalogación o a un producto de mera empiria.
Esta concepción, sin embargo, no agota en modo alguno el verdadero proceso, pues pasa por alto el importante papel que desempeñan la intuición y el pensamiento deductivo en el desarrollo de la ciencia exacta. En efecto, tan pronto como una ciencia sobrepasa el estadio más primitivo, los progresos teóricos no nacen ya de una simple actividad ordenadora. El investigador, animado por los hechos experimentales, construye más bien un sistema conceptual que se apoya lógicamente en un número por lo general pequeño de supuestos básicos que se denominan axiomas. A un sistema conceptual semejante lo llamamos teoría. La teoría obtiene la justificación de su existencia por el hecho de conectar entre sí un número grande de experiencias aisladas; en esto reside su «verdad».
Frente a un mismo complejo de hechos de la experiencia puede haber diversas teorías que difieran mucho entre sí. La coincidencia de las teorías en las consecuencias accesibles a la experiencia puede ser tan profunda que resulte difícil encontrar otras, también accesibles a la experiencia, respecto a las cuales difieran. Un caso semejante, y de interés general, se da por ejemplo en el terreno de la biología, en la teoría darwiniana de la evolución por selección en la lucha por la existencia y en aquella otra teoría de la evolución que se funda en la hipótesis de la herencia de caracteres adquiridos.
Otro caso semejante de profunda concordancia de las consecuencias es el de la mecánica newtoniana, por un lado, y la teoría de la relatividad general, por otro. La concordancia llega hasta tal punto que hasta ahora se han podido encontrar muy pocas consecuencias de la teoría de la relatividad general a las cuales no conduzca también la física anterior, y eso a pesar de la radical diversidad de los supuestos básicos de una y otra teoría. Vamos a contemplar aquí de nuevo estas importantes consecuencias y comentar también brevemente las experiencias acumuladas hasta ahora al respecto.
A) EL MOVIMIENTO DEL PERIHELIO DE MERCURIO
Según la mecánica newtoniana y la ley de gravitación de Newton, un único planeta que girara en torno a un sol describiría una elipse alrededor de él (o más exactamente, alrededor del centro de gravedad común de ambos). El sol (o bien el centro de gravedad común) yace en uno de los focos de la elipse orbital, de manera que la distancia sol-planeta crece a lo largo de un año planetario hasta un máximo, para luego volver a decrecer hasta el mínimo. Si en lugar de la ley de atracción newtoniana se introduce en los cálculos otra distinta, entonces se comprueba que el movimiento según esta nueva ley tendría que seguir siendo tal que la distancia sol-planeta oscilase en un sentido y otro; pero el ángulo descrito por la línea solplaneta durante uno de esos períodos (de perihelio a perihelio) diferiría de 360°. La curva de la órbita no sería entonces cerrada, sino que llenaría con el tiempo una porción anular del plano orbital (entre el círculo de máxima y el de mínima distancia perihélica).
Según la teoría de la relatividad general, que difiere algo de la newtoniana, tiene que haber también una pequeña desviación de esta especie respecto al movimiento orbital previsto por Kepler-Newton, de manera que el ángulo descrito por el radio sol-planeta entre un perihelio y el siguiente difiera de un ángulo completo de rotación (es decir, del ángulo 2π, en la medida angular absoluta que es habitual en física) en la cantidad
( _a_ es el semieje mayor de la elipse, _e_ su excentricidad, _c_ la velocidad de la luz, _T_ el período de revolución). Expresado de otra manera: según la teoría de la relatividad general, el eje mayor de la elipse rota alrededor del Sol en el sentido del movimiento orbital. Esta rotación es, de acuerdo con la teoría, de 43 segundos de arco cada 100 años en el caso del planeta Mercurio, mientras que en los demás planetas de nuestro Sol sería tan pequeña que escapa a toda constatación.
Los astrónomos han comprobado efectivamente que la teoría de Newton no basta para calcular el movimiento observado de Mercurio con la precisión que pueden alcanzar hoy día las observaciones. Tras tener en cuenta todas las influencias perturbadoras que ejercen los demás planetas sobre Mercurio, se comprobó (Leverrier, 1859, y Newcomb, 1895) que en el movimiento del perihelio de la órbita de Mercurio quedaba sin explicar una componente que no difiere perceptiblemente de los + 43 segundos por siglo que acabamos de mencionar. La imprecisión de este resultado empírico, que concuerda con el resultado de la teoría general de la relatividad, es de pocos segundos.
B) LA DESVIACIÓN DE LA LUZ POR EL CAMPO GRAVITACIONAL
En el epígrafe 22 explicamos que, según la teoría de la relatividad general, cualquier rayo de luz tiene que experimentar en el seno de un campo gravitacional una curvatura que es análoga a la que experimenta la trayectoria de un cuerpo al lanzarlo a través de ese campo. De acuerdo con la teoría, un rayo de luz que pase al lado de un cuerpo celeste sufrirá una desviación hacia él; el ángulo de desviación _a_ , para un rayo luminoso que pase a una distancia de ∆ radios solares del Sol, debe ser de
Añadamos que, de acuerdo con la teoría, la mitad de esta desviación es producto del campo de atracción (newtoniano) del Sol; la otra mitad, producto de la modificación geométrica («curvatura») del espacio provocada por aquél.
Este resultado brinda la posibilidad de una comprobación experimental mediante fotografías estelares tomadas durante un eclipse total de Sol. Es necesario esperar a este fenómeno porque en cualquier otro momento la atmósfera, iluminada por la luz solar, resplandece tanto que las estrellas próximas al Sol resultan invisibles. El fenómeno esperado se deduce fácilmente de la figura siguiente. Si no existiese el Sol _S_ , cualquier estrella situada a distancia prácticamente infinita se vería en la dirección _R_ 1. Pero como consecuencia de la desviación provocada por el Sol se la ve en la dirección _R_ 2, es decir, separada del centro del Sol un poco más de lo que en realidad está.
La prueba se desarrolla en la práctica de la siguiente manera. Durante un eclipse de Sol se fotografían las estrellas situadas en las inmediaciones de aquél. Se toma además una segunda fotografía de las mismas estrellas cuando el Sol se halla en otro lugar del cielo (es decir, algunos meses antes o después). Las imágenes estelares fotografiadas durante el eclipse de Sol deben estar entonces desplazadas radialmente hacia afuera (alejándose del centro del Sol) respecto a la fotografía de referencia, correspondiendo el desplazamiento al ángulo _a_.
Hemos de agradecer a la Astronomical Royal Society la contrastación de este importante resultado. Sin dejarse turbar por la guerra ni por las consiguientes dificultades de índole psicológica, envió a varios de sus astrónomos más destacados (Eddington, Crommelin, Davidson) y organizó dos expediciones con el fin de hacer las fotografías pertinentes durante el eclipse de Sol del 29 de mayo de 1919 en Sobral (Brasil) y en la isla Príncipe (África occidental). Las desviaciones relativas que eran de esperar entre las fotografías del eclipse y las de referencias ascendían tan sólo a unas pocas centésimas de milímetro. Así pues, las demandas que se impuso a la precisión de las fotografías y a su medición no eran pequeñas.
El resultado de la medición confirmó la teoría de manera muy satisfactoria. Las componentes transversales de las desviaciones estelares observadas y calculadas (en segundos de arco) se contienen en la siguiente tabla:
C) EL CORRIMIENTO AL ROJO DE LAS RAYAS ESPECTRALES
En el epígrafe 23 se demuestra que en un sistema _K'_ que rota respecto a un sistema de Galileo _K_ , la velocidad de marcha de relojes en reposo y de idéntica constitución depende de la posición. Vamos a examinar cuantitativamente esta dependencia. Un reloj colocado a distancia _r_ del centro del disco tiene, respecto a _K_ , la velocidad
_v_ = _wr_ ,
donde _w_ designa la velocidad de rotación del disco ( _K'_ ) respecto a _K_. Si llamamos _v_ 0 al número de golpes del reloj por unidad de tiempo (velocidad de marcha) respecto a _K_ cuando el reloj está en reposo, entonces la velocidad de marcha _v_ del reloj cuando se mueve con velocidad _v_ respecto a _K_ y está en reposo respecto al disco es, según el epígrafe 12,
que se puede escribir también, con suficiente precisión, así
o bien
Si llamamos +Φ a la diferencia de potencial de la fuerza centrífuga entre el lugar que ocupa el reloj y el punto medio del disco, es decir, al trabajo (con signo negativo) que hay que aportar en contra de la fuerza centrífuga a la unidad de masa para transportarla desde su posición en el disco móvil hasta el centro, entonces tenemos que
con lo cual resulta
De aquí se desprende en primer lugar que dos relojes idénticos pero colocados a diferente distancia del centro del disco marchan a distinta velocidad, resultado que también es válido desde el punto de vista de un observador que gire con el disco.
Dado que —juzgado desde el disco— existe un campo gravitacional cuyo potencial es Φ, el resultado obtenido valdrá para campos gravitacionales en general. Y como además un átomo que emite rayas espectrales es posible considerarlo como un reloj, tenemos el siguiente teorema:
_Un átomo absorbe o emite una frecuencia que depende del potencial del campo gravitatorio en el que se encuentra_.
La frecuencia de un átomo que se halle en la superficie de un cuerpo celeste es algo menor que la de un átomo del mismo elemento que se encuentre en el espacio libre (o en la superficie de otro astro menor). Dado que donde _K_ es la constante de gravitación newtoniana, _M_ la masa y _r_ el radio del cuerpo celeste, debería producirse un corrimiento hacia el rojo en las rayas espectrales generadas en la superficie de las estrellas si se las compara con las generadas en la superficie de la Tierra, concretamente en la cuantía
En el Sol, el corrimiento al rojo que debería esperarse es de unas dos millonésimas de longitud de onda. En el caso de las estrellas fijas no es posible hacer un cálculo fiable, porque en general no se conoce ni la masa _M_ ni el radio _r_.
Que este efecto exista realmente o no es una cuestión abierta en cuya solución trabajan actualmente con gran celo los astrónomos. En el caso del Sol es difícil juzgar la existencia del efecto por ser muy pequeño. Mientras que Grebe y Bachem (Bonn) —sobre la base de sus propias mediciones y de las de Evershed y Schwarzschild en la así llamada banda cyan— así como Perot (sobre la base de observaciones propias) consideran probada la existencia del efecto, otros investigadores, especialmente W. H. Julius y S. Sohn, son de la opinión contraria o no están convencidos de la fuerza probatoria del anterior material empírico.
En las investigaciones estadísticas realizadas sobre las estrellas fijas no hay duda de que existen por término medio corrimientos de las rayas espectrales hacia el extremo de las ondas largas del espectro. Sin embargo, la elaboración que se ha hecho hasta ahora del material no permite todavía ninguna decisión acerca de si esos movimientos se deben realmente al efecto de la gravitación. El lector podrá encontrar en el trabajo de E. Freundlich «Prüfung der allgemeinen Relativitätstheorie» ( _Die Naturwissenschaften_ , 1919, H. 35, p. 520, Verlag Jul. Spinger, Berlín) una recopilación del material empírico, junto a un análisis detenido desde el punto de vista de la cuestión que aquí nos interesa.
En cualquier caso, los años venideros traerán la decisión definitiva. Si no existiese ese corrimiento al rojo de las rayas espectrales debido al potencial gravitatorio, la teoría de la relatividad general sería insostenible. Por otro lado, el estudio del corrimiento de las rayas espectrales, caso de que se demuestre que su origen está en el potencial gravitatorio, proporcionará conclusiones importantes sobre la masa de los cuerpos celestes.
### Apéndice 4
### LA ESTRUCTURA DEL ESPACIO EN CONEXIÓN CON LA TEORÍA DE LA RELATIVIDAD GENERAL
Nuestro conocimiento sobre la estructura global del espacio («problema cosmológico») ha experimentado, desde la aparición de la primera edición de este librito, una evolución importante, que es preciso mencionar incluso en una exposición de carácter divulgativo.
Mis iniciales consideraciones sobre este problema se basaban en dos hipótesis:
1. La densidad media de materia en todo el espacio es distinta de 0 e igual en todas partes.
2. La magnitud (o el «radio») del universo es independiente del tiempo.
Estas dos hipótesis demostraron ser compatibles según la teoría de la relatividad general, pero únicamente cuando se añadía a las ecuaciones de campo un término hipotético que ni era exigido por la propia teoría ni tampoco parecía natural desde el punto de vista teórico («término cosmológico de las ecuaciones de campo»).
La hipótesis 2 me parecía a la sazón inevitable, pues por aquel entonces pensaba que, de apartarse de ella, se caería en especulaciones sin límite.
Sin embargo, el matemático ruso Friedman descubrió, allá por los años veinte, que desde el punto de vista puramente teórico era más natural otro supuesto diferente. En efecto, Friedman se dio cuenta de que era posible mantener la hipótesis 1 sin introducir en las ecuaciones de campo de la gravitación el poco natural término cosmológico, siempre que uno se decidiese a prescindir de la hipótesis 2. Pues las ecuaciones de campo originales admiten una solución en la que el «radio del mundo» depende del tiempo (espacio en expansión). En este sentido cabe afirmar con Friedman que la teoría exige una expansión del espacio.
[...]
# III
## OTRAS CONSIDERACIONES SOBRE LA RELATIVIDAD
¿Qué sucede realmente cuando una bola de billar choca con otra? Antes del siglo XX, se creía que la bola en movimiento y la bola en reposo sólo interactuaban durante el breve momento en que estaban en contacto, tal como indica el sentido común. Esto está muy bien para bolas de billar, pero ¿qué hay de las fuerzas de la gravedad y el electromagnetismo, que parece que actúan a distancia? Las hipótesis de los científicos decían que esas fuerzas debían propagarse a través de un medio ponderable, conocido como el «éter luminífero», parecido a la propagación de una onda de choque a través del aire.
El éter, sin embargo, no aguantaba un escrutinio científico riguroso. En 1887, Albert Michelson y Edward Morley demostraron que, sea lo que fuera el éter, no se comportaba como materia normal. Por ejemplo, una onda de agua viajando a lo largo del cauce de un río se propagaría más deprisa si lo hacía en el mismo sentido del movimiento del agua que si lo hacía en sentido contrario. No obstante, en el caso de la luz, Michelson y Morley probaron que la velocidad de propagación era la misma independientemente del movimiento relativo entre el observador y el hipotético éter.
En «El éter y la teoría de la relatividad», Einstein muestra que la relatividad especial se basa en el hecho experimental de que la luz viaja a velocidad constante para todos los observadores, y por tanto el éter, sea lo que sea, no puede ser nada parecido a la materia común. Su teoría de la relatividad general aún complica más este asunto al proponer que la gravedad proporciona la estructura del mismo espacio. Dicho llanamente, la gravedad se define incluso en el espacio «vacío», y en consecuencia, tiene que haber _algo_.
Este «algo» es el éter, o, en lenguaje moderno, un campo. La relatividad general y la teoría del electromagnetismo de Maxwell representan las primeras teorías de campos: descripciones de cómo funciona el mundo en términos de campos omnipresentes en lugar de con partículas puntuales. En muchos aspectos, ésta es una de las contribuciones más importantes de la relatividad a la física. Bajo un punto de vista moderno, todas las fuerzas nacen de campos. Las bolas de billar descritas anteriormente en realidad no chocan, sino que sus campos electromagnéticos provocan la repulsión recíproca a escalas muy pequeñas. En la teórica cuántica de campos, desarrollada a mediados del siglo XX, unos cuarenta años después de este trabajo, no sólo las fuerzas sino también las partículas mismas tienen su origen en el campo. Así pues, considere usted este trabajo como un comentario de transición entre el escenario clásico de las partículas de Isaac Newton y el escenario moderno en el cual el universo se compone fundamentalmente de campos.
### 1
### EL ÉTER Y LA TEORÍA DE LA RELATIVIDAD*
Señores consejeros, profesores, doctores y estudiantes de esta universidad, me dirijo a todos ustedes, así como a todos aquellos, señoras y señores, que honran esta celebración con su presencia:
¿Cómo se les ocurre a los físicos desarrollar la idea de la existencia de otra materia, llamada éter, cuando ya contaban con esa idea abstraída de la cotidianidad que es la materia ponderable? La razón de esto hay que buscarla en los fenómenos que han dado origen a la teoría de las fuerzas que actúan a distancia, y en las propiedades de la luz, que nos han llevado a la teoría de ondas. Queremos dedicar aquí a ambas teorías unas breves consideraciones.
Fuera de la física, el pensamiento no sabe nada de fuerzas que actúan a distancia. Al intentar penetrar en las causas de las experiencias que hemos tenido con los cuerpos, parece a primera vista que no existen más efectos de transformación que aquellos que se producen por contacto directo, por ejemplo, la transmisión de movimiento mediante choque, presión y tracción, o el calentamiento o la iniciación de una combustión mediante una llama, etc. Sin embargo, en la experiencia cotidiana desempeña un papel fundamental la gravedad, es decir, una fuerza que actúa a distancia. Dado que en la experiencia de cada día la gravedad aparece como algo constante, independiente de cualquier causa cambiante en el espacio o en el tiempo, no le atribuimos en la vida cotidiana causa alguna, por lo que desconocemos su carácter de fuerza que actúa a distancia. Gracias a la teoría de la gravitación de Newton se pudo establecer por primera vez una causa para la gravedad, siendo ésta interpretada como una fuerza que actúa a distancia y que se deriva de la masa. La teoría de Newton fue sin duda el mayor paso que se ha dado en toda la historia en relación con el esfuerzo por establecer un encadenamiento causal de los fenómenos naturales. Y, sin embargo, esta teoría produjo un vivo malestar entre los contemporáneos de Newton, porque parecía entrar en contradicción con el otro principio derivado de la experiencia, a saber, que sólo se produce un efecto de transformación a través del contacto, y no por una actuación súbita a distancia.
La tendencia que impulsa a los seres humanos hacia el conocimiento soporta con dificultad un dualismo como éste. ¿Cómo se podría salvar la uniformidad en la interpretación de los fenómenos naturales? Una alternativa sería intentar considerar que aquellas fuerzas que se nos presentan como fuerzas de contacto son en realidad fuerzas a distancia que se hacen sentir sólo cuando el alejamiento es muy pequeño; ésta era la opción que preferían casi siempre aquellos sucesores de Newton que seguían fielmente sus enseñanzas. La otra alternativa era suponer que las fuerzas newtonianas que actuaban a distancia sólo lo hacían aparentemente, ya que en realidad se transmitían a través de un medio que impregnaba todo el espacio, ya fuera mediante movimientos o por deformación elástica de dicho medio. Así el afán de unificar nuestra interpretación de la naturaleza de las fuerzas dio como resultado la hipótesis del éter. Por cierto que esta hipótesis no supuso en aquel momento absolutamente ningún avance para la teoría de la gravitación, ni para la física en general, por lo que todo el mundo se acostumbró a manejar la ley de las fuerzas de Newton como algo que no pasaba de ser un axioma reductor. Sin embargo, la hipótesis del éter iba a desempeñar de manera continuada un papel importante en el pensamiento de los físicos, aunque en la mayoría de los casos sería ante todo un papel latente.
Cuando, en la primera mitad del siglo XIX, se hizo evidente la amplia similitud que existía entre las propiedades de la luz y las de las ondas elásticas en cuerpos ponderables, la hipótesis del éter ganó un nuevo respaldo. Parecía indudable que la luz tenía que ser interpretada como un fenómeno oscilatorio en un medio elástico e inerte que llenaba el universo. También parecía que de la capacidad de polarización de la luz se deducía necesariamente que este medio —el éter— tenía que ser del tipo de un cuerpo sólido, porque las ondas transversales sólo son posibles en un cuerpo así, y no en un fluido. Se llegaba así a la teoría del éter luminífero «cuasirígido», cuyas partes sólo podían realizar, unas con relación a las otras, pequeños movimientos de deformación, que eran los correspondientes a las ondas luminosas.
Esta teoría —llamada también teoría del éter luminífero inmóvil— encontró además un importante apoyo en el experimento de Fizeau, que también fue fundamental para la teoría especial de la relatividad y del cual se concluía necesariamente que el éter luminífero no participaba en los movimientos de los cuerpos. También el descubrimiento de la aberración de la luz fue un punto a favor de la teoría del éter cuasi-rígido.
El desarrollo de la teoría de la electricidad por las vías que abrieron Maxwell y Lorentz produjo un cambio peculiar e inesperado en el desarrollo de nuestras ideas sobre el éter. Para el propio Maxwell el éter era todavía una entidad dotada de propiedades meramente mecánicas mucho más complicadas que las de los cuerpos sólidos ponderables. Pero ni Maxwell ni sus sucesores consiguieron desarrollar un modelo mecánico para el éter que pudiera aportar una interpretación mecánica satisfactoria de las leyes de Maxwell para el campo electromagnético. Estas leyes eran claras y sencillas, pero sus interpretaciones mecánicas resultaban engorrosas y estaban llenas de contradicciones. Casi sin darse cuenta y a pesar de que, desde el punto de vista de sus programas de mecánica, la situación resultaba totalmente penosa, los físicos teóricos se fueron adaptando a este estado de cosas, especialmente por la influencia de las investigaciones electrodinámicas de Heinrich Hertz. Aunque con anterioridad siempre habían exigido de cualquier teoría definitiva que se basara en conceptos básicos pertenecientes de manera exclusiva a la mecánica (por ejemplo, densidades de masa, velocidades, deformaciones, presiones), se fueron acostumbrando gradualmente a admitir las intensidades de los campos eléctricos y magnéticos como conceptos básicos junto a los de la mecánica, sin pedir una interpretación mecánica de dichos conceptos. Así se fue abandonando poco a poco la interpretación meramente mecánica de la naturaleza. Pero este cambio desembocó en un dualismo de los conceptos fundamentales que con el tiempo se hizo insoportable. Para contrarrestar este dualismo se intentó actuar a la inversa, reduciendo los fundamentos mecánicos a fundamentos eléctricos, sobre todo en los experimentos con rayos y rayos catódicos de alta velocidad, que hicieron zozobrar la confianza que se tenía en la estricta validez de las ecuaciones de la mecánica de Newton.
En el caso de Heinrich Hertz el mencionado dualismo es aún más fuerte. En sus trabajos aparece la materia no sólo como portadora de velocidades, energía cinética y fuerzas mecánicas de presión, sino también de campos electromagnéticos. Dado que estos campos aparecen asimismo en el vacío —es decir, en el éter—, también el éter se presenta como portador de campos electromagnéticos. Se muestra además como algo semejante a la materia ponderable y coordinado con ella. Dentro de la materia toma parte en los movimientos de ésta, y siempre tiene en el vacío una velocidad tal, que se encuentra distribuido por igual en todo el espacio. El éter de Hertz fundamentalmente no se diferencia en nada de la materia ponderable (que en parte está constituida por éter).
La teoría de Hertz no sólo presenta el defecto de atribuir a la materia y al éter unas condiciones, por una parte mecánicas y por otra eléctricas, que en ningún contexto racional son compatibles, sino que contradice también el resultado del importante experimento de Fizeau sobre la velocidad de propagación de la luz en los fluidos en movimiento, así como otros experimentos fiables.
Así estaban las cosas cuando intervino H. A. Lorentz. Acomodó la teoría a la experiencia y lo consiguió mediante una asombrosa simplificación de los fundamentos teóricos. Fue el avance más importante en la teoría de la electricidad desde los tiempos de Maxwell y lo logró privando al éter de sus propiedades mecánicas y a la materia de sus propiedades electromagnéticas. Al igual que en un espacio vacío, también en el interior de los cuerpos materiales se encontraba exclusivamente el éter, no la materia pensada en el sentido de los átomos, como lugar donde se originan los campos electromagnéticos. Según Lorentz, sólo las partículas elementales de la materia son capaces de realizar movimientos; su actividad electromagnética se produce únicamente porque son portadoras de cargas eléctricas. De esta manera Lorentz consiguió reducir todos los fenómenos electromagnéticos a las ecuaciones de campo de Maxwell para campos en el vacío.
Por lo que respecta a la naturaleza mecánica del éter de Lorentz, aunque suene a broma, se puede decir que la inmovilidad es la única propiedad mecánica que H. A. Lorentz le dejó. A esto habría que añadir que toda la modificación en la concepción del éter que necesitaba la teoría especial de la relatividad consistía en privar al éter de su última característica mecánica, es decir, de su inmovilidad. Enseguida explicaremos cómo hay que entender esto.
A la teoría del espacio-tiempo y a la cinemática de la teoría especial de la relatividad les ha servido como modelo la teoría del campo electromagnético de Maxwell y Lorentz. Es por ello que esta teoría cumple las condiciones de la teoría especial de la relatividad; pero, desde el punto de vista de esta última, la teoría del campo electromagnético adquiere un aspecto nuevo. Sea K un sistema de coordenadas con respecto al cual el éter de Lorentz se encuentra en reposo, con lo cual las ecuaciones de Maxwell y Lorentz son por lo pronto válidas con respecto a K. Sin embargo, según la teoría especial de la relatividad, estas mismas ecuaciones son válidas también en el mismo sentido con respecto a cualquier nuevo sistema de coordenadas K', el cual con respecto a K se encuentra realizando un movimiento de traslación uniforme. Ahora surge una pregunta inquietante: ¿Por qué debo hacer una distinción especial en la teoría para el sistema K mediante la hipótesis de que el éter se encuentra en reposo con respecto a él, si este sistema es en sus aspectos físicos totalmente equivalente a los sistemas K'? Esta asimetría del edificio teórico, al que no corresponde asimetría alguna del sistema de las experiencias, es insoportable para cualquier físico teórico. La equivalencia física de K y K', junto con la hipótesis de que el éter está en reposo con respecto a K, pero se mueve con respecto a K', no es realmente incorrecta desde un punto de vista lógico, pero sí que es inaceptable.
El punto de vista que se ha de asumir a continuación con respecto a este estado de cosas parece ser el siguiente: el éter no existe. Los campos electromagnéticos no son las condiciones en que se encuentra un medio, sino realidades autónomas que no han de atribuirse a ninguna otra cosa y que no están vinculadas a vehículo alguno, exactamente igual que los átomos de la materia ponderable. Esta interpretación se aproxima más a la verdad, porque, según la teoría de Lorentz, la radiación electromagnética lleva consigo impulso y energía, al igual que la materia ponderable, y también porque la materia y la radiación, según la teoría especial de la relatividad, son sólo formas diferentes de una energía repartida, teniendo en cuenta que la masa ponderable pierde su posición privilegiada y sólo aparece como una forma especial de la energía.
Sin embargo, si se piensa más detenidamente, se ve que esta negación del éter en virtud del principio especial de la relatividad no es un requisito necesario. Se puede suponer la existencia de un éter; sólo que entonces hay que renunciar a atribuirle un determinado estado de movimiento, es decir, mediante la abstracción se le debe privar de la última propiedad mecánica que Lorentz le había dejado. Más adelante veremos que esta manera de interpretar las cosas, cuya viabilidad lógica intento yo aclarar mediante una comparación algo forzada, se justifica con los resultados de la teoría general de la relatividad.
Pensemos en las ondas que se generan en la superficie del agua. Para describir este fenómeno podemos hacer dos cosas totalmente diferentes. En primer lugar se puede intentar descubrir cómo cambian con el tiempo las superficies en forma de onda que se encuentran entre el agua y el aire. Sin embargo, también se puede averiguar —con ayuda de pequeños cuerpos flotantes— cómo cambia con el tiempo la situación de cada una de las partículas de agua. Si, en principio, no existiera este tipo de pequeños cuerpos flotantes que nos permiten seguir el movimiento de las partículas de fluido, entonces en todo este fenómeno no se observaría más que la posición, cambiante con el tiempo, del espacio ocupado por el agua, por lo que no tendríamos ningún pretexto para suponer que el agua se compone de partículas en movimiento, aunque podríamos, no obstante, considerarla como medio.
Algo parecido sucede en el caso de los campos electromagnéticos. Nos podríamos imaginar que el campo está formado por líneas de fuerza. Si queremos considerar estas líneas de fuerza como algo material, en el sentido habitual del término, nos veremos en la tentación de considerar los fenómenos dinámicos como fenómenos de movimiento de estas líneas de fuerza, de tal modo que cada línea será observada a través del tiempo. Sin embargo, todos sabemos que este tipo de observación conduce a caer en contradicciones.
Generalizando, hemos de decir lo siguiente. Se puede pensar en varios objetos de la física en los que el concepto de movimiento no tiene aplicación alguna. No se deben considerar como objetos formados por partículas cuya evolución se pueda seguir de forma individual a través del tiempo. En el lenguaje de Minkowski esto se expresa de la siguiente manera: no todas las estructuras expandidas en el universo tetradimensional se pueden concebir como objetos formados por hilos del universo. El principio especial de la relatividad nos prohíbe concebir el éter como algo formado por partículas cuya evolución se puede seguir en el tiempo, pero la hipótesis del éter en sí misma no contradice la teoría especial de la relatividad. Sólo hay que evitar atribuir al éter cualquier estado de movimiento.
Sin embargo, desde el punto de vista de la teoría especial de la relatividad, la hipótesis del éter parece ante todo una hipótesis vacía. En las ecuaciones del campo electromagnético intervienen, además de las densidades de carga eléctrica, sólo las intensidades de campo. El desarrollo de los fenómenos electromagnéticos en el vacío parece quedar totalmente determinado en virtud de aquella ley interna, sin que ejerzan influencia alguna otras magnitudes físicas. Los campos electromagnéticos surgen como realidades últimas, no atribuibles a nada previo, y parece ante todo superfluo postular la existencia de un medio etéreo homogéneo y cerrado en sí mismo, como si las circunstancias del mismo sirvieran para interpretar dichos campos.
Pero, por otro lado, se puede aportar un importante argumento a favor de la hipótesis del éter. Negar el éter significa en última instancia aceptar que al espacio vacío no le corresponde ningún tipo de propiedades físicas. Los hechos fundamentales de la mecánica no están de acuerdo con esta interpretación. El comportamiento mecánico de un sistema de cuerpos que flotan libremente en un espacio vacío depende, no sólo de las posiciones relativas (distancias) y de las velocidades relativas, sino también de su estado de rotación, que físicamente no se puede considerar como una característica que en sí misma le corresponde al sistema. Para poder considerar la rotación del sistema como algo real, al menos desde un punto de vista formal, Newton objetivó el espacio. Dado que él incluía su espacio absoluto entre las cosas reales, también la rotación con respecto a un espacio absoluto era para él algo real. Newton hubiera podido perfectamente llamar «éter» a su espacio absoluto; en esencia se trata tan sólo de que, junto a los objetos observados, se ha de ver como real otra cosa que no es perceptible, con el fin de poder considerar la aceleración, o en su caso la rotación, como algo real.
Es cierto que Mach intentó evitar la necesidad de aceptar como real algo que no es observable, esforzándose por sustituir en la mecánica la aceleración con respecto al espacio absoluto por una aceleración media con respecto a la totalidad de las masas del universo. Sin embargo, una resistencia inercial frente a la aceleración relativa de masas situadas en posiciones alejadas presupone un efecto directo a distancia. Puesto que el físico moderno cree no poder aceptar tal efecto, con esta interpretación Mach aterriza de nuevo en el éter, que debe transmitir los efectos de la inercia. Ahora bien, la idea del éter a la que nos conducen las consideraciones de Mach se distingue en esencia del concepto de éter que tuvieron Newton, Fresnel y H. A. Lorentz. Este éter de Mach no sólo condiciona el comportamiento de la masa inercial, sino que se ve él mismo condicionado en su propio estado por las masas inerciales.
La idea de Mach se desarrolla plenamente en el éter de la teoría general de la relatividad. Según esta teoría, las características métricas del continuo espacio-tiempo son distintas en el entorno de cada uno de los puntos espacio-temporales y están condicionadas por la materia existente fuera de la zona observada. Esta variabilidad espacio-temporal de las relaciones mutuas entre varas de medir y relojes, o bien el conocimiento de que el «espacio vacío» en sentido físico no es homogéneo ni isótropo, lo cual nos obliga a describir su estado mediante las diez funciones del potencial gravitatorio gμν, ha hecho descartar definitivamente la idea de que el espacio esté físicamente vacío. Con esto el concepto de éter vuelve a adquirir un contenido claro, que desde luego está muy lejos de parecerse al del éter de la teoría ondulatoria mecánica de la luz. El éter de la teoría general de la relatividad es un medio que está en sí mismo desprovisto de cualquier característica mecánica y cinemática, pero determina los fenómenos mecánicos (y electromagnéticos).
La novedad principal del éter de la teoría general de la relatividad con respecto al éter de la teoría de Lorentz consiste en el hecho de que el estado del primero en cada lugar está determinado mediante unas leyes en forma de ecuaciones diferenciales que relacionan la materia y el estado del éter en posiciones muy cercanas, mientras que el estado del éter de la teoría de Lorentz en ausencia de campos electromagnéticos no está condicionado por nada exterior y es igual en todos los lugares. El éter de la teoría general de la relatividad se transforma en el de Lorentz cuando se reemplazan por constantes las funciones espaciales que lo describen, dejando a un lado las causas que condicionan su estado. Por lo tanto, se podría decir que el éter de la teoría general de la relatividad se deduce del éter de Lorentz mediante un proceso de relativización.
No tenemos todavía claro cuál es el papel que el nuevo éter está llamado a desempeñar dentro de la configuración del mundo que dibujará la física del futuro. Sabemos que este éter determina las relaciones métricas en el continuo espacio-tiempo, por ejemplo, las posibilidades de configuración de los cuerpos sólidos, así como los campos gravitatorios; pero no sabemos si participa de una manera esencial en la formación de las partículas eléctricas elementales. Tampoco sabemos si su estructura difiere considerablemente de la del éter de Lorentz sólo en la proximidad de masas significativas, o si la geometría de los espacios de expansión cósmica es más o menos euclidiana. Sin embargo, basándonos en las ecuaciones relativistas de la gravitación, podemos afirmar que en los espacios de grandes dimensiones cósmicas debe producirse una desviación con respecto al comportamiento euclidiano, si es que existe en el universo una densidad media positiva de la materia que sea tan pequeña como la que se supone para dichos espacios. En este caso el universo debe ser necesariamente un espacio cerrado y de tamaño finito, estando determinado dicho tamaño por el valor de esa densidad media de la materia.
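La afirmación de que el tamaño de un universo cerrado queda determinado por la densidad media de la materia puede ilustrarse con la relación del modelo estático de Einstein (1917), R = c/√(4πGρ). El siguiente esbozo en Python es sólo ilustrativo: tanto la fórmula elegida como el valor de la densidad son supuestos nuestros, no datos del texto.

```python
import math

G = 6.674e-11       # constante de gravitación universal, m^3 kg^-1 s^-2
c = 299_792_458.0   # velocidad de la luz, m/s

def radio_estatico(densidad_media):
    """Radio del universo cerrado del modelo estático de Einstein:
    R = c / sqrt(4 * pi * G * rho)."""
    return c / math.sqrt(4.0 * math.pi * G * densidad_media)

# Densidad media hipotética (valor meramente ilustrativo)
rho = 1e-26  # kg/m^3
R = radio_estatico(rho)
print(f"R = {R:.2e} m")  # del orden de 1e26 m
```

Con una densidad del orden de 10⁻²⁶ kg/m³ resulta un radio del orden de 10²⁶ m, es decir, unos diez mil millones de años luz: cuanto menor la densidad media, mayor el universo cerrado.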
Si consideramos los campos gravitatorio y electromagnético desde el punto de vista de la hipótesis del éter, veremos que entre ambos existe una notable diferencia fundamental. No hay espacio, ni porción del espacio, donde no haya un potencial gravitatorio, ya que éste proporciona al espacio sus características métricas, sin las cuales dicho espacio no se puede concebir. La existencia del campo gravitatorio va ligada de manera directa a la existencia del espacio. En cambio, se puede pensar perfectamente que una parte del espacio esté desprovista de campos electromagnéticos; por lo tanto, el campo electromagnético, al contrario que el campo gravitatorio, parece estar unido al éter sólo de una manera en cierto modo secundaria, ya que la naturaleza formal del campo electromagnético no está determinada en absoluto por la del éter gravitatorio. Por lo que sabemos hoy en día en cuanto a la teoría, parece ser como si el campo electromagnético, a diferencia del gravitatorio, se basara en un motivo formal completamente nuevo, como si la naturaleza hubiera dotado al éter gravitatorio, no de campos del tipo del electromagnético, sino de campos de un tipo completamente diferente, por ejemplo, de campos que poseen un potencial escalar.
Dado que, según nuestras concepciones actuales, las partículas elementales de la materia no son esencialmente más que unas condensaciones del campo electromagnético, nuestra imagen actual del universo conoce dos realidades totalmente diferentes desde un punto de vista conceptual, pero con un vínculo mutuo causal, que son el éter gravitatorio y el campo electromagnético, o —dicho de otro modo— el espacio y la materia.
Desde luego sería un gran avance que se consiguiera concebir de manera unificada, como una única estructura, el campo gravitatorio y el electromagnético. Sólo entonces se podría clausurar de manera satisfactoria la etapa de la física teórica que iniciaron Faraday y Maxwell. En este caso la contraposición éter-materia se desvanecería y toda la física se convertiría, como la geometría, la cinemática y la teoría de la gravitación mediante la teoría general de la relatividad, en un sistema de pensamiento cerrado en sí mismo. Un intento extraordinariamente inteligente en esta dirección es el que ha realizado el matemático H. Weyl; sin embargo, no creo que su teoría resista una confrontación con la realidad. Al pensar en el futuro cercano de la física teórica, tampoco debemos descartar incondicionalmente la posibilidad de que los hechos recogidos en la teoría cuántica puedan establecer a la teoría de campos unas fronteras insuperables.
En resumen, podemos afirmar lo siguiente: según la teoría general de la relatividad el espacio está dotado de cualidades físicas; por lo tanto, en este sentido existe un éter. Según la teoría general de la relatividad es impensable la existencia de un espacio sin éter, porque en un espacio así no sólo nos encontraríamos con que nunca se produciría la propagación de la luz, sino que además no sería posible la existencia de varas de medir o de relojes, por lo que tampoco habría distancias espacio-temporales en el sentido de la física. Sin embargo, no se puede concebir que el éter esté dotado de la propiedad característica de los medios perceptibles, que es la de estar constituidos por partes de las que se puede hacer un seguimiento en el tiempo; el concepto de movimiento no se puede aplicar al éter.
Se tiene a la física y las matemáticas como dos caras de la misma moneda. Y sin embargo, son bastante distintas. La verdad de la física y de las otras ciencias sólo puede ser establecida a través de la observación y el experimento. Y aun así, a lo máximo que puede aspirar una teoría, más que a ser demostrada como correcta, es a que no se demuestre que está equivocada; o, en términos de Karl Popper, a que no sea «refutada». Por otro lado, la matemática puede ser desarrollada y establecida incluso por alguien sin ningún contacto experimental con el mundo de las ciencias naturales.
La geometría parece ocupar una posición intermedia entre las ciencias físicas y las matemáticas puras. En «Geometría y experiencia» (1921), Einstein resalta que, mientras que en geometría las proposiciones pueden demostrarse a partir de unos axiomas dados, los axiomas mismos no. Para estudiar geometría práctica, los postulados tienen que basarse en las propiedades físicas del universo real.
_Los Elementos_ de Euclides, una de las primeras inspiraciones de Einstein como joven pensador, establecen una serie de postulados geométricos que parecen obvios a partir de la experiencia diaria. Dos líneas rectas, por ejemplo, pueden cruzarse como mucho una vez; las líneas paralelas permanecen paralelas, etc. La geometría euclidiana parece describir nuestro mundo de manera tan perfecta a escala humana, y está tan de acuerdo con nuestra intuición, que la mecánica de Newton parece fluir directamente de la geometría de Euclides.
Sin embargo, a mediados del siglo XIX, un número de matemáticos empezaron a explorar geometrías no-euclidianas, que partían de axiomas muy distintos a los de Euclides y que describían superficies curvas. Las líneas dibujadas en una esfera, por ejemplo, pueden cruzarse más de una vez. Considere el caso del globo terrestre: las líneas de longitud se cruzan en el polo sur y el polo norte. Una de las mayores contribuciones de Einstein a la física fue su apreciación de que la geometría no-euclidiana podía ser la descripción fundamentalmente correcta de la forma del universo.
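La diferencia con la geometría euclidiana puede comprobarse numéricamente: en una esfera, los ángulos de un triángulo formado por arcos de círculo máximo suman más de 180 grados. El siguiente esbozo en Python (ilustrativo, no procedente del texto) lo verifica para un triángulo con un vértice en el polo norte y dos en el ecuador, separados 90 grados:

```python
import numpy as np

def angulo_en(a, b, c):
    # Ángulo del triángulo esférico en el vértice a, entre los
    # arcos de círculo máximo a->b y a->c.
    t_ab = b - np.dot(a, b) * a   # tangente del arco a->b en a
    t_ac = c - np.dot(a, c) * a   # tangente del arco a->c en a
    t_ab /= np.linalg.norm(t_ab)
    t_ac /= np.linalg.norm(t_ac)
    return np.arccos(np.clip(np.dot(t_ab, t_ac), -1.0, 1.0))

# Vértices: polo norte y dos puntos del ecuador separados 90 grados
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])

total = angulo_en(A, B, C) + angulo_en(B, A, C) + angulo_en(C, A, B)
print(np.degrees(total))  # ≈ 270, frente a los 180 euclidianos
```

Cada uno de los tres ángulos es recto, de modo que la suma es de 270 grados; en el plano euclidiano sería exactamente de 180, y el exceso mide la curvatura de la superficie.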
La relatividad general es por tanto una forma de describir la geometría del universo. Como apuntaba John Archibald Wheeler, «la materia dice al espacio-tiempo cómo curvarse, y el espacio-tiempo dice a la materia cómo moverse». ¿Y qué significa la «curvatura del universo»? A pequeña escala, describe el movimiento de los planetas alrededor del Sol y la atracción gravitatoria entre usted y la Tierra.
A gran escala, también debería existir una curvatura general del universo. Esto puede ser comparado al caso de la Tierra, la cual tiene una esfericidad general además de pequeños baches y protuberancias (las cordilleras montañosas). La forma del universo describe si es finito o infinito, así como cuál será su destino último.
Las cuestiones que Einstein plantea en «Geometría y experiencia» todavía están presentes entre nosotros. Medidas recientes del satélite _Wilkinson Microwave Anisotropy Probe_ (WMAP) y otros experimentos sugieren que a la mayor escala posible el universo es plano, mientras que otros experimentos, como el _Laser Interferometer Gravitational-Wave Observatory_ (LIGO) o la _Laser Interferometer Space Antenna_ (LISA), cuyo lanzamiento está previsto para el 2015, pretenden medir los baches y las protuberancias del espacio-tiempo a menor escala. Sin embargo, en todas las escalas nuestra intuición sirve de poco, y sólo por observación directa podremos realizar una medida «práctica y geométrica» de la forma del espacio-tiempo.
### 2
### GEOMETRÍA Y EXPERIENCIA*
Una razón por la que las matemáticas gozan de una especial estima, por encima de todas las demás ciencias, es que sus leyes son absolutamente ciertas e indiscutibles, mientras que las de todas las demás ciencias son en alguna medida discutibles y están en constante peligro de ser derrocadas por hechos recién descubiertos. Pese a ello, quien investiga en otras áreas de la ciencia no tendría que envidiar al matemático por el hecho de que las leyes de las matemáticas se refieran a objetos de nuestra mera imaginación, y no a objetos de la realidad. No puede producir sorpresa que personas diferentes lleguen a las mismas conclusiones lógicas cuando ya se han puesto de acuerdo en las leyes fundamentales (axiomas) y en los métodos mediante los que otras leyes van a deducirse de aquéllas. Pero hay otra razón para la alta reputación de las matemáticas, pues son las matemáticas las que proporcionan a las ciencias exactas y naturales una cierta medida de seguridad, que sin las matemáticas no podrían alcanzar.
Esto plantea un enigma que siempre ha intrigado a las mentes inquisitivas. ¿Cómo es posible que las matemáticas, que después de todo son un producto de la mente humana independiente de la experiencia, se adecuen de forma tan admirable a los objetos de la realidad? ¿Es la razón humana, mediante puro pensamiento y sin ayuda de la experiencia, capaz de descifrar las propiedades de los objetos reales?
En mi opinión, la respuesta a esta pregunta es, dicho brevemente, ésta: en la medida en que las leyes de las matemáticas se refieren a la realidad, no son ciertas; y en la medida en que son ciertas, no se refieren a la realidad. Creo que este estado de cosas quedó aclarado por primera vez gracias a la nueva orientación en matemáticas que se conoce con el nombre de lógica matemática o «axiomática». El progreso logrado por la axiomática consiste en haber separado claramente lo lógico-formal de su contenido objetivo o intuitivo; según la axiomática, sólo lo lógico-formal constituye el objeto de las matemáticas, que no se interesan por el contenido intuitivo o de otro tipo asociado a ello.
Consideremos desde este punto de vista cualquier axioma de la geometría. Por ejemplo, el siguiente: por dos puntos cualesquiera del espacio pasa siempre una y sólo una línea recta. ¿Cómo debe interpretarse este axioma en el sentido antiguo y en el sentido moderno?
Interpretación antigua: Todo el mundo sabe lo que es una línea recta y lo que es un punto. Si este conocimiento brota de una capacidad de la mente humana o si brota de la experiencia, si lo hace de alguna colaboración entre las dos o de alguna otra fuente, no es algo que corresponda decidir al matemático. Éste deja la pregunta al filósofo. Basado en este conocimiento, que precede a todas las matemáticas, el axioma antes enunciado es, como todos los demás axiomas, autoevidente; es decir, es la expresión de una parte de este conocimiento a priori.
Interpretación moderna: La geometría trata de entidades que son denotadas por las palabras línea recta, punto, etc. Estas entidades no presuponen ningún conocimiento ni intuición, sino sólo la validez de los axiomas, tales como el arriba enunciado, que deben tomarse en un sentido puramente formal, i.e., vacíos de contenido de intuición o experiencia. Todas las demás proposiciones de la geometría son inferencias lógicas a partir de los axiomas (que deben tomarse solamente en sentido nominalista). La materia de la que trata la geometría está definida en primer lugar por los axiomas. Por ello, Schlick, en su libro sobre epistemología, ha caracterizado a los axiomas de manera muy apropiada como «definiciones implícitas».
Esta visión de los axiomas, defendida por la axiomática moderna, elimina de las matemáticas todos los elementos superfluos y disipa así la oscuridad mística que antiguamente rodeaba a los principios de las matemáticas.
Pero una presentación de sus principios así clarificados hace también evidente que las matemáticas como tales no pueden predicar nada sobre los objetos perceptuales o los objetos reales. En geometría axiomática las palabras «punto», «línea recta», etc., representan sólo esquemas conceptuales vacíos. Lo que les da sustancia no es relevante para las matemáticas.
Pero, por otra parte, también es cierto que las matemáticas en general, y la geometría en particular, deben su existencia a la necesidad sentida de aprender algo sobre las relaciones mutuas entre las cosas reales. La misma palabra geometría, que significa medida de la Tierra, lo demuestra. En efecto, la medida terrestre tiene que ver con las posibilidades de disposición de ciertos objetos naturales con respecto a otros, a saber, con partes de la Tierra, líneas de medida, varas de medida, etc. Es evidente que el sistema de conceptos de la geometría axiomática sola no puede hacer ninguna afirmación respecto a las relaciones entre objetos reales de este tipo, que llamaremos cuerpos prácticamente-rígidos. Para poder hacer tales afirmaciones, la geometría debe ser despojada de su mero carácter lógico-formal mediante la coordinación de objetos reales de experiencia con el vacío armazón conceptual de la geometría axiomática. Para conseguirlo, tan sólo necesitamos añadir esta proposición: con respecto a sus posibles disposiciones, los cuerpos sólidos están relacionados de la forma en que lo están los cuerpos en la geometría euclidiana de tres dimensiones. De este modo las proposiciones de Euclides contienen afirmaciones respecto a las relaciones de cuerpos prácticamente-rígidos.
La geometría así completada es evidentemente una ciencia natural; de hecho, podemos considerarla la rama más antigua de la física. Sus afirmaciones descansan esencialmente en inducción a partir de la experiencia, y no solamente en inferencias lógicas. Llamaremos «geometría práctica» a esta geometría completada, y en lo que sigue la distinguiremos de la «geometría puramente axiomática». La cuestión de si la geometría práctica del universo es o no euclidiana tiene un claro significado, y su respuesta sólo puede proporcionarla la experiencia. Toda medida lineal en física es geometría práctica en este sentido, y también lo es la medida geodésica y la medida de distancias astronómicas si llamamos en nuestra ayuda a la ley de experiencia según la cual la luz se propaga en línea recta, y de hecho en una línea recta en el sentido de la geometría práctica.
Yo doy especial importancia a la visión de la geometría que acabo de presentar, porque sin ella no habría podido formular la teoría de la relatividad. Sin ella, habría sido imposible la siguiente reflexión: en un sistema de referencia en rotación con respecto a un sistema inercial, las leyes de disposición de cuerpos rígidos no guardan correspondencia con las reglas de la geometría euclidiana debido a la contracción de Lorentz; así, si admitimos sistemas no inerciales debemos abandonar la geometría euclidiana. El paso decisivo en la transición a las ecuaciones con covariancia general no se habría dado si la interpretación anterior no hubiera servido como un paso intermedio. Si negamos la relación entre el cuerpo de la geometría euclidiana axiomática y el cuerpo prácticamente-rígido de la realidad, llegamos rápidamente a la visión siguiente, que fue mantenida por un pensador tan agudo y profundo como H. Poincaré: la geometría euclidiana destaca sobre todas las demás geometrías axiomáticas imaginables por su simplicidad. Ahora bien, puesto que la geometría axiomática en sí misma no contiene afirmaciones respecto a la realidad que puede ser experimentada, sino que sólo puede hacerlo en combinación con leyes físicas, debería ser posible y razonable retener la geometría euclidiana —cualquiera que pueda ser la naturaleza de la realidad—. Así, si se manifestaran contradicciones entre teoría y experiencia, sería preferible cambiar las leyes físicas antes que cambiar la geometría euclidiana axiomática. Si negamos la relación entre el cuerpo prácticamente-rígido y la geometría, no nos liberaremos fácilmente de la convención según la cual la geometría euclidiana debe retenerse como la más simple. ¿Por qué Poincaré y otros investigadores niegan la equivalencia —que tan directamente se sugiere— entre el cuerpo prácticamente-rígido y el cuerpo de la geometría? 
Sencillamente porque en un examen más detallado los cuerpos sólidos reales en la naturaleza no son rígidos, pues su comportamiento geométrico, es decir, sus posibilidades de disposición relativa, dependen de la temperatura, fuerzas externas, etc. Así, la relación inmediata y original entre geometría y realidad física aparece destruida, y nos vemos llevados hacia la siguiente visión más general, que caracteriza el punto de vista de Poincaré. La geometría (G) no afirma nada sobre las relaciones entre las cosas reales, pero sólo la geometría junto con el contenido (P) de las leyes físicas puede hacerlo. Utilizando símbolos, podemos decir que sólo la suma de (G) + (P) está sujeta al control de la experiencia. Así (G) puede escogerse arbitrariamente, y también pueden escogerse partes de (P); todas estas leyes son convenciones. Todo lo que es necesario para evitar contradicciones es escoger el resto de (P) de modo que (G) y el conjunto de (P) estén en acuerdo con la experiencia. Concebida de esta manera, la geometría axiomática y la parte de la ley natural a la que se ha dado un estatus convencional aparecen como epistemológicamente equivalentes.
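El argumento del sistema en rotación mencionado más arriba puede hacerse cuantitativo: las varas de medir colocadas a lo largo del borde del disco se mueven en la dirección de su longitud y sufren la contracción de Lorentz, mientras que las radiales no, de modo que el cociente entre circunferencia y diámetro medido con varas co-rotantes es πγ en lugar de π. Esbozo ilustrativo en Python (los valores numéricos son hipotéticos):

```python
import math

C_LUZ = 299_792_458.0  # velocidad de la luz, m/s

def cociente_medido(omega, radio):
    """Cociente circunferencia/diámetro medido con varas co-rotantes.

    Las varas del borde se mueven a v = omega*radio y se contraen
    en un factor 1/gamma, de modo que caben más a lo largo del borde;
    las varas radiales, perpendiculares a su movimiento, no se contraen.
    """
    v = omega * radio
    gamma = 1.0 / math.sqrt(1.0 - (v / C_LUZ) ** 2)
    return math.pi * gamma

# En reposo el cociente es el pi euclidiano; con v = 0.6c deja de serlo.
print(cociente_medido(0.0, 1.0))          # 3.14159...
print(cociente_medido(0.6 * C_LUZ, 1.0))  # pi * 1.25 ≈ 3.92699
```

Con una velocidad del borde de 0,6c se tiene γ = 1,25, y el cociente medido, πγ ≈ 3,927, ya no es el euclidiano: las leyes de disposición de los cuerpos rígidos en ese sistema no obedecen a la geometría de Euclides.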
En mi opinión, Poincaré tiene razón _sub specie aeterni_. La idea de la vara de medir y la idea del reloj coordinado con ella no encuentran en el mundo real su correspondencia exacta en la teoría de la relatividad. Es también evidente que el cuerpo sólido y el reloj no desempeñan en el edificio conceptual de la física el papel de elementos irreducibles, sino el de estructuras compuestas que no pueden desempeñar papeles independientes en la física teórica. Pero estoy convencido de que, en la fase actual de desarrollo de la física teórica, estas ideas deben seguir siendo empleadas como ideas independientes, pues seguimos estando lejos de poseer ese conocimiento seguro de los principios teóricos que nos permita hacer construcciones teóricas exactas de cuerpos sólidos y relojes.
Se puede objetar que no hay cuerpos realmente rígidos en la naturaleza, y que por consiguiente las propiedades predicadas de los cuerpos rígidos no se aplican a la realidad física. Pero esta objeción no es en absoluto tan radical como podría parecer en un examen apresurado. En efecto, no es una tarea difícil determinar el estado físico de una vara de medir de forma tan precisa que su comportamiento respecto a otros cuerpos de medir esté suficientemente libre de ambigüedad para permitir que sea sustituida por el cuerpo «rígido». Es a cuerpos de medir de este tipo a los que deben referirse los enunciados sobre cuerpos rígidos.
Toda la geometría práctica se basa en un principio que es accesible a la experiencia, y que ahora trataremos de entender. Llamaremos un tracto a lo que está encerrado entre dos fronteras, marcadas en un cuerpo prácticamente rígido. Imaginemos dos cuerpos prácticamente rígidos, con sendos tractos marcados en ellos. Se dice que estos tractos son «iguales entre sí» si las fronteras de un tracto pueden llevarse a coincidir permanentemente con las fronteras del otro. Ahora suponemos que:
Si se encuentra que dos tractos son iguales una vez y en algún lugar, entonces son iguales siempre y en todo lugar.
No sólo la geometría práctica de Euclides, sino también su más próxima generalización, la geometría práctica de Riemann, y con ella la teoría de la relatividad general, descansan en esta hipótesis. De las razones experimentales que avalan esta hipótesis citaré sólo una. El fenómeno de la propagación de la luz en el espacio vacío asigna un tracto, a saber, el correspondiente camino de la luz, a cada intervalo de tiempo local, y a la inversa. De ello se sigue que la hipótesis anterior para tractos debe ser también válida para intervalos de tiempo de reloj en la teoría de la relatividad. En consecuencia, puede formularse como sigue: si dos relojes ideales marchan al mismo ritmo en algún instante y en algún lugar (estando ambos inmediatamente próximos), siempre marcharán al mismo ritmo, independientemente de dónde y cuándo vuelvan a compararse entre sí. Si esta ley no fuera válida para los relojes reales, las frecuencias propias de átomos separados de un mismo elemento químico no estarían en un acuerdo tan grande como el que muestra la experiencia. La existencia de líneas espectrales estrechas es una convincente prueba experimental del principio de geometría práctica antes mencionado. Éste es, de hecho, el fundamento último que nos permite hablar con sentido de la medida, en el sentido riemanniano de la palabra, del continuo tetradimensional del espacio-tiempo.
Según la visión que se está defendiendo aquí, la cuestión de si la estructura de este continuo es euclidiana, o si está de acuerdo con el esquema general de Riemann o cualquier otro es, propiamente hablando, una cuestión física que debe ser respondida por la experiencia, y no una cuestión de mera convención que debe ser seleccionada sobre bases prácticas. La geometría de Riemann será la correcta si las leyes de disposición de cuerpos prácticamente-rígidos son transformables en las de los cuerpos de la geometría euclidiana con una exactitud que aumenta a medida que disminuyen las dimensiones de la parte del espacio-tiempo bajo consideración.
Es cierto que esta interpretación física que se propone de la geometría se viene abajo cuando se aplica inmediatamente a espacios de un orden de magnitud sub-molecular. Pero aun así, incluso en cuestiones relativas a la constitución de las partículas elementales, retiene parte de su importancia. Pues incluso cuando se trata de describir las partículas elementales eléctricas que constituyen la materia, sigue siendo posible tratar de dar relevancia física a aquellas ideas de los campos que han sido definidas físicamente con el objetivo de describir el comportamiento geométrico de cuerpos que son grandes comparados con las moléculas. Sólo el éxito puede decidir si es o no justificable dicho intento, que postula realidad física para los principios fundamentales de la geometría de Riemann fuera del dominio de sus definiciones físicas. Podría resultar que esta extrapolación no tenga mejor garantía que la extrapolación de la idea de temperatura a partes de un cuerpo de un orden de magnitud molecular.
Menos problemático parece extender las ideas de la geometría práctica a espacios de un orden de magnitud cósmico. Podría objetarse, por supuesto, que una construcción compuesta de varas sólidas se aparta cada vez más de la rigidez ideal a medida que aumenta su extensión espacial. Pero creo que difícilmente será posible atribuir un significado fundamental a esta objeción. Por consiguiente, la pregunta de si el universo es espacialmente finito o no, me parece muy significativa en el sentido de la geometría práctica. Ni siquiera considero imposible que esta pregunta sea respondida por la astronomía antes de que pase mucho tiempo. Déjenme recordar lo que la teoría de la relatividad general enseña a este respecto. Ofrece dos posibilidades:
1. El universo es espacialmente infinito. Esto sólo puede suceder si la densidad media de materia en el universo, concentrada en las estrellas, se anula, i.e. si la razón entre la masa total de las estrellas y el espacio en el que están dispersas se aproxima indefinidamente al valor cero cuando los espacios tomados en consideración son cada vez mayores.
2. El universo es espacialmente finito. Esto debe suceder si hay una densidad media de materia ponderable en el universo que es diferente de cero. Cuanto menor es esta densidad media, mayor es el volumen del universo.
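En notación moderna (ajena al texto original), la relación entre la densidad media y el tamaño del universo esférico estático que Einstein propuso puede esbozarse así: el radio de curvatura $R$ y el volumen total $V$ del espacio esférico satisfacen, aproximadamente,

```latex
% Universo estático de Einstein (notación moderna; esbozo, no procede del texto):
% a menor densidad media \rho, mayor radio R y mayor volumen V.
R = \frac{c}{\sqrt{4\pi G \rho}}, \qquad V = 2\pi^{2} R^{3} .
```

Estas fórmulas recogen cuantitativamente la afirmación de que, cuanto menor es la densidad media, mayor es el volumen del universo.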
No debo dejar de mencionar que puede aducirse un argumento teórico a favor de la hipótesis de un universo finito. La teoría de la relatividad general enseña que la inercia de un cuerpo dado es mayor cuanto mayor es la masa ponderable que hay en su proximidad; parece así muy natural reducir el efecto total de la inercia de un cuerpo a acción y reacción entre él y los demás cuerpos en el universo, como de hecho lo ha sido la gravedad desde la época de Newton. De las ecuaciones de la teoría de la relatividad general puede deducirse que esta reducción total de la inercia a acción recíproca entre masas —como requiere E. Mach, por ejemplo— es posible sólo si el universo es espacialmente finito.
Este argumento no impresiona a muchos físicos y astrónomos. Sólo la experiencia puede decidir cuál de las dos posibilidades se realiza en la naturaleza. ¿Cómo puede la experiencia proporcionar una respuesta? A primera vista parecería posible determinar la densidad media de materia mediante la observación de la parte del universo que es accesible a nuestra percepción. Esta esperanza es ilusoria. La distribución de las estrellas visibles es extraordinariamente irregular, de modo que no podemos aventurarnos a afirmar que la densidad media de materia estelar en el universo es igual a, digamos, la densidad media en la Vía Láctea. Por grande que pueda ser el espacio examinado, no podríamos quedar convencidos de que no haya más estrellas más allá de dicho espacio. Por lo tanto, parece imposible estimar la densidad media. Pero hay otro camino, en mi opinión más practicable aunque también presenta grandes dificultades. En efecto, si examinamos las implicaciones de la teoría de la relatividad general que son accesibles a la experiencia, y las comparamos con las implicaciones de la teoría newtoniana, encontramos ante todo una desviación que se manifiesta en la proximidad a una masa gravitante, desviación que ha sido confirmada en el caso del planeta Mercurio. Pero si el universo es espacialmente finito hay una segunda desviación respecto de la teoría newtoniana que, en el lenguaje de esta última, puede expresarse así: el campo gravitatorio es de tal naturaleza que parece producido no sólo por las masas ponderables sino también por una densidad de masa de signo negativo distribuida uniformemente a lo largo del espacio. Puesto que esta densidad de masa ficticia tendría que ser enormemente pequeña, sólo podría dejar sentir su presencia en sistemas gravitantes de muy gran extensión.
Suponiendo que conocemos, digamos, la distribución estadística de estrellas en la Vía Láctea, así como sus masas, entonces podemos calcular mediante la ley de Newton el campo gravitatorio y las velocidades medias que deben tener aquéllas para que la Vía Láctea no colapse bajo la mutua atracción de sus estrellas, sino que se mantenga en su extensión actual. Ahora bien, si las velocidades reales de las estrellas que puedan medirse fueran menores que las velocidades calculadas, entonces tendríamos una prueba de que las atracciones reales a grandes distancias son menores que las que da la ley de Newton. A partir de dicha desviación podría demostrarse indirectamente que el universo es finito. Incluso sería posible estimar su magnitud espacial.
¿Podemos imaginarnos un universo tridimensional que es finito pero ilimitado?
La respuesta habitual a esta pregunta es «No», pero ésta no es la respuesta correcta. El propósito de los comentarios siguientes es mostrar que la respuesta debería ser «Sí». Quiero mostrar que sin ninguna dificultad extraordinaria podemos ilustrar la teoría de un universo finito por medio de una imagen mental a la que, con cierta práctica, pronto nos acostumbraremos.
Antes de nada, una observación de carácter epistemológico. Una teoría físico-geométrica como tal no puede ser representada directamente, al ser meramente un sistema de conceptos. Pero estos conceptos se utilizan con el propósito de reunir en la mente una multiplicidad de experiencias sensoriales reales o imaginarias. «Visualizar» una teoría, o acomodarla en la mente, significa por lo tanto dar una representación de esa abundancia de experiencias para las que la teoría ofrece una ordenación esquemática. En el caso presente tenemos que preguntarnos cómo podemos representar esa relación de cuerpos sólidos con respecto a su disposición recíproca (contacto) que corresponde a la teoría de un universo finito. No hay realmente nada nuevo en lo que tengo que decir sobre esto, pero innumerables preguntas que se me hacen me demuestran que los requerimientos de quienes ansían el conocimiento de estas materias no han sido aún completamente satisfechos.
Por lo tanto, ¿me perdonarán los iniciados si parte de lo que voy a exponer es conocido desde hace tiempo?
¿Qué queremos expresar cuando decimos que nuestro espacio es infinito? Sencillamente que podemos colocar un número cualquiera de cuerpos del mismo tamaño lado a lado sin llenar nunca el espacio. Supongamos que disponemos de muchísimos cubos de madera, todos ellos del mismo tamaño. De acuerdo con la geometría euclidiana podemos colocarlos encima, al lado y detrás uno de otro para llenar una región del espacio de cualquier dimensión; podríamos seguir añadiendo más y más cubos sin llegar a encontrar que ya no hay lugar para más. Eso es lo que queremos expresar cuando decimos que el espacio es infinito. Sería mejor decir que el espacio es infinito con respecto a cuerpos prácticamente-rígidos, suponiendo que las leyes de disposición para dichos cuerpos están dadas por la geometría euclidiana.
Otro ejemplo de un continuo infinito es el plano. En una superficie plana podemos colocar cuadrados de cartulina de modo que cada lado de un cuadrado tenga adyacente el lado de otro cuadrado. La construcción nunca termina; siempre podemos seguir añadiendo cuadrados —si sus leyes de disposición corresponden a las de figuras planas en la geometría euclidiana—. Por consiguiente, el plano es infinito con respecto a los cuadrados de cartulina. En consecuencia decimos que el plano es un continuo infinito de dos dimensiones, y el espacio es un continuo infinito de tres dimensiones. Creo que puedo suponer conocido lo que se entiende aquí por el número de dimensiones.
Consideremos ahora un ejemplo de un continuo bidimensional que es finito pero ilimitado. Imaginemos la superficie de un globo grande y una cantidad de pequeños discos de papel, todos ellos del mismo tamaño. Coloquemos uno de los discos en un lugar cualquiera de la superficie del globo. Si movemos el disco a cualquier lugar que queramos, sobre la superficie del globo, no llegamos a un límite o frontera en ningún lugar del recorrido. Por ello decimos que la superficie esférica del globo es un continuo ilimitado. Además, la superficie esférica es un continuo finito. En efecto, si pegamos los discos de papel en el globo, de tal forma que nunca se solapen, la superficie del globo estará al final tan llena que ya no quedará lugar para otro disco. Esto significa sencillamente que la superficie esférica del globo es finita con respecto a los discos de papel. Además, la superficie esférica es un continuo no-euclidiano de dos dimensiones; es decir, las leyes de disposición para figuras rígidas que yacen en ella no concuerdan con las del plano euclidiano. Esto puede mostrarse de la siguiente manera. Coloquemos un disco de papel sobre la superficie esférica, y coloquemos en círculo a su alrededor otros seis discos, cada uno de los cuales está rodeado a su vez por seis discos, y así sucesivamente. Si hacemos esta construcción en una superficie plana, tenemos una disposición ininterrumpida en la que hay seis discos tangentes a cada disco, excepto a los que yacen en el borde.
Sobre la superficie esférica la construcción también parece prometer éxito al principio, y cuanto menor es el radio de los discos en relación con el de la esfera, más prometedora parece. Pero a medida que la construcción avanza se hace cada vez más patente que la disposición de los discos a la manera indicada, sin interrupción, no es posible, como debería serlo según la geometría euclidiana de la superficie plana. De esta manera, criaturas que no pudieran dejar la superficie esférica, y ni siquiera pudieran mirar desde la superficie esférica al espacio tridimensional, podrían descubrir, simplemente experimentando con discos, que su «espacio» bidimensional no es euclidiano, sino un espacio esférico.
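Un cálculo esférico elemental (no incluido en el texto) hace explícito por qué la construcción falla: si los discos tienen radio geodésico $\rho$ sobre una esfera de radio $R$, los centros de dos discos tangentes distan $2\rho$, y el ángulo $A$ que subtienden, vistos desde el centro del disco central, dos vecinos tangentes entre sí cumple

```latex
\cos A = \frac{\cos(2\rho/R)}{1+\cos(2\rho/R)} \;<\; \frac{1}{2}
\quad\Longrightarrow\quad A > 60^{\circ},
```

de modo que seis discos alrededor de uno central requieren más de 360° y acaban solapándose; en el plano ($R \to \infty$) se obtiene exactamente $A = 60^{\circ}$ y la disposición encaja sin interrupción.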
De los últimos resultados de la teoría de la relatividad parece probable que nuestro espacio tridimensional es también aproximadamente esférico, es decir, que las leyes de disposición de cuerpos rígidos en el mismo no están dadas por la geometría euclidiana sino aproximadamente por la geometría esférica, siempre que consideremos regiones del espacio que sean suficientemente grandes. Éste es el momento en donde la imaginación del lector se paraliza. «Nadie puede imaginar esto», grita indignado. «Puede decirse, pero no puede pensarse. Yo puedo imaginarme una superficie esférica bastante bien, pero nada parecido a ella en tres dimensiones».
Debemos tratar de superar esta barrera mental, y el lector paciente verá que en absoluto es una tarea particularmente difícil. Con este objetivo dirigiremos antes nuestra atención una vez más a la geometría de las superficies esféricas de dos dimensiones. En la figura adjunta sea _K_ la superficie esférica, tangente en _S_ a un plano, _E_ , que por facilidad de presentación se muestra en el dibujo como una superficie acotada. Sea _L_ un disco en la superficie esférica. Imaginemos ahora que en el punto _N_ de la superficie esférica, diametralmente opuesto a _S_ , hay un punto luminoso que arroja una sombra _L_ ' del disco _L_ sobre el plano _E_. Si el disco en la esfera _K_ se mueve, su sombra _L_ ' en el plano _E_ también lo hace. Cuando el disco _L_ está en _S_ , coincide casi exactamente con su sombra. Si se mueve sobre la superficie esférica alejándose de _S_ hacia arriba, el disco sombra _L_ ' en el plano también se mueve alejándose de _S_ en el plano, y haciéndose cada vez más grande. A medida que el disco _L_ se aproxima al punto luminoso _N_ , la sombra se aleja hacia el infinito, y se hace infinitamente grande.
Planteemos ahora la pregunta: ¿cuáles son las leyes de disposición de las sombras-disco _L_ ' en el plano _E_? Evidentemente son exactamente las mismas que las leyes de disposición de los discos _L_ en la superficie esférica. Por cada figura original en _K_ hay una correspondiente figura-sombra en _E_. Si dos discos en _K_ son tangentes, sus sombras en _E_ también lo son. La geometría de sombras en el plano coincide con la geometría de discos en la esfera. Si llamamos figuras rígidas a las sombras-disco, entonces la geometría esférica es válida en el plano _E_ con respecto a estas figuras rígidas. Además, el plano es finito con respecto a las sombras-disco, puesto que sólo un número finito de discos encuentra cabida en el plano.
En este momento alguien dirá: «Esto es absurdo. Las sombras-disco _no_ son figuras rígidas. Sólo tenemos que mover una regla de un metro por el plano _E_ para convencernos de que las sombras aumentan constantemente de tamaño a medida que se alejan de _S_ en el plano hacia el infinito». Pero ¿qué pasaría si la regla de un metro se comportara en el plano _E_ de la misma forma que las sombras-disco _L_ '? Sería entonces imposible demostrar que las sombras aumentan de tamaño a medida que se alejan de _S_ ; tal afirmación ya no tendría ningún significado. De hecho, la única afirmación objetiva que puede hacerse sobre las sombras-disco es precisamente ésta: que están relacionadas exactamente de la misma forma que lo están los discos rígidos en la superficie esférica en el sentido de la geometría euclidiana.
Tenemos que entender muy bien que nuestro enunciado respecto al crecimiento de las sombras-disco, a medida que se alejan de _S_ hacia el infinito, no tiene en sí mismo un significado objetivo en la medida en que no podemos emplear cuerpos rígidos euclidianos que puedan moverse en el plano _E_ con el fin de comparar los tamaños de las sombras-disco. Con respecto a las leyes de disposición de las sombras _L_ ', el punto _S_ no tiene ningún privilegio especial en el plano más que en la superficie esférica.
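La proyección descrita es la proyección estereográfica desde $N$ sobre el plano tangente en $S$; sus fórmulas, que el texto no da explícitamente, cuantifican el crecimiento de las sombras. Un punto de la esfera a distancia angular $\theta$ de $S$ se proyecta a distancia $d$ de $S$, con factor de aumento local $m$:

```latex
d = 2R\,\tan\frac{\theta}{2}, \qquad
m(\theta) = \frac{1}{\cos^{2}(\theta/2)} ,
```

de modo que $m(0) = 1$ (el disco en $S$ coincide casi exactamente con su sombra) y $m \to \infty$ cuando $\theta \to \pi$, es decir, cuando el disco se aproxima al punto luminoso $N$.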
La representación que acabamos de dar de la geometría esférica en el plano es importante para nosotros, porque permite ser transferida inmediatamente al caso tridimensional.
Imaginemos un punto _S_ de nuestro espacio, y un gran número de pequeñas esferas, _L_ ', que pueden ser puestas en contacto. Pero estas esferas ya no van a ser rígidas en el sentido de la geometría euclidiana; su radio va a aumentar (en el sentido de la geometría euclidiana) cuando se alejan de _S_ hacia el infinito, y este aumento va a tener lugar en acuerdo exacto con la misma ley que se aplica al aumento de los radios de las sombras-disco _L_ ' en el plano.
Después de haber obtenido una vívida imagen mental del comportamiento de nuestras esferas _L_ ', supongamos que en nuestro espacio no hay cuerpos rígidos en absoluto en el sentido de la geometría euclidiana, sino sólo cuerpos que se comportan como nuestras esferas _L_ '. Entonces tendremos una vívida representación del espacio esférico tridimensional, o más bien de la geometría esférica tridimensional. Aquí nuestras esferas deben ser llamadas esferas «rígidas». Su aumento en tamaño cuando se alejan de _S_ no será detectado midiendo con varas de medir, igual que sucedía en el caso de las sombras-disco en _E_ , porque los patrones de medida se comportarán de la misma forma que las esferas. El espacio es homogéneo, es decir, las mismas configuraciones esféricas son posibles en el entorno de cada punto. Nuestro espacio es finito porque, a consecuencia del «crecimiento» de las esferas, solo un número finito de ellas puede encontrar cabida en el espacio.
De este modo, utilizando como plataforma la práctica de pensamiento y visualización que nos ofrece la geometría euclidiana, hemos adquirido una imagen mental de la geometría esférica. Podemos impartir sin dificultad más profundidad y vigor a estas ideas realizando construcciones imaginarias especiales. Tampoco sería difícil representar de una manera similar el caso de lo que se denomina geometría elíptica. Mi único propósito hoy ha sido mostrar que la facultad humana de visualización no está abocada en absoluto a capitular ante la geometría no-euclidiana.
# IV
## _EL SIGNIFICADO DE LA RELATIVIDAD_ (ANTOLOGÍA)
Trescientos años antes de Einstein, Galileo Galilei desarrolló una teoría de la relatividad que llegó a formar uno de los pilares de la mecánica de Isaac Newton. En _El significado de la relatividad_ , Einstein presenta la relatividad galileana como una precursora, no sólo del trabajo de Newton, sino también del suyo.
La relatividad de Galileo se basa en la idea simple e intuitiva de que el tiempo fluye de manera constante para todos los observadores independientemente de su estado de movimiento. Así, Galileo anticipa la primera ley del movimiento de Newton: los objetos en movimiento se mantendrán a velocidad constante en módulo y dirección a menos que sobre ellos actúe una fuerza exterior.
A pesar de toda la aparente complejidad matemática de _El significado de la relatividad_ , los objetivos de Einstein son bastante modestos. Simplemente demuestra que mediciones en sistemas de referencia tanto estacionarios como en movimiento darán resultados que satisfagan las leyes de Newton. Al final del capítulo, prosigue mostrando cómo un conjunto de transformaciones, las sugeridas por Hendrik Lorentz, son necesarias para hacer que las ecuaciones de Maxwell de electricidad y magnetismo funcionen en sistemas de referencia en movimiento.
La diferencia entre las transformaciones de Galileo y las de Lorentz reside en que en las primeras el tiempo fluye de manera constante para todos los observadores, mientras que las segundas dan como resultado pasos de tiempo distintos para observadores en distintos estados de movimiento.
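La diferencia puede esbozarse con las fórmulas de ambas transformaciones (en notación moderna, que esta introducción no emplea), para un sistema que se mueve con velocidad $v$ a lo largo del eje $x$:

```latex
\text{Galileo:}\quad x' = x - vt, \quad t' = t;
\qquad
\text{Lorentz:}\quad x' = \gamma\,(x - vt), \quad
t' = \gamma\!\left(t - \frac{v\,x}{c^{2}}\right),
\quad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
```

Para $v \ll c$ se tiene $\gamma \approx 1$ y las transformaciones de Lorentz se reducen a las de Galileo.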
¿Y qué queremos decir con «un flujo constante de tiempo»? Resulta simple decir que un evento precede a otro, o que ocurren simultáneamente, pero ¿cómo medimos el tiempo si no es a través del mismo tiempo? Teniendo en cuenta que la luz viaja a la misma velocidad independientemente del estado de movimiento del observador, Einstein sugiere usar el reflejo de la emisión de un haz de luz como reloj. Este simple experimento mental rinde algunos resultados muy sorprendentes.
Por ejemplo, se llega a la conclusión de que un observador que mide los eventos sucedidos en un tren en movimiento que pasa frente a él verá cómo sus pasajeros se mueven más despacio: sus corazones laten más despacio, los relojes de pared van más despacio, y todos los demás tiempos se ralentizan también. Del mismo modo, un pasajero del tren se verá a sí mismo de manera completamente normal, pero verá cómo el reloj de la estación de tren funciona más lentamente. Dado que la luz siempre tiene que viajar a velocidad constante y el tiempo está relacionado con la velocidad de movimiento, entonces las longitudes a lo largo de la dirección de movimiento deben estar relacionadas también. En el día a día, estos efectos no son evidentes, y sólo se ponen de manifiesto cuando hablamos de velocidades cercanas a las de la luz. Por consiguiente, a velocidades normales, las relatividades de Einstein y Galileo se comportan exactamente de la misma manera.
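El reloj de luz mencionado permite derivar el factor de ralentización en unas pocas líneas (derivación estándar, no incluida en esta introducción): si el haz recorre la distancia $d$ entre espejos dentro del tren, para el observador del andén el pulso describe una diagonal, y de la constancia de $c$ se sigue

```latex
\left(\frac{c\,\Delta t}{2}\right)^{2} = d^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2},
\quad \Delta\tau = \frac{2d}{c}
\;\Longrightarrow\;
\Delta t = \frac{\Delta\tau}{\sqrt{1 - v^{2}/c^{2}}},
\qquad L = L_{0}\sqrt{1 - v^{2}/c^{2}} .
```

El primer resultado es la dilatación temporal; el segundo, la contracción de las longitudes a lo largo de la dirección de movimiento.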
En este trabajo, Einstein argumenta de manera precisa que aunque prácticamente todas nuestras experiencias sugieren que Galileo y Newton estaban en lo cierto, la unificación de distintas ramas de la física requiere una nueva teoría: la relatividad especial.
### 1
### EL ESPACIO Y EL TIEMPO EN LA FÍSICA PRE-RELATIVISTA*
La teoría de la relatividad está íntimamente relacionada con la teoría del espacio y el tiempo. Por ello empezaré con una breve investigación sobre el origen de nuestras ideas de espacio y tiempo, aunque al hacerlo sé que introduzco un tema controvertido. El objeto de toda ciencia, ya sea ciencia natural o psicología, consiste en coordinar nuestras experiencias para que formen un sistema lógico. ¿Cuál es la relación entre nuestras ideas habituales de espacio y tiempo y la naturaleza de nuestras experiencias?
Las experiencias de cada uno de nosotros se nos presentan ordenadas en una serie de sucesos; en dicha serie, los sucesos individuales que recordamos parecen estar ordenados según el criterio de «anterior» y «posterior», que ya no admite más análisis. Por consiguiente, existe para cada individuo un tiempo-yo, o tiempo subjetivo, que no es medible en sí mismo. Yo puedo, de hecho, asociar un número a cada suceso, de modo que con el suceso posterior haya asociado un número más alto que con el anterior, pero la naturaleza de esta asociación puede ser completamente arbitraria. Puedo definir esta asociación por medio de un reloj, comparando el orden de los sucesos que proporciona el reloj con el orden de la serie de sucesos dada. Entendemos por un reloj algo que proporciona una serie de sucesos que pueden contarse y que tiene además otras propiedades de las que luego hablaremos.
Con la ayuda del lenguaje diferentes individuos pueden, en cierta medida, comparar sus experiencias. El resultado es que hay correspondencia entre algunas percepciones sensoriales de individuos diferentes, mientras que para otras percepciones no puede establecerse tal correspondencia. Estamos acostumbrados a considerar reales aquellas percepciones sensoriales que son comunes a individuos diferentes, y que por consiguiente son, en cierta medida, impersonales. Las ciencias naturales, y en particular la más fundamental de ellas, la física, trabajan con tales percepciones sensoriales. La idea de cuerpo físico, en particular de cuerpo rígido, consiste en un complejo relativamente constante de tales percepciones sensoriales. Un reloj es también un cuerpo, o un sistema, en este mismo sentido, con la propiedad adicional de que todos los elementos que forman parte de la serie de sucesos que cuenta pueden considerarse iguales.
La única justificación para nuestros conceptos y sistemas de conceptos es que sirven para representar el complejo de nuestras experiencias; más allá de esto, ellos no tienen legitimidad. Estoy convencido de que los filósofos han tenido un efecto dañino en el progreso del pensamiento científico al sacar ciertos conceptos fundamentales fuera del dominio del empirismo, donde están bajo nuestro control, y llevarlos a las alturas intangibles del a priori. Pues aun cuando el universo de ideas no puede deducirse de la experiencia por medios lógicos, sino que es, en cierto sentido, una creación de la mente humana, sin la cual no hay ciencia posible, este universo de ideas es en cualquier caso tan dependiente de la naturaleza de nuestras experiencias como nuestras vestimentas lo son de la forma del cuerpo humano. Esto es particularmente cierto de nuestros conceptos de tiempo y espacio, a los que los físicos se han visto obligados por los hechos a bajar del Olimpo del a priori para ajustarlos de modo que resulten útiles.
Llegamos ahora a nuestros conceptos y juicios concernientes al espacio. También es esencial aquí prestar una gran atención a la relación entre la experiencia y nuestros conceptos. Creo que Poincaré reconoció claramente la verdad en la exposición que hizo en su libro _La Ciencia y la Hipótesis_. Entre todos los cambios que podemos percibir en un cuerpo rígido, los que pueden ser cancelados por movimientos voluntarios de nuestro cuerpo se caracterizan por su simplicidad; Poincaré les llama cambios de posición. Mediante simples cambios de posición podemos poner dos cuerpos en contacto. Los teoremas de congruencia, fundamentales en geometría, tienen que ver con las leyes que gobiernan tales cambios de posición. En el caso del concepto de espacio parece esencial lo siguiente. Podemos formar nuevos cuerpos juntando cuerpos _B_ , _C_ ,... a un cuerpo _A_ ; decimos entonces que _continuamos_ el cuerpo _A_. Podemos continuar el cuerpo _A_ de tal manera que entre en contacto con cualquier otro cuerpo _X_. Podemos llamar «espacio del cuerpo _A_ » al conjunto de todas las continuaciones del cuerpo _A._ Entonces es cierto que todos los cuerpos están en el «espacio del cuerpo _A_ (arbitrariamente escogido)». En este sentido no podemos hablar de espacio en abstracto sino sólo del «espacio perteneciente a un cuerpo _A_ ». La corteza terrestre desempeña un papel tan dominante en nuestra vida cotidiana, cuando juzgamos las posiciones relativas de los cuerpos, que nos ha llevado a una concepción abstracta del espacio que ciertamente no puede defenderse. Para liberarnos de este error fatal hablaremos solamente de «cuerpos de referencia», o «espacio de referencia». Este refinamiento de conceptos sólo llegó a hacerse necesario con la teoría de la relatividad general, como veremos más tarde.
No voy a entrar en detalles respecto a aquellas propiedades del espacio de referencia que llevan a nuestra idea de los puntos como elementos del espacio, y al espacio como un continuo. Tampoco intentaré un análisis más detallado de las propiedades del espacio que justifican la concepción de series continuas de puntos, o líneas. Si se dan por supuestos estos conceptos, y su relación con los cuerpos sólidos de la experiencia, entonces es fácil explicar lo que entendemos por la tridimensionalidad del espacio: a cada punto se le pueden asociar tres números _x_ 1, _x_ 2, _x_ 3 (coordenadas) de tal manera que esta asociación es biunívoca, y además _x_ 1, _x_ 2, _x_ 3 varían de forma continua cuando el cuerpo describe una serie continua de puntos (una línea).
En la física pre-relativista se supone que las leyes de configuración de los cuerpos sólidos ideales son compatibles con la geometría euclidiana. Lo que esto significa puede expresarse así: dos puntos marcados en un cuerpo rígido definen un _intervalo_. Tal intervalo puede orientarse, en reposo, de múltiples maneras con respecto a nuestro espacio de referencia. Si ahora los puntos del espacio pueden referirse a coordenadas $x_1, x_2, x_3$ de tal manera que las diferencias de las coordenadas, $\Delta x_1, \Delta x_2, \Delta x_3$, de los dos extremos del intervalo dan la misma suma de cuadrados

$$s^2 = \Delta x_1^2 + \Delta x_2^2 + \Delta x_3^2 \qquad (1)$$

para cada orientación del intervalo, entonces el espacio de referencia se denomina euclidiano, y las coordenadas se denominan cartesianas. En realidad basta con hacer esta hipótesis en el límite de un intervalo infinitamente pequeño. En esta hipótesis hay implícito algo mucho menos especial, sobre lo que debemos llamar la atención por su importancia fundamental. En primer lugar, se supone que podemos mover un cuerpo rígido ideal de una manera arbitraria. En segundo lugar, se supone que el comportamiento de los cuerpos rígidos ideales respecto a la orientación es independiente del material de los cuerpos y de sus cambios de posición, en el sentido de que si dos intervalos pueden hacerse coincidir una vez, entonces pueden hacerse coincidir en cualquier instante y lugar. Ambas hipótesis, que son de importancia fundamental para la geometría, y en especial para las medidas físicas, surgen de forma natural de la experiencia. En la teoría de la relatividad general sólo hay que suponer su validez para cuerpos y espacios de referencia que son infinitamente pequeños comparados con las dimensiones astronómicas.
Llamaremos a la cantidad $s$ longitud del intervalo. Para que pueda ser unívocamente determinada es necesario fijar arbitrariamente la longitud de un intervalo definido; por ejemplo, podemos hacerlo igual a 1 (unidad de longitud). Entonces pueden determinarse las longitudes de todos los demás intervalos. Si hacemos que las $x_\nu$ dependan linealmente de un parámetro $\lambda$,

$$x_\nu = a_\nu + \lambda\, b_\nu ,$$
obtenemos una línea que tiene todas las propiedades de una línea recta de la geometría euclidiana. En particular, se deduce fácilmente que tendiendo _n_ veces el intervalo _s_ en línea recta se obtiene un intervalo de longitud _n · s_. Por lo tanto, una longitud significa el resultado de una medición realizada a lo largo de una línea recta por medio de una vara de medir unidad. Tiene un significado que es tan independiente del sistema de coordenadas como lo es el de línea recta, como veremos a continuación.
Llegamos ahora a una línea de pensamiento que desempeña un papel similar en las teorías de la relatividad especial y general. Planteamos la pregunta: además de las coordenadas cartesianas que hemos utilizado, ¿hay otras coordenadas equivalentes? Un intervalo tiene un significado físico independiente de la elección de coordenadas, y lo mismo sucede con la superficie esférica que obtenemos como el lugar geométrico de los puntos extremos de todos los intervalos iguales que tendemos a partir de un punto arbitrario de nuestro espacio de referencia. Si $x_\nu$ y $x'_\nu$ ($\nu$ de 1 a 3) son coordenadas cartesianas de nuestro espacio de referencia, entonces la superficie esférica se expresará en nuestros dos sistemas de coordenadas por las ecuaciones

$$\sum_\nu \Delta x_\nu^2 = \text{const.} \qquad (2)$$

$$\sum_\nu \Delta x_\nu'^2 = \text{const.} \qquad (2a)$$

¿Cómo deben expresarse las $x'_\nu$ en función de las $x_\nu$ para que las ecuaciones (2) y (2a) puedan ser equivalentes? Considerando las $x'_\nu$ expresadas en función de las $x_\nu$, podemos escribir, por el teorema de Taylor, para valores pequeños de $\Delta x_\nu$,

$$\Delta x'_\nu = \sum_\alpha \frac{\partial x'_\nu}{\partial x_\alpha}\,\Delta x_\alpha + \frac{1}{2}\sum_{\alpha\beta}\frac{\partial^2 x'_\nu}{\partial x_\alpha\,\partial x_\beta}\,\Delta x_\alpha\,\Delta x_\beta + \cdots$$
Si sustituimos (2a) en esta ecuación y la comparamos con (1), vemos que las $x'_\nu$ deben ser funciones lineales de las $x_\nu$. Si, por consiguiente, hacemos

$$x'_\nu = a_\nu + \sum_\alpha b_{\nu\alpha}\, x_\alpha \qquad (3)$$

o

$$\Delta x'_\nu = \sum_\alpha b_{\nu\alpha}\,\Delta x_\alpha \qquad (3a)$$

entonces la equivalencia de las ecuaciones (2) y (2a) se expresa en la forma

$$\sum_\nu \Delta x_\nu'^2 = \lambda \sum_\nu \Delta x_\nu^2 \qquad (\lambda\ \text{independiente de las}\ \Delta x_\nu) \qquad (2b)$$

Se sigue así que $\lambda$ debe ser una constante. Si hacemos $\lambda = 1$, entonces (2b) y (3a) proporcionan las condiciones

$$\sum_\nu b_{\nu\alpha}\, b_{\nu\beta} = \delta_{\alpha\beta} \qquad (4)$$
en donde $\delta_{\alpha\beta} = 1$ o $\delta_{\alpha\beta} = 0$, según sea $\alpha = \beta$ o $\alpha \neq \beta$. Las condiciones (4) se denominan condiciones de ortogonalidad, y las transformaciones (3), (4) son transformaciones lineales ortogonales. Si estipulamos que $s^2 = \sum_\nu \Delta x_\nu^2$ sea igual al cuadrado de la longitud en todo sistema de coordenadas, y si siempre medimos con la misma unidad de escala, entonces $\lambda$ debe ser igual a 1. Por lo tanto, las transformaciones lineales ortogonales son las únicas mediante las que podemos pasar en nuestro espacio de referencia de un sistema de coordenadas cartesianas a otro. Vemos que al aplicar tales transformaciones las ecuaciones de una línea recta se convierten en ecuaciones de una línea recta. Invirtiendo las ecuaciones (3a), multiplicando ambos miembros por $b_{\nu\beta}$ y sumando para todas las $\nu$, obtenemos

$$\Delta x_\beta = \sum_\nu b_{\nu\beta}\,\Delta x'_\nu \qquad (5)$$
Los mismos coeficientes, _b_ , determinan también la sustitución inversa de ∆ _x_ _ν_. Geométricamente, _b_ _να_ es el coseno del ángulo entre el eje _x'_ _ν_ y el eje _x_ _α_.
Para resumir, podemos decir que en geometría euclidiana hay (en un espacio de referencia dado) sistemas de coordenadas privilegiados, los sistemas cartesianos, que se transforman unos en otros mediante transformaciones lineales ortogonales. La distancia _s_ entre dos puntos de nuestro espacio de referencia, medida por una vara de medir, se expresa en tales coordenadas de una manera particularmente simple. Toda la geometría puede fundarse en esta idea de distancia. En el tratamiento presente, la geometría está relacionada con las cosas reales (cuerpos rígidos), y sus teoremas son enunciados que se refieren al comportamiento de estas cosas, que pueden probarse verdaderos o falsos.
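Como ilustración ajena al texto original, el siguiente esbozo en Python (con NumPy) comprueba numéricamente, para una rotación concreta, las condiciones de ortogonalidad (4) y la invariancia de la suma de cuadrados de las diferencias de coordenadas:

```python
import numpy as np

def rotacion_eje_x3(theta):
    """Transformación lineal ortogonal: rotación de ángulo theta en torno al eje x3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

b = rotacion_eje_x3(0.7)  # coeficientes b_{nu alpha} de la transformación

# Condiciones de ortogonalidad (4): sum_nu b_{nu alpha} b_{nu beta} = delta_{alpha beta}
assert np.allclose(b.T @ b, np.eye(3))

# Invariancia del intervalo: sum (dx')^2 = sum (dx)^2
dx = np.array([1.0, -2.0, 0.5])
dx_prima = b @ dx
assert np.isclose(np.sum(dx_prima**2), np.sum(dx**2))

# b_{nu alpha} es el coseno del ángulo entre el eje x'_nu y el eje x_alpha
assert np.isclose(b[0, 0], np.cos(0.7))
```

La misma comprobación vale para cualquier matriz ortogonal; si se multiplicara $b$ por un factor $\lambda \neq 1$, las aserciones fallarían, en consonancia con la exigencia $\lambda = 1$ del texto.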
Normalmente estamos acostumbrados a estudiar la geometría divorciada de cualquier relación entre sus conceptos y la experiencia. Hay ventajas en aislar lo que es puramente lógico e independiente de lo que es, en principio, empirismo incompleto. Esto es satisfactorio para el matemático puro. Él se satisface si puede deducir correctamente sus teoremas de los axiomas, sin errores de lógica. La cuestión de si la geometría de Euclides es verdadera o no, no le incumbe. Pero para nuestro propósito es necesario asociar los conceptos fundamentales de la geometría con objetos naturales; sin dicha asociación la geometría no tiene valor para el físico. El físico está interesado en la cuestión de si los teoremas de la geometría son verdaderos o no. Desde este punto de vista la geometría euclidiana afirma algo más que las meras deducciones derivadas lógicamente de las definiciones, como puede verse a partir de la siguiente consideración.
Between _n_ points of space there are $n(n-1)/2$ distances, _s_ _µν_ ; between these and the 3 _n_ coordinates we have the relations

$$s_{\mu\nu}^2 = \left(x_{1(\mu)} - x_{1(\nu)}\right)^2 + \left(x_{2(\mu)} - x_{2(\nu)}\right)^2 + \left(x_{3(\mu)} - x_{3(\nu)}\right)^2.$$

From these equations the 3 _n_ coordinates may be eliminated, and from this elimination at least $n(n-1)/2 - 3n$ equations in the _s_ _µν_ will result. Since the _s_ _µν_ are measurable quantities, and by definition are independent of each other, these relations among the _s_ _µν_ are not necessary a priori.
From the foregoing it is evident that the equations of transformation (3), (4) have a fundamental significance in Euclidean geometry, in that they govern the transformation from one Cartesian system of coordinates to another. The Cartesian systems of coordinates are characterized by the property that in them the measurable distance between two points, _s_ , is expressed by the equation

$$s^2 = \sum_\nu \Delta x_\nu^2.$$

If _K_ ( _x_ _ν_ ) and _K'_ ( _x'_ _ν_ ) are two Cartesian systems of coordinates, then

$$\sum_\nu \Delta x_\nu^2 = \sum_\nu \Delta x'^2_\nu.$$

The right-hand side is identically equal to the left-hand side on account of the equations of the linear orthogonal transformation, and the right-hand side differs from the left-hand side only in that the _x_ _ν_ are replaced by the _x'_ _ν_. This is expressed by the statement that $\sum_\nu \Delta x_\nu^2$ is an invariant with respect to linear orthogonal transformations. It is evident that in Euclidean geometry only such quantities have an objective significance, independent of the particular choice of the coordinate system, as can be expressed as an invariant with respect to linear orthogonal transformations. This is the reason that the theory of invariants, which has to do with the laws that govern the form of invariants, is so important for analytical geometry.
As a second example of a geometrical invariant, consider a volume. This is expressed by

$$V = \iiint dx_1\, dx_2\, dx_3.$$

By Jacobi's theorem we may write

$$\iiint dx'_1\, dx'_2\, dx'_3 = \iiint \frac{\partial(x'_1, x'_2, x'_3)}{\partial(x_1, x_2, x_3)}\, dx_1\, dx_2\, dx_3,$$

where the integrand in the last integral is the functional determinant of the _x'_ _ν_ with respect to the _x_ _ν_ , and this, by (3), is equal to the determinant | _b_ _µν_ | of the coefficients of substitution, _b_ _να_. If we form the determinant of the δ _µα_ from expression (4), we obtain, by the theorem of multiplication of determinants,

$$1 = \left|\delta_{\alpha\beta}\right| = \left|b_{\mu\nu}\right|^2; \qquad \left|b_{\mu\nu}\right| = \pm 1. \qquad (6)$$

If we limit ourselves to those transformations which have the determinant +1 (and only these arise from continuous variations of the coordinate systems), then _V_ is an invariant.
Invariants, however, are not the only forms by means of which we can give expression to the independence of the particular choice of the Cartesian coordinates. Vectors and tensors are other forms of expression. Let us express the fact that the point with the current coordinates _x_ _ν_ lies upon a straight line. We have

_x_ _ν_ − _A_ _ν_ = λ _B_ _ν_ ( _ν_ from 1 to 3).

Without limiting the generality we can put

$$\sum_\nu B_\nu^2 = 1.$$

If we multiply the equations by _b_ _βν_ (compare (3b) and (5)) and sum over all the _ν_ 's, we obtain

_x'_ _β_ – _A_ ' _β_ = λ _B_ ' _β_

where we have written

$$B'_\beta = \sum_\nu b_{\beta\nu} B_\nu; \qquad A'_\beta = \sum_\nu b_{\beta\nu} A_\nu.$$

These are the equations of straight lines with respect to a second Cartesian system of coordinates _K_ '. They have the same form as the equations with respect to the original system of coordinates. It is therefore evident that straight lines have a significance which is independent of the coordinate system. Formally, this depends upon the fact that the quantities ( _x_ _ν_ – _A_ _ν_ ) – λ _B_ _ν_ are transformed as the components of an interval, ∆ _x_ _ν_. The ensemble of three quantities, defined for every Cartesian coordinate system, and which transform as the components of an interval, is called a vector. If the three components of a vector vanish in one Cartesian coordinate system, they vanish in all systems, because the equations of transformation are homogeneous. We can thus obtain the meaning of the concept of a vector without referring to a geometrical representation. This behaviour of the equations of a straight line can be expressed by saying that the equation of a straight line is covariant with respect to linear orthogonal transformations.
We shall now show briefly that there are geometrical entities which lead to the concept of tensors. Let _P_ 0 be the centre of a surface of the second degree, _P_ any point on the surface, and ξ _ν_ the projections of the interval _P_ 0 _P_ upon the coordinate axes. Then the equation of the surface is

$$\sum_{\mu\nu} a_{\mu\nu}\,\xi_\mu \xi_\nu = 1.$$

In this, and in analogous cases, we shall omit the sign of summation and understand that the summation is to be carried out over those indices that appear twice. We thus write the equation of the surface

_a_ _µν_ ξ _µ_ ξ _ν_ = 1.

The quantities _a_ _µν_ determine the surface completely, for a given position of the centre, with respect to the chosen Cartesian coordinate system. From the known law of transformation (3a) of the ξ _ν_ for linear orthogonal transformations, we easily find the law of transformation for the _a_ _µν_ :

_a'_ _στ_ = _b_ _σµ_ _b_ _τν_ _a_ _µν_.

This transformation is homogeneous and of the first degree in the _a_ _µν_. On account of this transformation, the _a_ _µν_ are called components of a tensor of the second rank (the latter on account of the double index). If all the components, _a_ _µν_ , of a tensor with respect to any Cartesian coordinate system vanish, they vanish with respect to every other system as well. The form and the position of the surface of the second degree are described by this tensor ( _a_ ).
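The rank-2 transformation law can be checked numerically. The sketch below (our own illustration, using `numpy.einsum` and a rotation about the third axis) verifies the law and the invariance of the quadratic form when ξ transforms as a vector:

```python
# Sketch: the transformation law a'_{sigma tau} = b_{sigma mu} b_{tau nu} a_{mu nu}
# for a rank-2 tensor, applied to a random symmetric a and a rotation b.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))
a = (a + a.T) / 2                      # symmetric, like the surface tensor

theta = 0.3
b = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Apply the transformation law with one index contraction per factor of b
a_prime = np.einsum('sm,tn,mn->st', b, b, a)

# The quadratic form a_{mu nu} xi_mu xi_nu is invariant
xi = rng.standard_normal(3)
xi_prime = b @ xi
assert np.isclose(xi @ a @ xi, xi_prime @ a_prime @ xi_prime)
```

The invariance of the quadratic form is exactly what makes the surface itself independent of the coordinate system.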
Tensors of higher rank (number of indices) may be defined analytically. It is possible, and advantageous, to regard vectors as tensors of rank 1, and invariants (scalars) as tensors of rank 0. In this respect, the problem of the theory of invariants may be formulated thus: according to what laws may new tensors be formed from given tensors? We shall consider these laws now, in order to be able to apply them later. We shall deal first only with the properties of tensors with respect to the transformation from one coordinate system to another in the same space of reference, by means of linear orthogonal transformations. Since the laws are wholly independent of the number of dimensions, we shall leave this number _n_ indeterminate at first.
_Definition_
If an object is defined with respect to every Cartesian coordinate system in a space of reference of _n_ dimensions by the _n_ α numbers _A_ _µνρ_... (α = number of indices), then these numbers are the components of a tensor of rank α if the transformation law is

$$A'_{\mu'\nu'\rho'\ldots} = b_{\mu'\mu}\, b_{\nu'\nu}\, b_{\rho'\rho}\ldots A_{\mu\nu\rho\ldots}. \qquad (7)$$
_Remark_
From this definition it follows that

$$A_{\mu\nu\rho\ldots}\, B_\mu\, C_\nu\, D_\rho \ldots \qquad (8)$$

is an invariant, provided that ( _B_ ), ( _C_ ), ( _D_ )... are vectors. Conversely, the tensor character of ( _A_ ) may be inferred if it is known that the expression (8) leads to an invariant for an arbitrary choice of the vectors ( _B_ ), ( _C_ ), etc.
_Addition and subtraction_
By addition and subtraction of the corresponding components of tensors of equal rank, a tensor of equal rank results:

$$A_{\mu\nu\rho\ldots} \pm B_{\mu\nu\rho\ldots} = C_{\mu\nu\rho\ldots}. \qquad (9)$$

The proof follows from the definition of a tensor given above.
_Multiplication_
From a tensor of rank α and a tensor of rank _β_ we may obtain a tensor of rank _α_ \+ _β_ by multiplying all the components of the first tensor by all the components of the second:

$$T_{\mu\nu\rho\ldots\alpha\beta\gamma\ldots} = A_{\mu\nu\rho\ldots}\, B_{\alpha\beta\gamma\ldots}. \qquad (10)$$
_Contraction_
A tensor of rank α – 2 may be obtained from one of rank α by putting two definite indices equal to each other and then summing over this single index:

$$T_{\rho\ldots} = A_{\mu\mu\rho\ldots} \left( = \sum_\mu A_{\mu\mu\rho\ldots} \right). \qquad (11)$$

The proof is

$$A'_{\mu\mu\rho\ldots} = b_{\mu\alpha}\, b_{\mu\beta}\, b_{\rho\gamma}\ldots A_{\alpha\beta\gamma\ldots} = \delta_{\alpha\beta}\, b_{\rho\gamma}\ldots A_{\alpha\beta\gamma\ldots} = b_{\rho\gamma}\ldots A_{\alpha\alpha\gamma\ldots}$$
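Multiplication (10) and contraction (11) are easy to make concrete. The following sketch (our own, not from the text) forms the outer product of two vectors, a tensor of rank 2, and contracts it down to the rank-0 scalar product:

```python
# Sketch: multiplication (10) and contraction (11) with numpy.
# From two vectors (rank-1 tensors) A, B we form the rank-2 product
# T_{mu nu} = A_mu B_nu; contracting its two indices gives the rank-0
# invariant T_{mu mu} = A_mu B_mu, the scalar product.
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

T = np.einsum('m,n->mn', A, B)   # multiplication: rank 1 + rank 1 -> rank 2
s = np.einsum('mm->', T)         # contraction: rank 2 -> rank 0

assert np.isclose(s, A @ B)      # equals the invariant A_mu B_mu = 32
```

The same `einsum` index notation generalizes directly to tensors of higher rank.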
In addition to these elementary rules of operation there is also the formation of tensors by differentiation («Erweiterung»):

$$T_{\mu\nu\rho\ldots\alpha} = \frac{\partial A_{\mu\nu\rho\ldots}}{\partial x_\alpha}. \qquad (12)$$

New tensors, with respect to linear orthogonal transformations, may be formed from given tensors according to these rules of operation.
_Symmetry properties of tensors_
Tensors are called symmetrical or antisymmetrical with respect to two of their indices, µ and ν, if the two components which result from interchanging the indices µ and ν are equal to each other, or equal with opposite sign:

Condition of symmetry: _A_ _µνρ_ = _A_ _νµρ_.
Condition of antisymmetry: _A_ _µνρ_ = − _A_ _νµρ_.
_Theorem_
The character of symmetry or antisymmetry exists independently of the choice of coordinates, and therein lies its importance. The proof follows from the equation defining tensors.
_Special tensors_
I. The quantities δ _ρσ_ (4) are tensor components (fundamental tensor).
_Proof_
If in the right-hand side of the equation of transformation _A'_ _µν_ = _b_ _µα_ _b_ _νβ_ _A_ _αβ_ , we substitute for _A_ _αβ_ the quantities _δ αβ_ (which are equal to 1 or 0 according as _α_ = _β_ or _α_ ≠ _β_ ), we obtain

_A'_ _µν_ = _b_ _µα_ _b_ _να_ = _δ µν_.

The justification for the last sign of equality becomes evident if one applies (4) to the inverse substitution (5).
II. There is a tensor ( _δ µνρ_...), antisymmetrical with respect to every pair of indices, whose rank is equal to the number of dimensions, _n_ , and whose components are equal to +1 or –1 according as _µνρ_... is an even or odd permutation of 123....
The proof follows with the aid of the theorem proved above, | _b_ _ρσ_ _|_ = 1.
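A small sketch (our own construction) of this completely antisymmetric tensor for _n_ = 3, together with a check that its components are reproduced under a rotation of determinant +1:

```python
# Sketch: the completely antisymmetric tensor delta_{mu nu rho} (+1 for even
# permutations of the indices 0,1,2; -1 for odd; 0 otherwise), built
# explicitly, plus a check of its form-invariance under a rotation.
import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    # parity of the permutation via counting inversions
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    eps[p] = (-1) ** inv

theta = 0.5
b = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Transforming eps as a rank-3 tensor reproduces eps itself (times det b = +1)
eps_prime = np.einsum('ai,bj,ck,ijk->abc', b, b, b, eps)
assert np.allclose(eps_prime, eps)
```

Under a reflection (determinant −1) the transformed components would instead change sign, which is the origin of the axial-vector behaviour discussed below for moments.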
These few simple theorems form the apparatus from the theory of invariants for building the equations of pre-relativity physics and of the theory of special relativity.
We have seen that in pre-relativity physics, in order to specify relations in space, a body of reference, or a space of reference, is required, and in addition a Cartesian system of coordinates. We can fuse both these concepts into a single one by thinking of a Cartesian coordinate system as a cubical framework formed of rods, each of unit length. The coordinates of the lattice points of this frame are integers. From the fundamental relation

$$s^2 = \Delta x_1^2 + \Delta x_2^2 + \Delta x_3^2 \qquad (13)$$

it follows that the members of such a space-lattice are all of unit length. To specify relations in time we require, in addition, a standard clock placed, say, at the origin of our Cartesian coordinate system or frame of reference. If an event takes place anywhere, we can assign to it three coordinates _x_ _ν_ and a time _t_ , as soon as we have specified which time of the clock at the origin is simultaneous with the event. We therefore give (hypothetically) an objective significance to the statement of the simultaneity of distant events, while previously we had been concerned only with the simultaneity of two experiences of a single individual. The time so specified is in any case independent of the position of the coordinate system in our space of reference, and is therefore an invariant with respect to the transformation (3).
It is postulated that the system of equations expressing the laws of pre-relativity physics is covariant with respect to the transformation (3), as are the relations of Euclidean geometry. The isotropy and homogeneity of space are expressed in this way. We shall now consider some of the more important equations of physics from this point of view.
The equations of motion of a material particle are

$$m\,\frac{d^2 x_\nu}{dt^2} = X_\nu. \qquad (14)$$

( _dx_ _ν_ ) is a vector; _dt_ , and therefore also 1/ _dt_ , is an invariant; thus ( _dx_ _ν_ / _dt_ ) is a vector; in the same way it may be shown that ( _d_ ² _x_ _ν_ / _dt_ ²) is a vector. In general, the operation of differentiation with respect to time does not alter the tensor character. Since _m_ is an invariant (tensor of rank 0), ( _m d_ ² _x_ _ν_ / _dt_ ²) is a vector, or tensor of rank 1 (by the theorem of the multiplication of tensors). If the force ( _X_ _ν_ ) has a vector character, the same holds for the difference ( _m d_ ² _x_ _ν_ / _dt_ ² − _X_ _ν_ ). These equations of motion are therefore valid in every other Cartesian coordinate system in the space of reference. In the case where the forces are conservative we can easily recognize the vector character of ( _X_ _ν_ ). For a potential energy, Φ, exists, which depends only upon the mutual distances of the particles, and is therefore an invariant. The vector character of the force, $X_\nu = -\partial\Phi/\partial x_\nu$, is then a consequence of our general theorem about the derivative of a tensor of rank 0.
Multiplying by the velocity, a tensor of rank 1, we obtain the tensor equation

$$\left(m\,\frac{d^2 x_\mu}{dt^2} - X_\mu\right)\frac{dx_\nu}{dt} = 0.$$

By contraction and multiplication by the scalar _dt_ we obtain the equation of kinetic energy

$$d\!\left(\frac{m}{2}\,\frac{dx_\nu}{dt}\frac{dx_\nu}{dt}\right) = X_\nu\, dx_\nu.$$

If ξ _ν_ denotes the difference between the coordinates of the material particle and a point fixed in space, then the ξ _ν_ have vector character. We evidently have $d^2 x_\nu / dt^2 = d^2 \xi_\nu / dt^2$, so that the equations of motion of the particle may be written

$$m\,\frac{d^2 \xi_\nu}{dt^2} - X_\nu = 0.$$

Multiplying this equation by ξ _µ_ we obtain a tensor equation

$$\left(m\,\frac{d^2 \xi_\nu}{dt^2} - X_\nu\right)\xi_\mu = 0.$$

Contracting the tensor on the left and averaging over time we obtain the virial theorem, which we shall not consider further. By writing a similar equation with the indices interchanged and subtracting the two, we obtain, after a simple transformation, the theorem of moments,

$$\frac{d}{dt}\left[m\left(\xi_\mu \frac{d\xi_\nu}{dt} - \xi_\nu \frac{d\xi_\mu}{dt}\right)\right] = \xi_\mu X_\nu - \xi_\nu X_\mu. \qquad (15)$$
It is evident in this way that the moment of a vector is not a vector but a tensor. On account of its antisymmetric character there are not nine, but only three, independent equations in this system. The possibility of replacing antisymmetric tensors of the second rank in three-dimensional space by vectors depends upon the formation of the vector

$$A_\mu = \tfrac{1}{2}\, A_{\sigma\tau}\, \delta_{\sigma\tau\mu}.$$

If we multiply the antisymmetric tensor of rank 2 by the special antisymmetric tensor δ introduced above, and contract twice, a vector results whose components are numerically equal to those of the tensor. These are the so-called axial vectors, which transform differently from the ∆ _x_ _ν_ on passing from a right-handed to a left-handed system. There is a gain in vividness in regarding an antisymmetric tensor of rank 2 as a vector in three-dimensional space, but it does not represent the exact nature of the corresponding quantity as well as considering it a tensor.
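This double contraction can be made concrete. The sketch below (our own, with the factor ½ compensating for the double counting of index pairs) recovers the familiar angular-momentum vector from the antisymmetric moment tensor of (15):

```python
# Sketch: replacing an antisymmetric rank-2 tensor M_{mu nu} by the axial
# vector v_tau = (1/2) delta_{tau mu nu} M_{mu nu}, i.e. a double contraction
# with the antisymmetric delta tensor, illustrated with angular momentum.
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

m = 2.0
xi = np.array([1.0, 0.0, 0.0])     # position relative to the fixed point
u = np.array([0.0, 3.0, 0.0])      # velocity

# Antisymmetric moment tensor M_{mu nu} = m (xi_mu u_nu - xi_nu u_mu)
M = m * (np.outer(xi, u) - np.outer(u, xi))

v = 0.5 * np.einsum('tmn,mn->t', eps, M)

# The components agree with the familiar cross product m (xi x u)
assert np.allclose(v, m * np.cross(xi, u))
```

Under a reflection of the coordinate axes the δ tensor changes sign, which is exactly why v behaves as an axial rather than an ordinary vector.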
We consider next the equations of motion of a continuous medium. Let _ρ_ be the density, _u_ _ν_ the velocity components considered as functions of the coordinates and the time, _X_ _ν_ the volume forces per unit of mass, and _p_ _νσ_ the stresses upon a surface perpendicular to the σ-axis in the direction of increasing _x_ _ν_. Then the equations of motion are, by Newton's law,

$$\rho\,\frac{du_\nu}{dt} = -\frac{\partial p_{\nu\sigma}}{\partial x_\sigma} + \rho\, X_\nu,$$

in which $du_\nu / dt$ is the acceleration of the particle which at time _t_ has the coordinates _x_ _ν_. If we express this acceleration by partial derivatives, we obtain, after dividing by _ρ_ ,

$$\frac{\partial u_\nu}{\partial t} + \frac{\partial u_\nu}{\partial x_\sigma}\,u_\sigma = -\frac{1}{\rho}\frac{\partial p_{\nu\sigma}}{\partial x_\sigma} + X_\nu. \qquad (16)$$

We must show that this equation holds independently of the particular choice of the Cartesian coordinate system. ( _u_ _ν_ ) is a vector, and therefore $\partial u_\nu / \partial t$ is also a vector. $\partial u_\nu / \partial x_\sigma$ is a tensor of rank 2, and $(\partial u_\nu / \partial x_\sigma)\,u_\tau$ a tensor of rank 3. The second term on the left results from contraction in the indices σ, τ. The vector character of the second term on the right is obvious. In order that the first term on the right may also be a vector it is necessary for _p_ _νσ_ to be a tensor. Then by differentiation and contraction $\partial p_{\nu\sigma}/\partial x_\sigma$ results, which is therefore a vector, as it also is after multiplication by the reciprocal scalar $1/\rho$. That _p_ _νσ_ is a tensor, and therefore transforms according to the equation

_p'_ _µν_ = _b_ _µα_ _b_ _νβ_ _p_ _αβ_ ,

is proved in mechanics by integrating this equation over an infinitely small tetrahedron. It is also proved there, by applying the theorem of moments to an infinitely small parallelepiped, that _p_ _νσ_ = _p_ _σν_ , and hence that the stress tensor is a symmetric tensor. From what has been said it follows that, with the aid of the rules given above, the equation is covariant with respect to orthogonal transformations in space (rotational transformations); and the rules according to which the quantities in the equation must be transformed, in order that the equation be covariant, also become evident.
The covariance of the equation of continuity,

$$\frac{\partial \rho}{\partial t} + \frac{\partial(\rho\, u_\nu)}{\partial x_\nu} = 0, \qquad (17)$$

requires, from the foregoing, no particular discussion.
We shall also test for covariance the equations which express the dependence of the stress components upon the properties of the matter, and set up these equations for the case of a compressible viscous fluid with the aid of the conditions of covariance. If we neglect the viscosity, the pressure, _p_ , will be a scalar, and will depend only upon the density and the temperature of the fluid. The contribution to the stress tensor is then

_p_ δ _µν_

in which δ _µν_ is the special symmetric tensor. This term will also be present in the case of a viscous fluid. But in this case there will also be pressure terms which depend upon the space derivatives of the _u_ _ν_. We shall assume that this dependence is linear. Since these terms must be symmetric tensors, the only ones which enter will be

$$\alpha\left(\frac{\partial u_\mu}{\partial x_\nu} + \frac{\partial u_\nu}{\partial x_\mu}\right) \quad \text{and} \quad \beta\,\delta_{\mu\nu}\,\frac{\partial u_\alpha}{\partial x_\alpha}$$

(for $\partial u_\alpha / \partial x_\alpha$ is a scalar). For physical reasons (no slipping) it is assumed that for symmetric dilatations in all directions, i.e. when

$$\frac{\partial u_1}{\partial x_1} = \frac{\partial u_2}{\partial x_2} = \frac{\partial u_3}{\partial x_3}, \qquad \frac{\partial u_1}{\partial x_2}\ \text{etc.} = 0,$$

no friction forces are present, from which it follows that $3\beta = -2\alpha$. If only $\partial u_1 / \partial x_3$ is different from zero, let $p_{31} = -\eta\,\partial u_1 / \partial x_3$, by which α is determined. We then obtain for the complete stress tensor

$$p_{\mu\nu} = p\,\delta_{\mu\nu} - \eta\left[\left(\frac{\partial u_\mu}{\partial x_\nu} + \frac{\partial u_\nu}{\partial x_\mu}\right) - \frac{2}{3}\,\delta_{\mu\nu}\,\frac{\partial u_\alpha}{\partial x_\alpha}\right]. \qquad (18)$$

The heuristic value of the theory of invariants, which arises from the isotropy of space (equivalence of all directions), becomes evident from this example.
We consider, finally, Maxwell's equations in the form in which they are the foundation of the electron theory of Lorentz:

$$\frac{\partial h_3}{\partial x_2} - \frac{\partial h_2}{\partial x_3} = \frac{1}{c}\left(\frac{\partial e_1}{\partial t} + i_1\right),\ \text{etc.}; \qquad \frac{\partial e_1}{\partial x_1} + \frac{\partial e_2}{\partial x_2} + \frac{\partial e_3}{\partial x_3} = \rho. \qquad (19)$$

$$\frac{\partial e_3}{\partial x_2} - \frac{\partial e_2}{\partial x_3} = -\frac{1}{c}\frac{\partial h_1}{\partial t},\ \text{etc.}; \qquad \frac{\partial h_1}{\partial x_1} + \frac{\partial h_2}{\partial x_2} + \frac{\partial h_3}{\partial x_3} = 0. \qquad (20)$$

_i_ is a vector, because the current density is defined as the density of electricity multiplied by the vector velocity of the electricity. According to the first three equations it is evident that _e_ must also be regarded as a vector. Then _h_ cannot be regarded as a vector. The equations may, however, easily be interpreted if _h_ is regarded as an antisymmetric tensor of the second rank. Accordingly, we write _h_ 23, _h_ 31, _h_ 12 in place of _h_ 1, _h_ 2, _h_ 3, respectively. Paying attention to the antisymmetry of _h_ _µν_ , the first three equations of (19) and (20) may be written in the form

$$\frac{\partial h_{\mu\nu}}{\partial x_\nu} = \frac{1}{c}\left(\frac{\partial e_\mu}{\partial t} + i_\mu\right), \qquad (19a)$$

$$\frac{1}{c}\,\frac{\partial h_{\mu\nu}}{\partial t} = \frac{\partial e_\mu}{\partial x_\nu} - \frac{\partial e_\nu}{\partial x_\mu}. \qquad (20a)$$

In contrast to _e_ , _h_ appears as a quantity which has the same type of symmetry as an angular velocity. The divergence equations then take the form

$$\frac{\partial e_\nu}{\partial x_\nu} = \rho, \qquad (19b)$$

$$\frac{\partial h_{\mu\nu}}{\partial x_\rho} + \frac{\partial h_{\nu\rho}}{\partial x_\mu} + \frac{\partial h_{\rho\mu}}{\partial x_\nu} = 0. \qquad (20b)$$

The last equation is an antisymmetric tensor equation of the third rank (the antisymmetry of the left-hand side with respect to every pair of indices may easily be proved if attention is paid to the antisymmetry of the _h_ _µν_ ). This notation is more natural than the usual one, because, in contrast to the latter, it is applicable to left-handed as well as to right-handed Cartesian systems without change of sign.
# V
## _THE EVOLUTION OF PHYSICS_ (ANTHOLOGY)
During the first half of the twentieth century, quantum theories were transforming the landscape of physics, in much the same way that electromagnetism had a century earlier. In _The Evolution of Physics_, Albert Einstein and Leopold Infeld describe this revolution from the eye of the storm. Today we are so accustomed to terms like «nanotechnology» and «microelectronics» (technologies that would not exist without quantum mechanics) that it is easy to forget how great a change in our general understanding it took to think in quantum terms.
In the continuous picture, a piece of iron, for example, can have any mass. In the quantum picture, this proves to be an illusion. Every piece of iron contains a certain number of atoms, and each atom has a fixed mass. Another piece can differ from the first only by a whole number of atoms, and therefore by a «quantized» mass. The atoms themselves are made of still smaller quantized elements: protons and neutrons. And that is not all. Some two decades after the publication of _The Evolution of Physics_, Murray Gell-Mann and Kazuhiko Nishijima proposed that protons and neutrons were made of even smaller quantized particles known as quarks.
The idea that matter can be divided only a finite number of times before reaching the atomic scale was not new. It has its distant origins in the time of Democritus and the ancient Greek atomists. The strength of modern quantum theories lies rather in the properties ascribed to microscopic particles. While at human scale we normally say that a particle has a well-defined position and velocity, we cannot make such statements at the quantum scale. Instead, particles are defined by their probability waves. One of the strangest examples of quantum peculiarity comes from the idea that, before being observed in an experiment, an electron has no well-defined position. Upon observing it, however, we «force» it into a particular state. To be clear, quantum physics does not say that we do not _know_ the position before the observation, but that something like a definite position... does not actually exist!
Most astonishing of all is that while the microscopic world is governed by statistics, the macroscopic world appears to be governed by Newton's laws, which are themselves deterministic. How can this be, given that macroscopic objects are, after all, made of protons, neutrons and electrons? We see the same effect when we think of the air in a room. While the individual molecules fly about at random, at human scale the air appears far more stable. In a sense, the distinction between the wave and particle properties of matter is found only as a function of physical scale. Quantum theories show that, at the smallest scales, particles behave more and more like waves, and are governed more and more by statistics.
This wave-particle duality, however, does not exist only for objects such as electrons and protons. Isaac Newton originally proposed that light should have corpuscular properties, a theory rejected in the nineteenth century when light was observed to exhibit interference patterns, a property characteristic of waves. Eventually light was found to possess both kinds of properties: those of waves, like radio waves, and those of quantized particles, which came to be called photons. Modesty must have prevented Einstein from pointing out that it was his own interpretation of the photoelectric effect that ultimately brought about the modern view of light as a particle. In this experiment, a beam of ultraviolet light is shone upon a metal, and electrons are ejected as a consequence, a behaviour typical of particles. His 1905 paper describing this effect earned him the Nobel Prize in 1921.
Einstein's _The Evolution of Physics_ takes us into the state of science at the beginning of the twentieth century, with occasional glimpses of his own remarkable contributions. Almost seventy years later, although his models have been considerably refined, physicists are still weathering the storm of peculiarities arising from the quantum picture of the universe.
### 1
### FIELD AND RELATIVITY
#### THE FIELD AS REPRESENTATION
During the second half of the nineteenth century, new and revolutionary ideas were introduced into physics; they opened the way to a new philosophical view, differing from the earlier mechanical one. The results of the work of Faraday, Maxwell and Hertz led to the development of modern physics, to the creation of new concepts forming a new picture of reality.
Our aim in the following pages is to describe the revolution brought about in science by these new concepts, and to show how they gained, in turn, in clarity and strength. We shall try to reconstruct the line of their development logically, without troubling too much about chronological order.
The new concepts originated in connection with the phenomena of electricity, but it is simpler to introduce them through mechanics. We know that two particles attract each other with a force which decreases with the square of the distance. We can represent this fact in a new way, as in figure 1, even though it is difficult to see what is gained by doing so.
The small circle in the drawing represents the attracting body, for example, the sun. Actually, this diagram should be imagined in space and not as a figure on a plane. The small circle then stands for a sphere, the sun in our example. A body, the so-called _test body,_ placed somewhere near the sun will be attracted along the line connecting the centres of the two bodies. Thus the lines in figure 1 indicate the direction of the attracting force of the sun for different positions of the test body. The arrow on each line shows that the force is attractive, that is, directed towards the sun. These lines are called _lines of force of the gravitational field._ For the moment this is merely a name, and there is no reason to attach deeper significance to it. There is one characteristic feature of this representation which will be pointed out in due course: the lines of force are drawn in empty space. For the moment, all the lines of force together, or more briefly, the _field,_ indicate only how a test body would behave if placed in the vicinity of the sphere for which the field has been represented.
**Fig. 1**
The lines of our space model are always perpendicular to the surface of the sphere. Since they diverge from one point, the centre of the sphere, it is evident that they are densest near it and grow sparser as they move away. If we consider regions at distances two or three times greater from the sphere, then the density of the lines in the space model, though not in our drawing, will be four or nine times less, respectively. Thus the lines of force serve a double purpose. On the one hand, they show the direction of the force acting on a body brought into the neighbourhood of the sphere-sun; on the other, their density in space shows how the force varies with distance. The drawing of the field, correctly interpreted, therefore represents the direction of the gravitational force and its dependence on distance. This graphic representation of the law of gravitation is as clear as a good verbal description, or as the precise and economical language of mathematics. This _field representation,_ as we shall call it, may appear clear and interesting, but there is no reason to believe that it marks any real advance. It would be quite difficult, say, to prove its usefulness in the case of gravitation. Some may, perhaps, find it helpful to regard these lines as something more than a drawing, and to imagine the real actions of the gravitational force as actually passing along them. This may be done, but then the speed of those actions along the lines of force must be assumed to be infinitely great. The force between two bodies, according to Newton's law, depends only on distance; time does not enter its formulation. The force must therefore pass from one body to another in no time at all!
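The density argument above amounts to conservation of the number of lines through concentric spheres; the minimal numeric sketch below (our own, with an arbitrarily chosen total number of lines) makes the correspondence with the inverse-square law explicit:

```python
# Sketch: if N field lines cross every sphere centred on the attracting body,
# their surface density falls off as 1/r^2, matching the inverse-square law.
import math

N = 1000.0                               # total number of lines, arbitrary
def line_density(r):
    return N / (4.0 * math.pi * r**2)    # lines per unit area at radius r

# Doubling / tripling the distance divides the density by 4 / 9
assert math.isclose(line_density(1.0) / line_density(2.0), 4.0)
assert math.isclose(line_density(1.0) / line_density(3.0), 9.0)
```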
But since motion with infinite speed cannot mean anything to a reasonable person, the attempt to make our representation something more than an auxiliary model leads nowhere.
We do not intend, however, to discuss the problem of gravitation just now. It served only as an introduction, simplifying the explanation of similar methods of reasoning in the theory of electricity.
We shall begin with a discussion of the experiment which created serious difficulties for the mechanical view. Recall that when a current flows through a circular conductor with a magnetic needle at its centre, a force appears acting on the magnetic pole, perpendicular to the line connecting the pole and the conductor. This force, produced by a moving charge, depends, according to Rowland's experiment, on the velocity of the charge. These experimental facts contradict the philosophical view that all forces must depend only on the distance and act along the line connecting the particles between which they arise.
The exact expression for the action of an electric current on a magnetic pole is evidently much more complicated than the law of gravitation. It is, nevertheless, possible to visualize the actions of this force, as we did in the case of the gravitational force. Our question is this: with what force does the current act upon a magnetic pole placed somewhere in its vicinity?
It would be rather difficult to describe this force in words. Even with a mathematical formula it would be complicated. The best course is to represent all we know about the force in a drawing, or rather in a spatial model of lines of force. A certain difficulty is caused by the fact that a magnetic pole always exists joined to another, forming a dipole. It is possible, however, to imagine a magnetic needle of such length that only the force acting upon the pole nearer the current need be taken into account. The other pole we take to be far enough away for the force acting on it to be negligible. To avoid ambiguity, we shall assume that the magnetic pole brought near the conductor is the positive one.
The character of the force acting upon the positive magnetic pole can be read from the drawing of figure 2.
**Fig. 2**
In this figure the arrows along the conductor indicate the direction of the current. The curves with arrows, drawn in the plane of the figure perpendicular to the plane of the conductor, are the lines of force. If drawn correctly, they give us the direction of the force vector representing the action of the electric current on a given positive magnetic pole, and at the same time some idea of its magnitude. Let us now see how the direction and sense of the force at each point of space can be obtained from this representation.
The rule for reading the direction of the force from such a model is as simple as in our previous example, where the lines of force were straight. In figure 3 only one line of force is drawn, in order to make the explanation of the rule clearer. Consider any point on this line. The force vector lies along the tangent to the line at this point, as shown in the figure. The arrow of the force vector and the arrows on the line of force point in the same direction. Thus the force acting on the magnetic pole is determined in direction and sense. A good drawing, or rather a good model, also tells us something about the length of the force vector at each point: the vector is longer where the lines are denser, that is, near the conductor, and shorter where they are sparser, that is, farther from it.
**Fig. 3**
De esta manera las líneas de fuerza, o, en otras palabras, el campo, nos permiten determinar las fuerzas que actúan sobre un polo magnético en cualquier punto del espacio. Ésta es, por el momento, la única justificación de nuestra compleja construcción del campo. Sabiendo lo que representa, examinemos con más detenimiento las líneas de fuerza del campo correspondiente a una corriente. Estas líneas son circunferencias que rodean al conductor y están sobre un plano perpendicular a él. La fuerza, como se ve en la figura, es tangente a las líneas de fuerza, resultando, de acuerdo con la experiencia, normal a toda recta que una al conductor con el polo, pues la tangente a una circunferencia es siempre perpendicular a su radio. Todo nuestro conocimiento de las fuerzas en cuestión queda así resumido en la construcción del campo correspondiente.
In other words, we place the concept of field between that of the current and that of the magnetic pole in order to represent the acting forces in a simple way.

Every current is accompanied by a magnetic field; that is to say, a force always acts on a magnetic pole placed near a conductor through which a current flows. We may remark in passing that this property allows the construction of sensitive instruments for detecting the existence of an electric current.

Having learned how to read the character of the magnetic forces from the field model, we shall from now on use this representation to visualize the action of these forces in the vicinity of any conductor through which a current flows. Consider, for example, a current flowing through a solenoid, that is, a conductor wound in the form of a helix, as in figure 4. This figure shows the structure of the field of a solenoidal current, obtained experimentally. The lines of force are closed curves surrounding the solenoid, analogous to those of the magnetic field of a circular current.

The field of a bar magnet can be represented in the same way as the field of a current. The lines of force are drawn from the positive to the negative pole (figure 5). The force vector always lies along the tangent to the line of force and is longest near the poles, because there the density of the lines is greatest. The force vector represents the action of the magnet on a positive magnetic pole. In this case the magnet, and not the current, is the source of the field.
**Fig. 4**
The drawings of figures 4 and 5 should be carefully compared. In the first we have the magnetic field of a current flowing through a solenoid; in the second, the field of a bar magnet. Let us disregard the solenoid and the magnet and observe only the two external fields: we notice at once that they have exactly the same character; in both, the lines of force run from one end of the solenoid or bar to the other.

Here the field representation bears its first fruit! It would have been rather difficult to discover the striking similarity between the field of a solenoid and that of a bar magnet if it had not been revealed to us by the construction of the field.

The concept of field can now be put to a much more severe test. We shall soon see whether it is anything more than a new representation of the acting forces. We could reason as follows: assume, for a moment, that the field characterizes uniquely all the actions determined by its source. This is only a guess. In our case it would mean that if a solenoid and a bar magnet have the same field, then all their actions must also be the same. It would mean that two solenoids carrying electric currents behave like two bar magnets, attracting or repelling each other with forces depending, exactly as in the case of magnets, on their relative positions. We should also expect a solenoid and a magnet to attract or repel each other in the same way as two magnets. Briefly, this assumption means that all actions of a solenoid through which a current flows must be the same as those of a bar magnet, since their fields have the same structure. Experiment fully confirms our guess!
**Fig. 5**
How difficult it would have been to reach these conclusions without the concept of field! The expression for the force acting between a current-carrying conductor and a magnetic pole is very complicated. In the case of two solenoids carrying currents, for example, we should have had to carry out a special investigation to discover how they act upon each other. But with the help of the field, the character of their mutual action can be predicted as soon as the similarity between the respective fields is recognized.

We therefore have the right to regard the field as something much more important than we did at first. The properties of the field alone appear essential for the description of the phenomena we are studying; the differences in the source do not matter. The concept of field reveals its importance by leading us to the discovery of new facts.

The field proved a very helpful concept. It began as something placed between the source and the magnetic needle in order to describe the acting force. It was thought of as an "agent" of the current, through which the current transmitted its action. But it now turns out that the agent acts as an interpreter, one which translates the laws into a simple, clear language, easily understood.

This first success of the field description suggests that it may be convenient to consider all actions of magnets, currents and electric charges indirectly, i.e. with the help of the field as interpreter. A magnetic field may be regarded as something always associated with a current. It is there even in the absence of a magnetic pole to test its existence. Let us try to follow this new clue consistently.

The field of a charged conductor can be introduced in much the same way as the gravitational field, or the field of a current or a magnet. Again consider the simplest example. To design the field of a positively charged sphere, we must ask what kind of forces act on a small positive test charge brought near the source of the field, that is, the sphere. The fact that we use a positive and not a negative test charge is merely a convention, which fixes the sense of the lines of force, indicated in the diagram by the arrows drawn on each of them. The model obtained is analogous to that of the gravitational field shown in figure 1, page 370. Because of the similarity between Coulomb's law and Newton's law, the only difference between the two representations is that the arrows point in opposite directions (see figure 6): a consequence, of course, of the fact that two positive charges repel each other while two masses always attract. However, the field of a sphere with a negative charge will be identical with the gravitational field, since the small positive test charge will be attracted by the source of the field, as shown in figure 7, which is identical with the figure 1 just cited.
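The formal similarity invoked here can be made explicit. In modern notation, again a supplement to a text that deliberately avoids formulas, Coulomb's law and Newton's law of gravitation read:

```latex
% Coulomb's law and Newton's law of gravitation side by side
% (modern notation; the original text states the analogy in words only).
F_{\text{el}} = k\,\frac{q_1 q_2}{r^2},
\qquad
F_{\text{grav}} = G\,\frac{m_1 m_2}{r^2}
% Both forces fall off as 1/r^2, which is why the two field pictures
% coincide; only the sense of the arrows differs, since like charges
% repel while masses always attract.
```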
**Fig. 6**
If the electric charges and the magnetic poles are at rest, no action at all manifests itself between them; they neither attract nor repel each other. Expressing the same fact in the field language we can say: an electrostatic field does not influence a magnetostatic field, and vice versa. The words "static field" mean a field that does not change with time. The magnets and charges would rest near one another for eternity if no external forces acted upon them. Electrostatic, magnetostatic and gravitational fields are of different character. They do not mix; each preserves its individuality even in the presence of the others.
**Fig. 7**
Let us return to the electric sphere which was at rest, and suppose that it begins to move under the action of some external force. In the field language this fact is expressed: the field of the electric charge changes with time. But a charged sphere in motion is, as we already know from Rowland's experiment, equivalent to a current. Further, every current is accompanied by a magnetic field. Thus the chain of our argument is:
charge in motion → change of the electric field → current → associated magnetic field.
From the foregoing we conclude that _the change of an electric field produced by the motion of an electric charge is always accompanied by a magnetic field_.

This conclusion is based on Oersted's experiment, but it covers much more. It contains the recognition that the association of an electric field changing in time with a magnetic field is essential for the further development of our subject.

As long as an electric charge is at rest, there is only an electrostatic field; but a magnetic field appears as soon as the charge begins to move. We can say more: the magnetic field created by the motion of a charge will be stronger if the charge is greater and if it moves faster. This, too, is a consequence, already cited, of Rowland's work. Once again using the field language: the faster the electric field changes, the stronger the accompanying magnetic field.

We have tried here to translate familiar facts from the language of electric fluids, constructed according to the old mechanical point of view, into the new language of the field. We shall see later how clear, instructive and far-reaching this new language is.
#### THE TWO PILLARS OF THE FIELD THEORY
"The change of an electric field is accompanied by a magnetic field." If we interchange the words "magnetic" and "electric", this statement becomes: "The change of a magnetic field is accompanied by an electric field." Only experiment can decide whether or not this is true. But the idea of formulating this problem is suggested by the use of the field language.

Just about a hundred years ago, Faraday performed the experiment which led to the great discovery of induced currents.

The demonstration of their production is very simple. We need only a solenoid or some other circuit, a bar magnet, and one of the many types of apparatus for detecting an electric current. To begin with, a bar magnet is kept at rest near a solenoid which forms a closed circuit, as shown in figure 8. No current flows through the solenoid, for no source is present. There is only the magnetostatic field of the bar magnet. Now let us quickly move the magnet toward or away from the solenoid. At this moment a current of very short duration appears in the solenoid. And whenever the position of the magnet is changed, the current reappears, as can be shown with a sufficiently sensitive apparatus. But a current, from the point of view of the field theory, means the existence of an electric field forcing the flow of electricity through the conductor. The current, and therefore the electric field, too, vanishes when the magnet is again at rest.
**Fig. 8**
Imagine for a moment that we do not possess the notion of field, and try to describe the results of Faraday's experiment, qualitatively and quantitatively, with the mechanical concepts that preceded its introduction. The experiment shows that by the motion of a magnetic dipole a new force is created, moving the electric fluids in the conductor. The next question would be: upon what does this force depend? To answer it we should have to investigate how the force depends on the velocity and the shape of the magnet, and on the shape of the circuit. Furthermore, this experiment, interpreted in the mechanical language, gives us no hint at all as to whether an induced current can be excited by the motion of another circuit carrying a current, instead of by the motion of a bar magnet.

It is quite different if we use the concept of the field and trust, once again, the principle that the force is determined exclusively by the field. We see at once that a solenoid through which a current flows will produce the same effect as a bar magnet. Figure 9 shows two solenoids: one, small, through which a current flows, and another, larger, in which the induced current will be detected when the first is moved, as can indeed be confirmed. Moreover, instead of moving the small solenoid, we may create and annihilate the magnetic field by creating and annihilating the current, that is, by closing and opening its circuit. Once again, new facts suggested by the field theory are confirmed by experiment!

Let us take a simple example. Suppose we have a closed conductor and, in its vicinity, a magnetic field. It does not concern us whether the source of this field is an electric circuit or a magnet. Figure 10 shows the circuit in question and the magnetic lines of force. The qualitative and quantitative description of the induction phenomena is very simple in the field language. As seen in the figure, some of the lines of force pass through the surface bounded by the conductor. The lines of force we have to consider are those cutting the surface framed by the conductor. However strong the magnetic field, no induced current will appear so long as the field does not change. But as soon as the number of lines of force passing through the surface in question changes, an induced current immediately appears in the conductor framing that surface. The current is thus determined by the change in the number of lines of force cutting the surface, however that change may be caused. This change in the number of lines of force is the only essential concept for both the qualitative and the quantitative description of the induced current. "The number of lines changes" means that the density of the lines changes, and this, we remember, means that the field strength changes.
**Fig. 9**
The principal points in the chain of our reasoning are, then: change of a magnetic field → induced current → motion of charge → existence of an electric field. Therefore: _a changing magnetic field is accompanied by an electric field_.
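The rule that the current is determined by the change in the number of lines of force cutting the surface has a compact quantitative form which the text deliberately omits. In modern notation, a supplement to the original, it is Faraday's law of induction:

```latex
% Faraday's law of induction (modern notation; a supplement, not part
% of the original text): the electromotive force around the closed
% conductor equals the rate of change of the magnetic flux through
% the surface it bounds.
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B}\cdot d\mathbf{A}
% \Phi_B counts the "number of lines of force" through the surface;
% the minus sign (Lenz's rule) fixes the sense of the induced current.
```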
We have thus found the two most important pillars supporting the theory of the electric and magnetic fields. The first is the connection between a changing electric field and the magnetic field. It arose from Oersted's experiment on the deflection of a magnetic needle by an electric current, and led to the conclusion: _a changing electric field is accompanied by a magnetic field_.
**Fig. 10**
The second pillar is the connection between a changing magnetic field and an induced current, established by Faraday's experiment.

Between them, the two formed the basis for the quantitative formulation of the theory that concerns us.

The electric field accompanying a changing magnetic field appears as something real. We already had to assume the existence of the magnetic field of a current in the absence of a testing pole. Similarly we must claim here that the electric field exists even in the absence of the conductor that served to reveal the induced current. In fact, the two pillars on which our theory rests could be reduced to one, namely, that based on Oersted's experiment. The result of Faraday's experiment can be deduced from it together with the law of conservation of energy. We use the two-pillar structure only because it is clearer and more economical.

One more consequence of the field description should be mentioned. Suppose there is a circuit through which a current flows, with, for instance, a voltaic battery as its source. Let the connection between the circuit and the battery be suddenly broken. With this we have annulled the current. But during the short time the interruption lasts, another intricate process takes place, a process which could again have been foreseen by the field theory. Before the interruption of the current there was a magnetic field in the vicinity of the conductor, which ceased to exist the moment the current vanished. In other words, by interrupting a current we have made a magnetic field disappear. The number of lines of force passing through the surface bounded by the closed conductor therefore changed very rapidly. But such a change, however it may be produced, must create an induced current. Since what really matters is the magnitude of the change, the more rapid the change, the stronger the induced current. This consequence is another test for the theory. The interruption of a current (the opening of the circuit) must be accompanied by the appearance of a strong, momentary induced current. Experiment again confirms the prediction. Anyone who has ever interrupted an electric current has probably noticed the appearance of a spark or an arc. This reveals the appearance of a large potential difference, caused by the rapid change of the magnetic field.
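In modern circuit language, again a supplement where the original argues only in words, the spark at the switch and the energy stored in the magnetic field are usually summarized as:

```latex
% Self-induction and magnetic field energy (modern notation; a
% supplement, not part of the original text). L is the self-inductance
% of the circuit and I the current flowing through it.
\mathcal{E} = -L\,\frac{dI}{dt},
\qquad
E_{\text{magn}} = \tfrac{1}{2}\,L I^2
% Interrupting the current makes dI/dt very large, hence a large
% induced voltage and a spark; the spark carries away the energy
% E_magn stored in the magnetic field, as energy conservation demands.
```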
The same process can be looked at from a different point of view, that of energy. A magnetic field disappeared and a spark appeared. A spark represents energy; therefore the magnetic field, too, represents energy.

To use the concept of magnetic field consistently, we must regard it as a store of energy. Only in this way shall we be able to describe electric and magnetic phenomena in accordance with the law of conservation of energy.

Starting as a helpful representation, the field became more and more real. It helped us to explain known phenomena and led us to the discovery of new facts. The attribution of energy to the field marked an important step in the evolution of physics, one in which the field concept was stressed more and more, while the concepts of substance, so essential to the mechanical interpretation, were set aside.
#### THE REALITY OF THE FIELD
The quantitative, mathematical description of the laws of the field is summed up in what are called Maxwell's equations. The facts cited so far led to the formulation of these equations, but their content is much richer. Their simple form conceals a depth revealed only by careful study.

The formulation of these equations is the most important event in physics since Newton's time, not only because of the wealth of their content, but also because they form a model or pattern for a new type of law.

What is typical of Maxwell's equations, and common to all the other equations of modern physics, can be summed up in one sentence: Maxwell's equations are laws representing _the structure_ of the field.

Why do Maxwell's equations differ in form and character from the equations of classical mechanics? What does it mean that these equations describe the structure of the field? How is it possible that, from the results of Oersted's and Faraday's experiments, we can form a new type of law which proves so important for the further development of physics?

We have already seen, from Oersted's experiment, how a magnetic field coils itself around a changing electric field. We have also seen, from Faraday's experiment, how an electric field coils itself around a changing magnetic field. To sketch some of the characteristic features of Maxwell's theory, let us for the moment fix our attention on one of these experiments; say, on Faraday's. In figure 11 we repeat the drawing of a current induced by a changing magnetic field. We already know that an induced current appears if the number of lines of force passing through the surface bounded by the conductor changes. That is, the current will appear if the magnetic field changes, or if the circuit is deformed or moved: in other words, if the number of lines of force crossing the surface changes, no matter how this change is caused. To take into account all these possibilities and their particular influences would necessarily lead to a very complicated theory. Can we not simplify the problem? Let us try to eliminate from our considerations everything referring to the characteristics of the circuit: its shape, its length, the surface enclosed by the conductor. Let us imagine that the circuit of our last figure shrinks gradually until it becomes a very small circuit enclosing a point in space. Then everything concerning its shape and size becomes irrelevant to our considerations, and in the limit we obtain laws connecting, at a given instant, the changes of the magnetic and electric fields at an arbitrary point in space.
**Fig. 11**
This is one of the principal steps leading to Maxwell's equations. It is again an idealized experiment, performed in imagination by repeating Faraday's experiment with a circuit shrinking gradually to a point. We should really call it half a step rather than a whole one. So far our attention has been fixed on Faraday's experiment alone; but the other pillar of the field theory, based on Oersted's experiment, must be considered as well. In that experiment the magnetic lines of force coil themselves around the current. By shrinking the circular magnetic lines of force to a point, the second half of the step is performed; and the whole step yields a connection between the changes of the magnetic and electric fields at an arbitrary point in space and at an arbitrary instant of time.

But still another essential step is necessary. According to Faraday's experiment, there must be a conductor to reveal the existence of the electric field, just as the presence of a magnetic pole or needle is indispensable for testing the existence of the magnetic field in Oersted's experiment. Maxwell's new theoretical conception goes beyond the results of these experiments. The electric and magnetic field, or, in short, the _electromagnetic_ field, is, in Maxwell's theory, something real. The electric field is produced by a changing magnetic field quite independently of the existence of a conductor, and a magnetic field is produced by a changing electric field whether or not there is a magnetic pole.

To sum up, the two essential steps leading to the formulation of Maxwell's laws are these. The first: in Oersted's and Rowland's experiments, the lines of force of the magnetic field coiling themselves around the current and the changing electric field are shrunk to a point; and in Faraday's experiment, the circular lines of the electric field coiling themselves around the changing magnetic field are likewise shrunk to a point. The second step consists in the conception of the field as something real; the electromagnetic field, once created, exists, acts, and changes according to Maxwell's laws. In conclusion, we repeat: Maxwell's equations describe the structure of the electromagnetic field. Their validity extends over all of space, in contrast to the laws of the mechanical type, which hold only at those points where matter, or electric or magnetic charges, are present.
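The "structure laws" referred to here can be written down explicitly. In modern SI differential form, a supplement the original deliberately omits, Maxwell's equations read:

```latex
% Maxwell's equations in modern SI differential form (a supplement;
% the original text describes them only in words).
\begin{aligned}
\nabla\cdot\mathbf{E} &= \rho/\varepsilon_0, &
\nabla\times\mathbf{E} &= -\,\frac{\partial\mathbf{B}}{\partial t},\\[2pt]
\nabla\cdot\mathbf{B} &= 0, &
\nabla\times\mathbf{B} &= \mu_0\mathbf{J}
  + \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}.
\end{aligned}
% The two curl equations are the "two pillars": Faraday's law
% (a changing B is accompanied by E) and Oersted's/Maxwell's law
% (a current or a changing E is accompanied by B). They hold at every
% point of space and every instant, which is what "structure laws" means.
```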
We remember that in mechanics, knowing the position and velocity of a particle at one instant, and knowing the acting forces, the whole future path of the particle can be calculated in advance. In Maxwell's theory, if we know the field at one instant only, we can deduce from the equations of the theory how the whole field will change in space and time. Maxwell's equations enable us to follow the history of the field, just as the equations of mechanics enable us to follow the history of material particles.

But there is another essential difference between the laws of mechanics and Maxwell's laws. A comparison of Newton's law of gravitation with Maxwell's field laws will bring out some of the distinctive characters of the latter.

With the help of the laws of mechanics, and taking into account the force acting between the sun and the earth, we can deduce the form of the earth's motion around the sun. The laws of mechanics connect the motion of the earth with the action of the far-off sun. The sun and the earth, though so far apart, are both actors in the play of forces.

In Maxwell's theory there are no material actors. The mathematical equations of this theory express the laws governing the electromagnetic field. They do not, as Newton's laws do, connect two distant events; they do not recognize "action at a distance". The field "here" and "now" depends on the field in the immediate neighborhood at a time just past. The equations allow us to predict what will happen a little farther on in space, an instant later, if we know what happens "here" and "now". They allow us to increase our knowledge of the field step by step, connecting, by a great number of small steps, distant events occurring at different times. In Newton's theory, on the contrary, the connection between distant events is made by a few large leaps. The results of Faraday's and Oersted's experiments can be deduced from Maxwell's equations, but only by the summation of small steps or effects along the conductor, each of which is governed by the electromagnetic laws.

A more thorough mathematical study of Maxwell's equations shows that new and really unexpected conclusions can be drawn from them; these conclusions, reached by a whole chain of logical argument, are of a quantitative character and allow the whole theory to be submitted to a decisive test.

Let us again imagine an idealized experiment. A small sphere with an electric charge is forced, by some external influence, to oscillate rapidly and rhythmically, like a pendulum. With the knowledge we already have of the changes of the field, what will happen, and how shall we describe it in the field language? The oscillation of the charge produces a changing electric field. This is always accompanied by a changing magnetic field. If a conductor forming a closed circuit is placed in the vicinity, the changing magnetic field will induce an electric current in it. All this is merely a repetition of known facts, but the study of Maxwell's equations gives a much deeper insight into the problem of the oscillating electric charge. By mathematical deduction from Maxwell's equations we can determine the character of the field surrounding an oscillating charge, its structure and its change with time. The outcome of such deduction is the _electromagnetic_ wave. The energy radiated by the oscillating charge travels through space with a definite velocity; but a transference of energy, that is, the motion of a state through a medium, is characteristic of all wave phenomena.

We have already considered different types of waves: the longitudinal wave, caused by a pulsating sphere, consisting in the propagation of changes of density through the medium; and the transverse wave, spreading through a jelly-like medium as a deformation caused by the rotation of a sphere within it. What kind of changes are spreading in the case of the electromagnetic wave? Just the changes of an electromagnetic field! Every change of an electric field produces a magnetic field; every change of this magnetic field produces an electric field; and so on. As the field represents energy, all these changes, spreading out in space with a definite velocity, produce a wave. The electric and magnetic lines of force always lie, as deduced from the theory, in planes perpendicular to the direction of propagation. The wave produced is, therefore, transverse. The original features of the picture of the field formed from Oersted's and Faraday's experiments are still preserved, but we now see that the picture has a deeper meaning.

The electromagnetic wave spreads in empty space. This, too, is a consequence of the theory. If the oscillating charge suddenly ceases to move, its field becomes electrostatic. But the series of waves created by the oscillation continues to spread. The waves lead an independent existence, and the history of their changes can be followed just as that of any material object.

We understand that our picture of an electromagnetic wave, spreading with a certain velocity in space and changing in time, follows from Maxwell's equations, for these describe the structure of the electromagnetic field at every point in space and at every instant.

There is another very important question. With what speed does the electromagnetic wave spread in empty space? The theory, with the help of the data of some simple experiments having nothing to do with wave propagation, gives a clear answer: _the velocity of an electromagnetic wave is equal to the velocity of light_.
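The "simple experiments having nothing to do with wave propagation" are measurements of the electric and magnetic constants. In modern SI notation, a supplement to the original, the predicted wave speed is:

```latex
% Speed of electromagnetic waves from the electric and magnetic
% constants alone (modern SI notation; a supplement to the text).
c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx 3\times 10^{8}\ \mathrm{m/s}
% \varepsilon_0 and \mu_0 are measured in purely electric and magnetic
% experiments, yet their combination yields the measured speed of
% light -- the coincidence suggesting that light itself is an
% electromagnetic wave.
```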
Oersted's and Faraday's experiments formed the basis on which Maxwell's laws were built. All the results obtained so far come from a careful study of these laws, expressed in the field language. The theoretical discovery of electromagnetic waves spreading with the speed of light is one of the greatest achievements in the history of science.

Experiment has confirmed the prediction of theory. About a hundred years ago, Hertz proved for the first time the existence of electromagnetic waves and confirmed experimentally that their velocity is equal to that of light. Nowadays, with the spread of radio, millions of people demonstrate the emission and reception of electromagnetic waves. Their apparatus, detecting the presence of waves thousands of kilometers from the transmitting stations, is far more complicated than that used by Hertz, which revealed the existence of waves only a few meters from the source.
#### FIELD AND ETHER
La onda electromagnética es transversal y se propaga con la velocidad de la luz en el vacío. El hecho de la igualdad de esas velocidades sugiere la existencia de una estrecha relación entre los fenómenos electromagnéticos y la óptica.
Cuando tuvimos que elegir entre la teoría corpuscular y la teoría ondulatoria nos decidimos en favor de esta última. La difracción de la luz fue el argumento más poderoso para tomar esta decisión. No contradecimos ninguna de las explicaciones de los hechos ópticos suponiendo que _la onda luminosa es una onda electromagnética._ Por el contrario, se pueden deducir aún otras conclusiones adoptando esta hipótesis. Si esto es así, debe existir cierta conexión entre las propiedades ópticas y eléctricas de la materia, que puede ser deducida de la teoría. El hecho de que conclusiones de este tipo hayan podido realmente ser deducidas y que hayan sido confirmadas por la experiencia es una razón de peso en favor de la teoría electromagnética de la luz.
Esta consecuencia importante se debe a la teoría del campo. Dos ramas de la ciencia aparentemente sin relación son abarcadas por una misma teoría. Las mismas ecuaciones de Maxwell contienen la descripción de la inducción electromagnética y de la refracción óptica. Si el objeto de la ciencia es explicar todos los fenómenos acaecidos o que puedan ocurrir con la ayuda de una teoría, entonces la fusión de la óptica y de la electricidad es indudablemente un gran paso hacia delante. Desde el punto de vista físico, la única diferencia entre una onda electromagnética común y una onda luminosa está en su longitud de onda: ésta es muy pequeña para las ondas luminosas y grande para las ondas electromagnéticas ordinarias.
The classical mechanical view attempted to reduce all events in nature to forces acting between material particles. Upon this mechanical view was based the first naive theory of the electric fluids. The field did not exist for the physicist of the early nineteenth century. For him only substance and its changes were real. He tried to describe the action of two electric charges only by concepts referring directly to the two charges. At first the field concept was no more than a means of facilitating the explanation of electrical phenomena from the mechanical point of view. In the new field language it is the description of the field between the two charges, and not the charges themselves, which is essential for an understanding of their action. The recognition of the new concepts grew steadily, until the field came to take precedence over substance. It was realized that something of great importance had happened in physics. A new reality was created, a new concept for which there was no place in the mechanical description. Slowly and by a struggle the field concept established for itself a leading place in physics, and it has remained one of its basic concepts. The electromagnetic field is, for the modern physicist, as real as the chair on which he sits.
It would be wrong to consider that the new field view freed science from the errors of the old theory of electric fluids, or that the new theory destroys the achievements of the old. The new theory shows the merits as well as the limitations of the old, and allows us to regain the old concepts from a higher level. This is true not only for the theories of electric fluids and of the field, but for all changes in physical theories, however revolutionary they may seem. In our case we still find the concept of electric charge in Maxwell's theory, although the charge is understood there only as the source of the electric field. Coulomb's law is still valid and is contained in Maxwell's equations, from which it can be deduced as one of many consequences. We can still apply the old theory whenever facts within the region of its validity are investigated, without forgetting that these phenomena are also interpreted by the new theory.
To use a comparison, we could not say that creating a new theory is like pulling down an old barn and erecting a skyscraper in its place. It is rather like climbing a mountain, gaining new and wider views, discovering unexpected paths between our starting point and its rich surroundings. But the point from which we started out still exists and can be seen, although it appears smaller, forming a tiny part of our broad view gained by overcoming the obstacles on our adventurous way up.
It was, indeed, a long time before the full content of Maxwell's theory was recognized. At first the field was considered as something which might later be interpreted mechanically, with the help of the ether. With time it was seen that this was not possible; the achievements of the field theory had become too important to be abandoned for the mechanical dogma. On the other hand, the problem of devising a mechanical model of the ether seemed less and less promising, since the various attempts at a solution demanded forced and artificial assumptions.
Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement. We may still use the word ether, but only to express this property of space. The word ether has changed its meaning many times in the development of science; it no longer stands for a medium built up of particles. Its story, by no means finished, is continued by the theory of relativity.
#### THE MECHANICAL SCAFFOLD
At this point of our story we must return to Galileo's principle of inertia.
Every body continues in its state of rest, or of uniform motion in a straight line, unless external forces acting upon it compel it to change that state.
Once the idea of inertia is understood, one may ask: what more can be said about it? Although this problem has already been discussed thoroughly, it is by no means exhausted.
Imagine a serious scientist who believes that the principle of inertia can be tested by actual experiment. To this end he pushes small spheres along a horizontal table, trying to eliminate friction as far as possible, and notices that the motion becomes more uniform as the table and the spheres are made smoother. Just as he is about to proclaim the principle of inertia, somebody decides to play a practical joke on him.
Our physicist works in a room without windows and with no communication whatever with the outside world. The joker installs a mechanism which can rotate the whole room about an axis passing through its center. As soon as the rotation begins, the physicist has new and unexpected experiences. The spheres which had been moving uniformly suddenly begin to move away from the center of the room. The physicist himself feels a strange force pushing him toward the wall; that is, he experiences the same sensation we have when rounding a curve quickly in a train or a car, or when riding a merry-go-round. All his previous results collapse completely.
Our physicist would have to discard, along with the principle of inertia, all the laws of mechanics. The principle of inertia was his starting point; if it does not hold, neither do all his further conclusions. An observer confined for his whole life to the rotating room, and obliged to perform all his experiments there, would arrive at laws of mechanics different from ours. If, on the other hand, he enters the room with a profound knowledge of and a firm belief in the principles of physics, his explanation of the apparent breakdown of the laws of mechanics would rest on the assumption that the room rotates. With suitable mechanical experiments he could even determine how it rotates.
Why do we take so much interest in the observer in his rotating laboratory? Simply because we, on our earth, are in a certain sense in the same position. Since the time of Copernicus we have known that the earth rotates on its axis while moving around the sun. Even though this simple idea, so clear to everyone, has not been left untouched by the progress of science, let us leave it for the moment and accept Copernicus's point of view. If our observer in the rotating room could not confirm the laws of mechanics, the same should happen to us on the earth; but the rotation of the earth is comparatively slow, so the effect is not very pronounced. Nevertheless, there are several facts which indicate a small deviation from the laws of mechanics, and the mutual consistency of these discrepancies can be regarded precisely as proof of the rotation of the earth.
Unfortunately, we cannot place ourselves between the earth and the sun to prove there the exact validity of the principle of inertia and to get a view of the rotating earth. This can be done only in imagination. All our experiments must be performed on the earth. This fact is often expressed more scientifically by saying: _the earth is our coordinate system_.
To see the meaning of these words more clearly, let us take a simple example. From the laws of falling bodies we can predict the position, at any moment, of a stone thrown from a tower, and confirm this prediction experimentally. If a measuring scale is placed beside the tower, we can, in accordance with the preceding statement, foretell with which mark on the scale the falling body will coincide at any moment of its fall. The tower and the scale must obviously not be made of rubber or of any other material which would change during the experiment. In fact, all we need in principle for the experiment are a perfectly rigid scale and a good clock. If we have these, we can ignore not only the architecture of the tower but even its presence. The foregoing conditions are trivial and are not usually specified in descriptions of such experiments. But this analysis shows how many hidden assumptions underlie even our simplest statements. In the present example we assumed the existence of a rigid bar and of an ideal clock, without which it would be impossible to check Galileo's law of falling bodies. With this simple but fundamental physical apparatus, a bar and a clock, we can confirm this mechanical law with a certain degree of accuracy. Carefully performed, this test reveals discrepancies between theory and experiment due to the rotation of the earth or, in other words, to the fact that the laws of mechanics, as formulated here, are not strictly valid in a coordinate system rigidly connected with the earth.
In all mechanical experiments we must determine the positions of material points at some definite instant, just as in the above experiment with a falling body. The position must always be determined with respect to something, which in the previous case was the tower and the scale. That is, to be able to determine the positions of bodies we must have what is called a _frame of reference,_ a kind of network or mechanical scaffold, with respect to which the distances are measured. In describing the positions of objects and people in a city, the streets and avenues form such a network or frame of reference. Up to now we have not bothered to describe the frame of reference when stating the laws of mechanics, because we are fortunate enough that on the earth there is no difficulty in finding a suitable frame of reference in any particular case. Such a network or scaffold, built of rigid and unchangeable material, to which we refer all our observations, is called a _coordinate system_. Since this expression will be used very often, we shall abbreviate it as _SC_.
From all that has just been said it follows that all the physical statements we have made so far were incomplete. We failed to notice that all observations must be made in a certain _SC_ and, instead of describing its structure, we ignored its existence. For example, when we wrote "a body in uniform motion" we should really have written "a body in uniform motion relative to a chosen _SC_...". The case of the rotating room taught us that the results of mechanical experiments may depend on the _SC_ chosen.
The same laws of mechanics cannot be valid for two coordinate systems rotating with respect to each other. For example: if the surface of the water in a pool which defines one of the _SC_ is horizontal, then in the second _SC_, rotating relative to the first, the surface of the water takes the curved form characteristic of a liquid made to rotate about an axis.
In formulating the principal laws of mechanics we omitted one important point. We did not state for which _SC_ they are valid. For this reason the whole of classical mechanics hangs in mid-air, since we do not know to which scaffold it refers. Let us, however, pass over this difficulty for the moment. We shall make the slightly incorrect assumption that the laws of mechanics are valid in every _SC_ rigidly connected with the earth. We do this in order to fix an _SC_ and so remove the ambiguity just mentioned. Although our statement that the earth is a suitable frame of reference is not wholly correct, we shall accept it for the present.
We assume, then, the existence of one _SC_ for which the laws of mechanics are valid. Is it the only one? Suppose we have an _SC_ such as a train, a ship, or an airplane moving relative to the earth. Will the laws of mechanics be valid for these new _SC_? We know definitely that they are not always valid, as for instance when the train takes a curve, the ship is tossed by a storm, or the airplane goes into a spin. Let us consider an _SC_ moving uniformly relative to the "good" _SC_, that is, the one in which the laws of mechanics are valid. For instance, an ideal train or ship moving with delightful smoothness along a straight line and with constant velocity. We know from everyday experience that both systems are "good", that physical experiments performed in a train or on a ship moving in this way give exactly the same results as if performed on the earth. But unexpected things happen if the train stops or accelerates abruptly, or if the sea is rough. In the train the suitcases fall off the racks; on the ship the tables and chairs slide from their places and the passengers become seasick. From the physical point of view this simply means that the laws of mechanics cannot be applied to these _SC_, that they are "bad" _SC_.
This result can be expressed by the so-called _Galilean relativity principle,_ which states: _if the laws of mechanics are valid in one SC, then they are valid in any SC moving uniformly relative to the first_.
If we have two _SC_ moving non-uniformly relative to each other, then the laws of mechanics cannot be valid in both.
"Good" coordinate systems, that is, as we said, those for which the laws of mechanics are valid, are called _inertial systems._ The question of whether an inertial system exists at all we leave aside for the moment. But if we admit the existence of one such system, then there is an infinite number of them. Indeed, every _SC_ moving uniformly relative to the first is also an inertial _SC_.
Let us now consider the case of two _SC_ moving uniformly relative to each other, with a known velocity and starting from a given common position. Whoever prefers concrete pictures may safely think of a ship or a train moving relative to the earth. The laws of mechanics can be confirmed experimentally with the same degree of accuracy on the earth or in the train or ship in uniform motion. The difficulties appear when an observer in one _SC_ begins to analyze the observations which another observer, in the second _SC_, has made of the same event. Each of them would like to translate the other's observations into his own language. Take again a simple example: the same motion of a particle is observed from two _SC_, the earth and a uniformly moving train. Both systems are inertial. Is it sufficient to know what is observed in one _SC_ in order to deduce what is observed in the other, if the relative positions and velocities of the two _SC_ are given at some instant? Yes. Now, since both _SC_ are equivalent and equally suited for the description of natural events, it is essential to know how to pass from one _SC_ to the other. Let us consider this problem somewhat more abstractly, without trains or ships. To simplify it we shall investigate only the case of rectilinear motion. We have, then, a rigid bar with a metric scale and a good clock. The rigid bar represents, in the case of rectilinear motion, an _SC_ in the same way as the metric scale on the tower did in Galileo's experiment. It is always simpler and better to think of an _SC_ as a rigid bar in the case of rectilinear motion, and as a rigid scaffold or network built of parallel and perpendicular rods in the case of arbitrary motion in space, disregarding walls, towers, streets, and the like.
Suppose we have two _SC_, that is, two rigid bars, which we draw one above the other and call the "upper" and the "lower" _SC_ respectively. We assume that the two _SC_ move with a certain velocity relative to each other, so that one slides along the other. It is also convenient to assume that both bars extend indefinitely in one direction. One clock is sufficient for the two _SC_, for the flow of time is the same for both. When we begin our observations, the starting points of the two bars coincide. The position of a material point is then obviously characterized by the same number in both _SC_: the material point coincides with the same reading on the scale of either bar, thus giving us one number, and only one, which determines its position. But if the bars move in the way described, the numbers corresponding to the positions of the point will be different after some time. Consider a material point at rest on the upper bar. The number determining its position in the upper _SC_ will not change with time; but the number corresponding to its position relative to the lower bar will change (see figure 12). Hereafter, for brevity, we shall use the expression _the coordinate of a point_ for "the number corresponding to the position of the point relative to one of the bars".
Figure 12 makes clear the following statement which, although it may seem obscure at first, is nevertheless correct and expresses something very simple: the coordinate of a point in the lower _SC_ is equal to its coordinate in the upper _SC_ plus the coordinate of the origin of the upper _SC_ relative to the lower _SC_. The important thing is that we can always calculate the position of a particle in one _SC_ if we know its position in the other. For this purpose we must know the relative positions of the two _SC_ in question at every moment. Although all this sounds very learned, it is really quite simple and would hardly deserve such a detailed discussion were it not that it will be useful to us later.
It is worth noting the difference between determining the position of a point and determining the time of an event. Every observer has his own bar which forms his own _SC_, but one clock suffices for all, since time is something "absolute" which flows in the same way for all observers in all _SC_.
**Fig. 12**
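The coordinate relation illustrated by figure 12 can be sketched in a few lines of code. This is a minimal illustration; the function and variable names (`to_lower`, `origin_at`) are mine, not the book's:

```python
# Classical transformation of a coordinate between the two bars ("SC").
# origin_upper: position of the upper SC's origin measured in the lower SC.
def to_lower(x_upper, origin_upper):
    """Coordinate of the same point as read off the lower bar."""
    return x_upper + origin_upper

# If the upper bar slides with constant velocity v, its origin sits at
# v * t in the lower SC once the bars have separated for a time t.
def origin_at(v, t):
    return v * t

# A point at rest on the upper bar (x_upper = 2) with relative velocity 3:
# its upper coordinate never changes, but its lower coordinate does.
positions = [to_lower(2, origin_at(3, t)) for t in range(4)]
print(positions)  # [2, 5, 8, 11]
```

The point's upper coordinate stays fixed while its lower coordinate grows with time, exactly as the text describes.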
Take another example. Suppose a man walks with a velocity of 5 km per hour along the deck of a liner. This is his velocity relative to the ship or, in other words, relative to an _SC_ rigidly connected with the ship. If the velocity of the ship is 45 km per hour relative to the shore, and if the velocities of the passenger and of the ship have the same direction and sense, then the velocity of the walker will be 50 km per hour relative to an observer on the shore. We can formulate this fact in a more general and abstract way: the velocity of a moving material point relative to the lower _SC_ is equal to its velocity relative to the upper _SC_ plus or minus the velocity of the upper system relative to the lower, according to whether the velocities have the same sense or not (see figure 13). We can therefore always transform positions and velocities of a point from one _SC_ to the other, if we know the relative velocities of the two systems. Positions, or coordinates, and velocities are examples of quantities which, in passing from one _SC_ to another, change according to certain _transformation laws,_ which in this case are very simple. There exist quantities, however, which remain the same in both _SC_ and for which no transformation laws are needed. Take as an example two fixed points on the upper bar and consider the distance between them. This distance is the difference of the coordinates of the two points. To find the positions of the two points relative to the two different _SC_ we have to use the transformation laws. But in calculating the difference of the two positions, the contributions due to the different _SC_ cancel each other, as is evident from figure 14.
**Fig. 13**
The distance between two points is therefore invariant, that is, independent of the _SC_ chosen.
Another example of a quantity independent of the choice of _SC_ is the change of velocity, a concept familiar from the study of mechanics. Again suppose a material point moving along a straight line is observed from two _SC._ The change in its velocity is the same for both systems, since in calculating the difference between the velocities of the moving body before and after the change, the constant difference between the velocities of the two _SC_ drops out. Thus the change of velocity is also an invariant; on the condition, of course, that the relative motion of our two _SC_ is uniform; otherwise the change of velocity would obviously come out different in each of the two _SC_.
**Fig. 14**
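Both invariants just discussed — the distance between two points and the change of velocity — can be checked numerically. A small sketch under the same classical transformation; all names here are illustrative, not from the text:

```python
def to_lower(x_upper, offset):
    # classical transformation: add the offset of the upper SC's origin
    return x_upper + offset

offset = 7.0  # displacement of the upper SC relative to the lower, at some instant

# Distance between two fixed points: the offsets cancel in the difference.
a_up, b_up = 1.0, 4.0
a_lo, b_lo = to_lower(a_up, offset), to_lower(b_up, offset)
print(b_up - a_up, b_lo - a_lo)  # same distance in both SC: 3.0 3.0

# Change of velocity: with uniform relative motion (constant v_rel),
# the constant difference between the SC velocities drops out.
v_rel = 3.0
v_before_up, v_after_up = 2.0, 6.0
v_before_lo, v_after_lo = v_before_up + v_rel, v_after_up + v_rel
print(v_after_up - v_before_up, v_after_lo - v_before_lo)  # 4.0 4.0
```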
Finally, consider two material points between which forces act that depend only on the distance separating them. In the case of rectilinear motion the distance, and therefore the force as well, is invariant. Newton's law, connecting the force with the change of velocity, will therefore also be valid in both _SC_, by what was shown in the preceding paragraph. We thus reach a conclusion, confirmed by everyday experience, namely: if the laws of mechanics are valid in one _SC_, then they are valid in all _SC_ moving uniformly relative to the first. Although our reasoning has been based on simple cases of rectilinear motion, the conclusions are really general and can be summarized as follows:
1. We know of no rule for finding an inertial system. Given one, however, it is easy to find an infinite number of them, since all _SC_ moving uniformly relative to the first are also inertial systems.
2. The time corresponding to an event is the same in all _SC_, but the coordinates and velocities are different and change according to the transformation laws.
3. Although the coordinates and velocity change in passing from one _SC_ to another, the force and the change of velocity, and therefore the laws of mechanics, are invariant with respect to these transformation laws.
The transformation laws formulated here for coordinates and velocities will be called the transformation laws of classical mechanics or, more briefly, the _classical transformation_.
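The classical transformation just named can be written out in symbols. With $x$ and $u$ the coordinate and velocity of a point in the upper _SC_, $x'$ and $u'$ the same quantities in the lower _SC_, and $v$ the constant relative velocity of the two systems along the common line (the notation is mine; the text states these relations only in words):

```latex
\begin{aligned}
x' &= x + vt   &&\text{(coordinates differ by the relative displacement)}\\
u' &= u + v    &&\text{(velocities add, or subtract for opposite senses)}\\
u'_2 - u'_1 &= u_2 - u_1 &&\text{(the change of velocity is invariant)}\\
t' &= t        &&\text{(one clock suffices: time is absolute)}
\end{aligned}
```

The last two lines express points 2 and 3 of the summary above: time and the change of velocity need no transformation law at all.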
#### ETHER AND MOTION
The Galilean relativity principle, valid for mechanical phenomena, states that the same laws of mechanics hold in all inertial systems moving relative to one another. Now, does this principle hold for non-mechanical phenomena, especially for those in which the field concept proved so important? All the problems gathered around this question bring us at once to the starting point of the theory of relativity.
Recall that the velocity of light _in vacuo_ or, in other words, in the ether, is 300,000 km per second, and that light is an electromagnetic wave propagated through the ether. The electromagnetic field carries energy with it which, once emitted by its source, leads an independent existence. For the time being we shall continue to assume that the ether is a medium through which electromagnetic waves, and therefore light waves as well, are propagated, even though we are fully aware of all the difficulties involved in its mechanical structure.
Imagine we are sitting in a closed room, so isolated from the external world that air can neither enter nor escape. If in such a room we speak a word, this means, from the physical point of view, that we have created sound waves which spread in all directions with the velocity of sound in air. If there were no air or other material medium in the room, it would be impossible for us to hear the spoken word. Experiment has shown that the velocity of sound in air is the same in all directions if there is no wind and the air is at rest in the chosen _SC_.
Now imagine that our room moves uniformly through space. A man outside sees, through the walls, which we assume transparent, everything that goes on inside the room. From the measurements made by the inside observer, the outside observer can deduce the velocity of sound relative to his own _SC_, relative to which the room is in motion.
Here again is the old problem, already discussed, of determining the velocity in one _SC_ if it is known in another.
The observer in the room claims: the velocity of sound is, for me, the same in all directions.
The outside observer claims: the velocity of sound, spreading in the moving room and determined in my _SC_, is not the same in all directions. It is greater than the standard velocity of sound in the direction of the motion of the room, and smaller in the opposite direction.
These conclusions follow from the classical transformation and can be confirmed by experiment. The room carries with it the material medium, the air, through which the sound waves are propagated, and the velocity of sound will therefore be different for the two observers.
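The outside observer's claim is just the classical velocity transformation applied to sound. A numerical sketch, with illustrative figures of my own choosing (the text itself quotes no numbers here):

```python
# Velocity of sound as judged by the outside observer, via the
# classical transformation: add or subtract the room's velocity.
C_SOUND = 340.0   # m/s, speed of sound relative to the air inside the room
v_room = 30.0     # m/s, speed of the room (and the air it carries) past the observer

forward = C_SOUND + v_room    # in the direction of the room's motion
backward = C_SOUND - v_room   # against the room's motion
print(forward, backward)  # 370.0 310.0
```

The inside observer measures 340 m/s in every direction; the outside observer measures 370 m/s one way and 310 m/s the other.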
We can draw some further conclusions from the theory which regards sound as a wave propagated through a material medium. One way, though by no means the simplest, of not hearing what someone says to us would be to run away with a velocity greater than that of sound, relative to the air surrounding the speaker; the sound waves would then never be able to reach our ears. On the other hand, if we were anxious to catch an important word, spoken earlier and never to be repeated, we should have to move with a velocity greater than that of sound in order to overtake the corresponding wave. There is nothing irrational in either of these examples, except that in both cases we should have to move with a speed of about 350 meters per second; we may well imagine that future technical development will make such speeds possible. A shell fired from a gun actually moves with an initial speed greater than that of sound, and a man placed on such a shell would never hear the report of the gun.
All these examples are of a purely mechanical character, but we can now pose an important question: could we repeat, for a light wave, what we have just said about sound? Do the Galilean relativity principle and the classical transformation apply to optical and electrical phenomena as well as to mechanical ones? It would be risky to answer yes or no without first going deeper into their meaning.
In the case of the sound wave propagated in a room moving uniformly relative to the outside observer, it is essential to stress the following points:
1. The moving room carries the air in which the sound wave is propagated.
2. The velocities observed in two _SC_ moving uniformly relative to each other are connected by the classical transformation.
The corresponding problem for light must be formulated in a slightly different way. The observers in the room are no longer talking, but are sending light signals in all directions. Suppose, further, that the sources emitting the light waves are permanently at rest inside the room. In this case the light waves move through the ether exactly as the sound waves propagated through the air.
Is the ether carried along with the room as the air was? Since we have no mechanical picture of the ether, it is extremely difficult to answer this question. If the room is closed, the air inside is forced to move with it. There is obviously no sense in thinking of the ether in this way, since we assume that all matter is immersed in it and that it penetrates everywhere. No doors are closed to the ether. In this case, a moving room means only a moving _SC_ to which the light source is rigidly connected. It is, however, not beyond imagination that the room moving with its light source carries the ether along with it, just as the sound source and the air were carried by the closed room. But we can equally well imagine the opposite: that the room travels through the ether as a ship moves over a perfectly smooth sea, carrying no part of the medium through which it moves. If the source and the room drag the ether, the analogy with the sound waves would be evident and conclusions similar to those obtained in the previous examples could be drawn. If the room and the light source do not drag the ether, there is no analogy with the sound waves, and the conclusions reached for sound would not hold for light waves. These are the two limiting cases; we could imagine a more complicated possibility, in which the ether is only partially carried by the room and the moving light source. But there is no reason to discuss the more complicated assumptions before finding out which of the two simpler limiting cases experiment favors.
We shall begin with the first picture and assume, therefore, that the ether is carried along by the moving room to which the light source is rigidly connected. If we believe in the validity of the simple transformation principle for the velocities of sound waves, we can apply our earlier conclusions to light waves as well. Indeed, there seems to be no apparent reason for doubting the mechanical transformation law which states that velocities must be added in certain cases and subtracted in others. For the moment, then, we shall admit both assumptions: that the ether is carried by the room and its source, and that the classical transformation is valid.
If we turn on a light whose source is rigidly connected with our room, then the velocity of the light signal will have the well-known experimental value of 300,000 km per second. But an outside observer will notice the motion of the room and therefore that of the source; since the ether is carried along, his conclusion must be: "The velocity of light in my outside _SC_ is different in different directions. It is greater than the standard velocity of light in the direction of the motion of the room and smaller in the opposite direction." In conclusion: if the ether is carried with the moving room and if the laws of mechanics are valid, then the velocity of light must depend on the velocity of the light source. Light reaching our eyes from a moving source would have a greater or smaller velocity according to whether the source approaches us or recedes from us.
If our velocity were greater than that of light, we could run away from a light signal; we could even see past events by overtaking light waves sent out earlier. We should catch them in the reverse order to that in which they were emitted, and the train of events on our earth would appear like a film run backwards, beginning with a happy ending... All these conclusions follow from the assumption that the moving _CS_ drags the ether with it and that the mechanical transformation laws are valid. If this were so, the analogy between light and sound would be perfect.
Unfortunately, there is no indication whatever in favour of these conclusions. On the contrary, they are contradicted by all the observations made with the intention of verifying them. There is not the slightest doubt as to the clarity of this verdict, even though it is obtained by rather indirect experiments, owing to the great technical difficulties associated with the enormous velocity of _light. The velocity of light is always the same in all CS, independent of whether or not the emitting source moves, or how it moves_.
We shall not go into a detailed description of all the investigations from which this important conclusion is drawn. We can, however, use some simple arguments which, though they do not prove the statement, make it understandable.
In our planetary system, the earth and the other planets move around the sun. We do not know whether other planetary systems similar to ours exist. There are, however, very many double-star systems, consisting of two stars moving around a common point called their centre of gravity. Observation of the motion of these double stars reveals the validity of Newton's law of gravitation. Now suppose that the velocity of light depended on the velocity of the emitting body. Then the message, that is, the ray of light coming from the star, would travel more quickly or more slowly according to the velocity of the star at the moment of emission. In that case the whole motion of the system would appear muddled, and it would be impossible to confirm, for distant double stars, the validity of the same law of gravitation that rules our planetary system.
Let us consider another example of an experiment based on a very simple idea. Imagine a wheel spinning very rapidly. According to our assumption, the ether is dragged along by the motion of the wheel and takes part in it. A light wave passing near the wheel would therefore have a different velocity according to whether the wheel was at rest or in motion. In other words, the velocity of light in ether at rest should differ from that in ether set into rapid rotation by the wheel, just as the velocity of sound differs between calm and windy days. But no such difference has ever been detected! From whatever angle we approach the subject, and whatever crucial experiment we devise, the verdict is always against the assumption of an ether dragged along by motion. Thus, the results of our considerations, supported by more detailed and technical arguments, are:
The velocity of light does not depend on the motion of the emitting source.
It must not be assumed that moving bodies carry the ether along with them.
We are therefore forced to give up the analogy between light waves and sound waves and to turn to the second possibility: that all matter moves through the ether, which takes no part in its motion. This means assuming the existence of an ether-sea with all _CS_ either resting in it or moving relative to it. Let us leave aside, for the moment, the question of whether experiment confirms or refutes this theory. It will be better first to become familiar with the meaning of this new assumption and with the conclusions that follow from it.
On this view, we can imagine a _CS_ at rest relative to the ether-sea. In mechanics, none of the many _CS_ in uniform motion could be distinguished from the others. All such _CS_ were equally "good" or "bad". Given two _CS_ moving uniformly relative to each other, it is meaningless in mechanics to ask which of them is in motion and which at rest. Only relative uniform motion can be observed. We cannot speak of absolute uniform motion, because of Galileo's relativity principle. What is meant by saying that _absolute_ motion, and not merely _relative_ motion, exists? Simply that there is one _CS_ in which some of the laws of nature are different from those in all other _CS._ It also means that every observer can decide whether his _CS_ is at rest or in motion by comparing the laws valid for him with those valid in the one system which holds the absolute privilege of serving as the standard _CS._ Here is a state of affairs quite different from classical mechanics, where, as a consequence of Galileo's law of inertia, absolute uniform motion is meaningless.
What conclusions can be drawn in the domain of field phenomena if motion through the ether is assumed? It would mean the existence of one _CS,_ distinguished from all others, at rest relative to the ether-sea. It is clear that some laws of nature must be different in this _CS;_ otherwise the phrase "motion through the ether" would be meaningless. If Galileo's relativity principle is valid, then motion through the ether makes no sense at all. It is impossible, as we see, to reconcile these two ideas. If, however, there exists one special _CS_ fixed in the ether, then to speak of "absolute motion" or "absolute rest" has a definite meaning.
We really have no choice. We tried to save Galileo's relativity principle by assuming that moving physical systems drag the ether along with them; but this led to a contradiction with experiment. The only way out is to abandon that principle and to try the assumption that all bodies move through a calm ether-sea.
This accepted, our first step is to put to the test of experiment certain conclusions which contradict Galileo's relativity principle but support the view of motion through the ether. Such experiments are easy enough to imagine, but very difficult to perform. As we are concerned here chiefly with ideas, we shall leave those technical difficulties aside.
Let us return, then, to our moving room and its two observers, one inside and one outside. The outside observer represents the _CS_ fixed in the ether, and thus the special _CS_ in which the velocity of light always has the same standard value. All light sources, whether moving or at rest in the ether-sea, emit light that propagates with this same velocity. We have already said that the room and its observer move through the ether. Imagine that a light at the centre of the room is flashed on and off, and that the walls of the room are transparent, so that both observers, inside and outside, can measure the velocity of the light. If we now ask each observer what results he expects from his measurements, their answers would run roughly as follows:
_The outside observer:_ My _CS_ is determined by the ether-sea. In this _CS_ the velocity of light always has its standard value. I need not worry about whether the source is at rest or in motion, for it does not drag the ether along with it. My _CS_ is distinguished from all others, and in it the velocity of light must have its standard value, independent of the direction of the light beam and of the motion of the source.
_The inside observer:_ My room moves through the ether. One of its walls runs away from the light wave, and the other approaches it. If my room travelled with the velocity of light relative to the ether, the light wave emitted from its centre would never reach the wall running away. The wall moving towards the light wave will be struck by it before the opposite wall. Therefore, even though the light source is rigidly connected with my _CS,_ the velocity of light will not be the same in all directions. It will be smaller in the direction of the motion of the room and greater in the opposite direction.
The velocity of light will, then, be equal in all directions only with respect to the _CS_ representing the fixed ether. For all other _CS_, that is, those in motion relative to the ether, the velocity of light will depend on the direction in which we measure it.
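The inside observer's argument can be made quantitative with a small sketch. The room length and speed below are invented assumptions for illustration: in the ether-fixed frame the flash travels at speed C in both directions, so the approaching wall is struck after (L/2)/(C + v) seconds and the receding wall after (L/2)/(C − v) seconds.

```python
C = 300_000.0  # standard velocity of light, km/s

def wall_arrival_times(room_length_km: float, v: float) -> tuple[float, float]:
    """Ether-drift picture, seen from the ether-fixed frame: a flash from
    the centre of a room of length L moving at speed v reaches the
    approaching wall after (L/2)/(C+v) s and the receding wall after
    (L/2)/(C-v) s."""
    half = room_length_km / 2.0
    return half / (C + v), half / (C - v)

# Illustrative numbers: a 3 km "room" moving at 30 km/s.
t_towards, t_away = wall_arrival_times(3.0, 30.0)
print(t_away > t_towards)  # the receding wall is struck later
```

The difference is tiny for everyday speeds, which is why, as the next paragraph explains, very ingenious apparatus was needed to look for it.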
What we have just said forms the basis of a crucial experiment for the theory of the fixed ether. Nature does, in fact, put at our disposal a system moving with a fairly high velocity: the earth in its yearly motion around the sun. If our hypothesis of the fixed ether is correct, then the velocity of light in the direction of the earth's motion should differ from its velocity in the opposite direction. The difference between the two velocities can be calculated, and an experimental arrangement devised to reveal it. Since, according to the theory, the differences are very small, very ingenious experimental devices had to be constructed. This was done by Michelson and Morley in their famous experiments. The result was a verdict of "death" for the hypothesis of an ether at rest through which all bodies move. No dependence of the velocity of light on its direction of propagation could be observed.
Not only the velocity of light, but other field phenomena as well, should show a dependence on direction in a moving _CS_ if the hypothesis of the ether at rest is admitted. All such experiments have given the same negative result as the Michelson-Morley experiment, revealing no dependence whatever on the direction of the earth's motion.
The situation grows more and more serious. We have tried two assumptions. The first: that moving bodies drag the ether along with them. The fact that the velocity of light does not depend on the motion of the source contradicts this hypothesis. The second: that there exists one _CS_ distinguished from all others, and that moving bodies do not drag the ether but travel through an ether-sea at rest. If this were so, then, as we have seen, Galileo's relativity principle would not be valid and the velocity of light could not be the same in all _CS._ Once more we are in contradiction with experiment.
More complicated theories have been proposed, based on the idea of a partial dragging of the ether by moving bodies. But they have all failed! Every attempt to explain electromagnetic phenomena in moving _CS,_ assuming an ether in motion, an ether at rest, or a partially dragged ether, has proved unsuccessful.
Thus arose one of the most dramatic situations in the history of science. All the hypotheses concerning the ether led to contradictions with experiment. Looking back over the development of physics, we see that the ether, soon after its birth, became the _enfant terrible_ of the family of physical substances. First, the construction of a simple mechanical picture of the ether had to be discarded as impossible. This caused, to a great extent, the breakdown of the mechanical point of view. Then we had to give up hope of discovering a _CS_ distinguished from all others, fixed in the ether-sea, and with it the possibility of absolute motion. This possibility, together with that of transmitting electromagnetic waves, was the only thing that could justify and mark the existence of the ether. All attempts to make the ether a reality have failed: no indication of its mechanical constitution was ever found, nor was absolute motion ever demonstrated. In other words, nothing remained of all the properties of the ether except the one for which it was invented: the transmission of electromagnetic waves. What is more, the attempts to discover the properties of the ether led to insuperable difficulties and contradictions. After so bitter an experience, it seems best to ignore the ether completely and to try never to mention its name again. To avoid using the word we have decided to omit, we shall say: our space has the physical property of transmitting electromagnetic waves.
The omission of a word from our vocabulary is, of course, no solution. Our troubles are far too grave to be solved in this way!
Let us now write down the facts that have been confirmed by experiment, without bothering any further about the "_e-r_" problem:
1. The velocity of light in empty space always has the same value, independent of the motion of the source or of the observer.
2. In two _CS_ moving uniformly relative to each other, all laws of nature are identical, and there is no way of detecting absolute uniform motion.
Many experiments confirm these two principles, and not one contradicts them. The first expresses the constancy of the velocity of light; the second generalizes to all events in nature Galileo's relativity principle, originally formulated for mechanical phenomena.
In mechanics we saw that if the velocity of a material point has a certain value relative to one _CS_, it will have a different value in another _CS_ moving uniformly relative to the first. This follows, as we have seen, from the transformations of classical mechanics, which are given directly by our intuition, and apparently nothing can be wrong with them. We must therefore add the following principle:
3. Positions and velocities are transformed from one inertial system to another according to the classical transformation.
But this transformation is in flagrant contradiction with the constancy of the velocity of light. It is impossible to combine statements (1), (2), and (3)!
The classical transformation seems too obvious and simple for any attempt to change it. We have already tried to change (1) and (2), and in both cases we came into disagreement with experiment. All theories concerning the motion of the "_e-r_" required an alteration of (1) and (2). This was no good. Once more we realize the serious character of our difficulties. A new orientation is needed. It is achieved by _accepting the fundamental assumptions_ (1) and (2) and, strange though it may seem, rejecting (3). The new orientation starts from an analysis of the most primitive and fundamental concepts; we shall show below how this analysis forces us to change our old views and removes all our difficulties.
#### TIME, DISTANCE, RELATIVITY
The new assumptions we adopt are:
1. _The velocity of light in vacuo is the same in all CS moving uniformly relative to each other_.
2. _All laws of nature are the same in all CS moving uniformly relative to each other_.
The relativity theory begins with these two assumptions. From now on we shall no longer use the classical transformation, because it contradicts them.
It is essential here, as always in science, to free ourselves from deep-rooted prejudices, often repeated without previous criticism. Since we have seen that changes in (1) and (2) lead to contradictions with experiment, we must have the courage to state their validity clearly and to attack the one possibly weak point: the law of transformation of positions and velocities from one _CS_ to another. We intend to draw conclusions from (1) and (2), to see where and how these assumptions contradict the classical transformation, and to find the physical meaning of the results thus obtained.
Once more we shall use the example of the moving room with an inside and an outside observer. Suppose that, as before, a light signal is emitted from the centre of the room, and again let us ask the two men what they expect to observe, admitting our two principles and forgetting what was previously said concerning the medium through which the light travels. We quote their answers:
_The inside observer:_ The light signal travelling from the centre of the room will reach the walls _simultaneously,_ since all the walls are equally distant from the light source and the velocity of light is the same in all directions.
_The outside observer:_ In my system, the velocity of light is exactly the same as in the system of the observer moving with the room. It does not matter to me whether or not the light source moves relative to my _CS,_ since its motion does not influence the velocity of light. What I see is a light signal travelling with the same velocity in all directions. One of the walls is trying to escape from the light signal, and the opposite wall to approach it. Therefore the escaping wall will be met by the signal a little later than the approaching one. Although the difference will be very slight if the velocity of the room is small compared with that of light, the light signal will nevertheless not reach the two walls perpendicular to the direction of the motion quite simultaneously.
Comparing the predictions of the two observers, we find a most astonishing result, which flatly contradicts the apparently well-founded concepts of classical physics. Two events, namely, the two light beams reaching the opposite walls, are simultaneous for the inside observer but not for the outside observer. In classical physics we had one clock, one time flow, for all observers in all _CS._ Time, and therefore such concepts as "simultaneity", "sooner", "later", had an absolute meaning, independent of any _CS._ Two events happening at the same time in one _CS_ were necessarily simultaneous in all other _CS_.
Assumptions (1) and (2), that is, the relativity theory, force us to give up this view. We have described two events happening at the same time in one _CS,_ but at different times in another _CS_. Our task now is to try to understand the meaning of the sentence: "two events which are simultaneous in one _CS_ may not be simultaneous in another".
What do we mean by "two simultaneous events in one _CS_ "? Intuitively, everyone seems to know the meaning of this phrase. But let us be on our guard and try to give rigorous definitions, knowing how dangerous it is to overestimate intuition. Let us first answer a simple question.
What is a clock?
The primitive subjective feeling of the flow of time enables us to order our impressions and to state that one event takes place earlier and another later. But to show that the time interval between two events is 10 seconds, a clock is needed. By the use of a clock the concept of time becomes objective. Any physical phenomenon may be used as a clock, provided it can be repeated exactly as many times as desired. Taking the interval between the beginning and the end of such an event as a unit of time, arbitrary time intervals can be measured by repeating this phenomenon. All clocks, from the simple hour-glass to the most refined instruments, are based on this idea. In the hour-glass the unit of time is the interval the sand takes to flow from one glass to the other. The same physical process can be repeated by inverting the position of the glasses.
Suppose that at two distant points we have two perfect clocks showing exactly the same time. This should be true regardless of the method used to verify it. But what does it really mean? How can we make sure that distant clocks always show exactly the same time? One satisfactory method would be to use television. It should be understood that we use television only as an example, and not as anything essential to our argument. We could stand near one of the clocks and watch the televised image of the other, checking whether or not they show the same time simultaneously. But this would not be an exact proof. The televised picture is transmitted by electromagnetic waves, which travel with the velocity of light. On television we see a picture of something that happened a moment before, whereas what we see on the real clock is what is taking place at the present moment. This difficulty can easily be avoided by watching, on television, images of the two clocks from a point equidistant from both. Then, if the signals are sent out simultaneously, they will reach that point at the same instant. If two good clocks, observed from an equidistant point, always show the same time, they can be used to indicate the time of events taking place at two distant points.
In classical mechanics we used only one clock. But this was not very convenient, because we had to take all our measurements in the vicinity of that single clock. Observing the clock from a distance, we must not forget that what we see at a given instant really happened earlier; thus, watching a sunset, we witness an event that took place eight minutes before. For this reason we should have to correct all our time readings according to our distance from the clock.
Since we have a method of determining whether two or more clocks show the same time simultaneously, we can avoid the inconvenience of using a single clock by imagining as many clocks as we like in a given _CS_. Each of them will serve to determine the time of the events happening in its immediate vicinity. All the clocks are at rest relative to the _CS_ in question. They are "good" clocks and are _synchronized,_ which means that they show the same time simultaneously.
There is nothing strange or surprising about the arrangement of these clocks. We use many synchronized clocks instead of one so that we can easily determine whether two distant events are simultaneous in a given _CS_ ; they are, if the synchronized clocks show the same time at the instant the events happen. To say that one of two distant events happened before the other now has a definite meaning. All this can be judged with the help of the synchronized clocks at rest in our _CS_.
This is in agreement with classical physics, and no contradiction with the classical transformation has yet appeared.
For the definition of simultaneous events, the clocks are synchronized by means of light signals, or electromagnetic signals in general, which travel with the velocity of light, the velocity which plays so fundamental a role in the theory of relativity.
Since we intend to deal with the important problem of two _CS_ moving uniformly relative to each other, we must consider two rods, each provided with its own clocks. The observer in each of the two _CS_ in relative motion has his own rod with his own set of clocks rigidly attached to it.
When discussing measurements in classical mechanics, we used one clock for all _CS._ Here we have many clocks in each _CS_. This difference is unimportant. One clock was sufficient, but nobody could object to the use of many, so long as they all behave as good synchronized clocks should.
We are now approaching the essential point, which shows where the classical transformation contradicts the theory of relativity. What happens when two sets of clocks move uniformly, one relative to the other? The classical physicist would answer: nothing; they keep the same rhythm, and it makes no difference whether we use moving or resting clocks to indicate time. According to classical physics, two events simultaneous in one _CS_ will also be simultaneous in any other _CS_.
But this is not the only possible answer. We can equally well imagine a moving clock having a rhythm different from that of one at rest. Let us now discuss this possibility, without deciding for the moment whether clocks really do change their rhythm when in motion. What is meant by the statement: a moving clock changes its rhythm? Let us assume, for simplicity, that there is one clock in the upper _CS_ and many in the lower one. All the clocks have the same mechanism, and the lower ones are synchronized, that is, they show the same time simultaneously. In figure 15 we have drawn three subsequent positions of the two _CS_ in relative motion. In (a) the positions of the hands of all the clocks, upper and lower, are the same: we arranged them so, by convention. All the clocks, then, show the same time. In (b) we see the relative positions of the two _CS_ some time later. All the clocks in the lower _CS_ show the same time, but the upper clock is out of rhythm. Its rhythm has changed and it shows a different time because it is moving relative to the lower _CS_. In (c), still more time having passed, the difference in the positions of the hands is greater than in (b).
**Fig. 15**
According to this, an observer at rest in the lower _CS_ would find that a moving clock changes its rhythm. The same result would be reached by an observer at rest in the upper _CS,_ watching a clock moving relative to his system; in that case there would have to be many clocks in the upper _CS_ and only one in the lower. The laws of nature must be the same in both _CS_ in relative motion.
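The text deliberately leaves the law of the change unspecified. For the record, the standard quantitative result of special relativity, not derived here, is that a clock moving at speed v runs slow by the factor 1/√(1 − v²/c²); a minimal sketch of that formula:

```python
import math

C = 299_792_458.0  # velocity of light, m/s

def moving_clock_interval(rest_interval_s: float, v: float) -> float:
    """Time dilation: an interval of rest_interval_s seconds on a moving
    clock corresponds, in the frame through which it moves at speed v,
    to the longer interval rest_interval_s / sqrt(1 - v**2 / C**2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return rest_interval_s * gamma

# At 60% of the velocity of light the factor is 1.25:
print(moving_clock_interval(1.0, 0.6 * C))
```

For everyday speeds the factor is indistinguishable from 1, which is why classical physics could take it for granted that moving clocks keep their rhythm.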
In classical mechanics it was tacitly assumed that a moving clock does not change its rhythm. This seemed so obvious that it was not worth mentioning. But nothing should be regarded as too obvious; if we wish to be really careful, we must analyse all the concepts hitherto taken for granted in physics.
An assumption should not be regarded as senseless merely because it differs from that of classical physics. It is perfectly possible to imagine that a moving clock changes its rhythm, so long as the law of this change is the same for all inertial _CS_.
Another example. Take a metre-stick; that is, a stick whose length is one metre in a _CS_ in which it is at rest. Suppose that this stick now moves uniformly, sliding along the rod representing our _CS._ Will its length still be one metre? To answer this question we must first know how to determine its length. As long as the stick is at rest, its ends coincide with markings on the _CS_ one metre apart. From this we conclude: the length of the stick at rest is one metre. But how are we to measure it while it is in motion? It could be done, for example, as follows. At a given moment two observers simultaneously take snapshots, one of the front end of the stick and the other of the rear. Since the photographs are taken simultaneously, we can compare the marks on the _CS_ with which the two ends of the moving stick coincide. The distance between these two marks gives its length. Two observers are thus needed to take note of simultaneous events in different parts of the given _CS_. There is no reason to believe that such a measurement will give the same value as that obtained with the stick at rest. Since the photographs must be taken simultaneously, and simultaneity is, as we know, a concept that depends on the _CS_ , it is quite possible that the result of this measurement will differ in different _CS_ in relative motion.
Thus we can imagine not only that a moving clock changes its rhythm, but also that a moving stick changes its length, so long as the laws governing these changes are the same for all inertial _CS_.
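Here too the text leaves the law unspecified; the standard answer of special relativity is the Lorentz contraction, L = L₀·√(1 − v²/c²), which is what the sketch below computes (the numerical example is ours):

```python
import math

C = 299_792_458.0  # velocity of light, m/s

def contracted_length(rest_length_m: float, v: float) -> float:
    """Lorentz contraction: a stick of rest length L0, moving at speed v
    along its own direction, measures L0 * sqrt(1 - v**2 / C**2)."""
    return rest_length_m * math.sqrt(1.0 - (v / C) ** 2)

# A one-metre stick at 80% of the velocity of light measures 0.6 m:
print(contracted_length(1.0, 0.8 * C))
```

Note that this change and the change in clock rhythm involve the same factor √(1 − v²/c²), which is exactly the kind of common law for all inertial CS that the paragraph above demands.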
So far we have only set out certain new possibilities, without giving any justification for assuming them.
Remember: the velocity of light is the same in all inertial _CS_. It is impossible to reconcile this fact with the classical transformation. The circle must be broken somewhere. Can it not be done precisely here? Can we not assume such changes in the rhythm of a moving clock and in the length of a moving stick that the constancy of the velocity of light follows as a direct consequence? Indeed we can! Here is the first instance in which the relativity theory and classical mechanics differ radically. Our argument can be reversed: if the velocity of light is the same in all _CS_ , then moving sticks must change their length and moving clocks must change their rhythm, and the laws governing these changes are rigorously determined.
There is nothing mysterious or unreasonable in all this. In classical physics it was always assumed that clocks have the same rhythm in motion as at rest, and that sticks have the same length at rest as in motion. But if the velocity of light is the same in all _CS_ , if the relativity theory is valid, then we must sacrifice these assumptions of classical physics. It is difficult to get rid of deep-rooted prejudices, but there is no other way. From the point of view of the relativity theory the old concepts seem arbitrary. Why believe, as we did a few pages ago, in an absolute flow of time, the same for all observers in all _CS_? Why believe in unchangeable distances? Time is determined by clocks, and space coordinates by measuring-rods, and the result of their determination may depend on the behaviour of those clocks and rods when in motion. There is no reason to believe that they will behave as we should like them to. Observation shows, indirectly, through the phenomena of the electromagnetic field, that a moving clock really does change its rhythm and a moving rod its length, something we could not have foreseen on the basis of mechanical phenomena. We must accept the concept of a time relative to each _CS,_ because it is the best way out of our difficulties. Later scientific progress, based on the theory of relativity, shows that this new aspect should not be regarded as a _malum necessarium,_ for the merits of the theory are all too evident.
Up to this point we have tried to show what facts led to the fundamental assumptions of the relativity theory, and how this theory forced us to revise and replace the classical transformation by treating time and space in a new way. Our aim is to indicate the ideas that form the basis of a new physical and philosophical point of view. These ideas are simple, but in the form stated here they are insufficient for reaching quantitative conclusions. We shall, as before, content ourselves with explaining only the principal ideas and stating some others without proof.
To make clear the difference between the point of view of a classical physicist (whom we shall call C), who believes in the classical transformation, and a modern physicist (whom we shall call M), who knows the theory of relativity, we imagine a dialogue between them.
C. — Yo creo en el principio de la relatividad de la mecánica de Galileo porque sé que las leyes de la mecánica son las mismas en dos _SC_ en movimiento uniforme relativo; en otras palabras, que estas leyes son invariantes con respecto a la transformación clásica.
M. — Pero el principio de relatividad debe aplicarse a todos los sucesos del mundo exterior; no sólo a las leyes de la mecánica, sino que todas las leyes de la naturaleza deben ser las mismas en los distintos _SC_ en movimiento uniforme y relativo entre sí.
C. — Pero ¿cómo es posible que todas las leyes de la naturaleza sean las mismas en _SC_ en movimiento uniforme relativo entre sí? Las ecuaciones del campo, esto es, las ecuaciones de Maxwell, no son invariantes respecto a la transformación clásica. Esto resulta claro considerando el ejemplo de la velocidad de la luz; pues según la transformación clásica, esta velocidad no debe ser la misma en dos _SC_ en movimiento relativo entre ellos.
M. — This simply shows that the classical transformation does not hold; that the connection between two _CS_ must be different; that we may not relate coordinates and velocities according to those transformation laws. We are therefore forced to replace them by new transformations deduced from the fundamental assumptions of the theory of relativity. Let us not worry about the mathematical form of the new transformation laws and be content to know that they differ from the classical ones. We shall call them, briefly, _the Lorentz transformation_. It can be shown that Maxwell's equations, that is, the laws of the electromagnetic field, are invariant with respect to the Lorentz transformation, just as the laws of mechanics are invariant with respect to the classical transformation. Remember how things stood in pre-relativity physics. We had transformation laws for coordinates and others for velocities, but the laws of mechanics were the same in two _CS_ in uniform relative motion. We had transformation laws for space but not for time, because time was the same in all _CS_. In the theory of relativity the picture is different. We have transformation laws for space, time, and velocity that differ from the classical ones; but here again the laws of nature must be the same in all _CS_ in uniform relative motion. In other words, these laws must be invariant, not with respect to the classical transformation, but with respect to a new type of transformation, the so-called Lorentz transformation. That is, the same laws hold in all inertial _CS_, and the transition from one _CS_ to another is determined by the Lorentz transformation.
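M declines to write the transformation down; for reference, its standard textbook form for two _CS_ in uniform relative motion with velocity v along the common x-axis (a sketch, not spelled out in this chapter) is:

```latex
x' = \frac{x - vt}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
t' = \frac{t - vx/c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
```

For velocities small compared with c the square roots are practically 1 and the second term in t' is negligible, so the classical transformation x' = x - vt, t' = t reappears as a limiting case, exactly as M asserts below.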
C. — I take your word for it, but it would interest me to know the difference between the two transformations.
M. — Your question is best answered as follows: quote some characteristic features of the classical transformation, and I shall try to explain whether they are preserved in the Lorentz transformation and, if not, how they are changed.
C. — If something happens at a certain point and at a certain instant in my _CS_, an observer in another _CS_ moving uniformly relative to mine assigns a different number to the position at which the event occurs, but naturally the same time. We use the same clock in all our _CS_ and it makes no difference whether it moves or not. Is this also true for you?
M. — No, it is not. Every _CS_ must be equipped with its own clocks at rest, since motion changes their rhythm. Observers in two different _CS_ will assign not only different numbers to the position, but also different values to the instant at which the event in question happens.
C. — This means that time is no longer an invariant. In the classical transformation the time is the same in all _CS_; in the Lorentz transformation it varies, behaving much like a coordinate in the classical transformation. And I wonder what happens to distance. According to classical mechanics a rigid rod has the same length in motion as at rest. Is this also true in the theory of relativity?
M. — It is not. Indeed, it follows from the Lorentz transformation that a moving rod contracts in the direction of its motion, and this contraction increases with the velocity. The faster a rod moves, the shorter it appears. But this happens only in the direction of the motion. Figure 16 shows how a rod is reduced to half its length when moving with a velocity of about 90 per cent of the velocity of light. Figure 17 illustrates the fact that there is no contraction in the direction perpendicular to the motion.
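The figure's claim can be checked numerically. A minimal sketch (the function name is ours), using the contraction factor √(1 − v²/c²) that follows from the Lorentz transformation:

```python
import math

def contraction_factor(v, c=1.0):
    """Fraction of its rest length shown by a rod moving at speed v."""
    return math.sqrt(1.0 - (v / c) ** 2)

# At 90 per cent of the velocity of light a rod shows roughly half
# its rest length, as in Fig. 16.
print(contraction_factor(0.9))  # ~0.436
```

At everyday speeds the factor is indistinguishable from 1, which is why no car ever looks shorter.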
C. — Good. This means that the rhythm of a clock and the length of a rod in motion depend on their velocity. But how?
M. — The changes become more marked as the velocity increases. It follows from the Lorentz transformation that the length of a rod would shrink to nothing if its velocity reached that of light. Similarly, the rhythm of a "good" moving clock is slowed down, compared with the clocks it passes along the _CS_ at rest, and the clock would come to a stop if it reached the velocity of light.
C. — This seems to contradict all our experience. We know very well that a moving car does not become shorter, and that the driver can compare his "good" watch with the clocks he passes on the way and find them in perfect agreement, contrary to your statement.
**Fig. 16**
**Fig. 17**
M. — That is certainly true. But these mechanical velocities are all very small compared with the velocity of light, and it is therefore ridiculous to apply relativity to such phenomena. Every motorist can safely apply classical physics, even if he could increase his speed a hundred thousand times. Disagreement between experiment and the classical transformation is to be expected only for velocities approaching that of light. That is, the Lorentz transformation can be put to the test only at very great velocities.
C. — But there is still another difficulty. According to mechanics I can imagine bodies with velocities greater than that of light. A body moving with the velocity of light relative to a moving ship has a velocity greater than that of light relative to the shore. What will happen to the rod whose length shrank to nothing when it moved with the velocity of light? We can hardly expect a negative length if the velocity is greater than that of light.
M. — There is really no reason for such sarcasm. From the point of view of the theory of relativity a material body cannot have a velocity greater than that of light. The velocity of light forms an insurmountable limit. If the velocity of a body relative to the ship equals that of light, it will have the same value relative to the shore. The simple mechanical law of adding and subtracting velocities is no longer valid or, put differently, is applicable only to small velocities. The number expressing the velocity of light appears explicitly in the Lorentz transformation and plays the role of a limiting case, like the infinite velocity of classical mechanics. The theory of relativity contradicts neither the classical transformation nor classical mechanics. On the contrary, we regain the classical concepts as a limiting case for small velocities. From the point of view of the new theory it is clear in which cases classical physics applies and in which it does not. It would be just as ridiculous to apply the theory of relativity to the motion of cars, ships, or trains as to use a calculating machine where a multiplication table would suffice.
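M's argument follows from the relativistic composition law w = (u + v)/(1 + uv/c²), which the chapter alludes to but never writes down. A sketch (the function name is ours), with c = 1:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic composition of two parallel velocities."""
    return (u + v) / (1.0 + u * v / c**2)

# For small velocities this is practically the classical sum u + v.
print(add_velocities(0.0001, 0.0002))  # ~0.0003

# A body moving at c relative to the ship still moves at c relative
# to the shore:
print(add_velocities(1.0, 0.5))  # 1.0
```

No combination of sub-light velocities ever reaches c, which is why the velocity of light acts as the insurmountable limit M describes.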
#### RELATIVITY AND MECHANICS
The theory of relativity sprang from necessity, from deep and serious contradictions in the classical theory that seemed to admit no way out. The strength of the new theory lies in the consistency and simplicity with which it resolves those difficulties by admitting only a few very convincing assumptions.
Although the theory arose from the field problem, it must embrace all physical laws. Here a difficulty seems to appear. The laws of the electromagnetic field are, in fact, of an entirely different nature from the laws of mechanics. The equations of the electromagnetic field are invariant with respect to the Lorentz transformation, while the equations of mechanics are invariant with respect to the classical transformation. But the theory of relativity demands that all the laws of nature be invariant with respect to the Lorentz transformation and not with respect to the classical one. The latter is only a special, limiting case of the Lorentz transformation, valid when the relative velocity of the two _CS_ under consideration is very small. If this is so, classical mechanics must be changed in order to satisfy the condition of invariance with respect to the Lorentz transformation. Or, in other words, classical mechanics cannot be valid when velocities approach that of light. Only one transformation from one _CS_ to another can exist, namely the Lorentz transformation.
It was a simple task to modify classical mechanics so as to bring it into agreement with the theory of relativity without contradicting the wealth of experimental data that classical mechanics explains. The old mechanics is valid for small velocities and forms a limiting case of the new one. It is interesting to consider an example of a modification that the theory of relativity introduces into classical mechanics. This may, perhaps, lead us to conclusions that can be put to the test of experiment.
Suppose a body of a given mass, moving along a straight line, is acted on by an external force in the direction of the motion. The force, as we know, is proportional to the change of velocity. Or, to be explicit, it makes no difference whether a given body increases its velocity in one second from 100 to 101 metres per second, or from 100 kilometres to 100 kilometres and one metre per second, or from 300,000 kilometres to 300,000 kilometres and one metre per second. The force acting on a given body is always the same for a given change of velocity in the same time.
Does this law also hold in the theory of relativity? By no means! It holds only for small velocities. What, according to the theory of relativity, is the law for velocities approaching that of light? The answer is: if the velocity is great, extremely strong forces are required to increase it. It is by no means the same thing to increase by one metre per second a velocity of 100 metres per second and a velocity close to that of light. The nearer the velocity of a body is to the velocity of light, the harder it is to increase. When a velocity equals that of light, any further increase is impossible. This modification introduced by the theory of relativity should not surprise us, since the velocity of light is an insurmountable limit for all velocities. No finite force, however great, can produce an increase of velocity beyond this limit. In place of the classical mechanical law connecting force and change of velocity, the theory of relativity gives a more complicated one. From the new point of view classical mechanics is simple because in nearly all our observations we deal with velocities much smaller than that of light.
A body at rest has a perfectly definite mass, called its _rest mass_. Mechanics has taught us that every body resists a change in its motion; the greater the mass, the greater this resistance, and the smaller the mass, the smaller the resistance. But in the theory of relativity we must also take into account that this resistance increases with the velocity. Bodies with velocities approaching that of light would offer a very strong resistance to the action of external forces. In classical mechanics the resistance of a given body is a constant characterized by its mass alone. In the theory of relativity it depends on both the rest mass and the velocity, and it becomes infinitely great as the velocity of light is approached.
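The velocity-dependent resistance has the standard quantitative form m₀/√(1 − v²/c²), a textbook formula the chapter states only in words; a sketch (names ours):

```python
import math

def resistance(rest_mass, v, c=1.0):
    """Resistance to change of motion of a body with the given rest mass
    moving at speed v; grows without bound as v approaches c."""
    return rest_mass / math.sqrt(1.0 - (v / c) ** 2)

# The resistance of a unit rest mass at increasing fractions of c:
for v in (0.1, 0.9, 0.99, 0.999):
    print(v, resistance(1.0, v))
```

At v = 0.1c the resistance is still within one per cent of the rest mass, which is why classical mechanics serves for all everyday motion.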
The results just quoted allow us to submit the theory to the test of experiment. Do projectiles with velocities approaching that of light resist the action of an external force to the extent predicted by the theory of relativity? Since the statements of the theory are, in this respect, quantitative, we could confirm or refute it if projectiles with such velocities could be realized.
Nature, fortunately, provides us with projectiles of this kind. Atoms of radioactive matter, for instance of radium, act as batteries which fire projectiles with enormous velocities. Without entering into detail we may quote one of the fundamental views of modern physics and chemistry. All matter in the universe is made up of only a few kinds of _elementary particles_. This idea of the structure of matter recalls the buildings of a town, of different sizes and architecture; yet from hut to skyscraper, all of them were erected from a very few kinds of bricks. So all the known elements of our material world, from hydrogen, the lightest, to uranium, the heaviest, are built of the same bricks, that is, of the same kinds of elementary particles. The heaviest elements, the most complicated buildings, are unstable and disintegrate: we say they are _radioactive_. Some of the bricks, that is, the elementary particles of which the radioactive atoms are built, are sometimes thrown out from the interior of the atom with velocities approaching that of light. An atom of an element such as radium is, according to our present views, confirmed by numerous experiments, a complicated structure, and radioactive disintegration is one of the phenomena revealing that atoms are built of a certain number of elementary particles.
By very ingenious and intricate experiments it has been possible to find out how the particles thrown out by radioactive atoms resist the action of external forces. These experiments confirm the predictions of the theory of relativity. In other cases as well, wherever the influence of velocity on the resistance to change of motion could be studied, complete agreement between theory and experiment was found. Here we see once more the essential features of creative scientific work: the prediction of certain facts by theory and their confirmation by experiment.
The foregoing result suggests an important generalization. A body at rest has mass but no kinetic energy, that is, energy of motion. A moving body has both mass and kinetic energy, and it resists change of motion more strongly than the body at rest. Everything happens, then, as though the kinetic energy of a body increased its resistance. Of two bodies with the same rest mass, the one with the greater kinetic energy resists the action of external forces more strongly.
Imagine a box at rest containing a number of small spheres, also at rest in our _CS_. To set it in motion and to increase its velocity some force is required. But will the same force increase the velocity by the same amount in the same time if the little spheres inside are moving rapidly in all directions, like the molecules of a gas, with speeds approaching that of light? No. In that case a greater force will be needed to produce the same effect, because the increased kinetic energy of the spheres strengthens the box's resistance to change of motion. As we see, kinetic energy resists change of motion just as ponderable matter does. Is this also true of all forms of energy?
From its fundamental assumptions the theory of relativity deduces a clear and convincing answer to this question, an answer once more of a quantitative character: every form of energy resists change of motion; that is, energy behaves like matter. A piece of iron heated red-hot weighs more than the same piece when cold. Radiation emitted by the sun and travelling through space contains energy and therefore has mass; the sun and all radiating stars lose mass by emitting their radiation. This conclusion, quite general in character, is an important achievement of the theory of relativity and fits all the facts against which it has been tested.
Classical physics introduced two substances: matter and energy, the first ponderable, the second imponderable. In classical physics there were two conservation principles: one for matter, the other for energy. We have already asked whether this point of view still holds in modern physics. The answer is no. According to the theory of relativity there is no essential distinction between mass and energy. Energy has mass and mass represents energy. Instead of two conservation principles we have only one, that of the conservation of mass-energy. This new view proved very useful and of great importance for the further development of physics.
How is it that this equivalence of energy and mass remained for so long unnoticed? Does a piece of hot iron really weigh more than when it is cold? The answer is now "yes", but a little earlier it was "no". The pages lying between these two answers do not, of course, dispose of the contradiction.
The difficulty here is of the same kind as before. The variation of mass predicted by the theory of relativity is immeasurably small and cannot be detected by direct weighing even with the most precise and sensitive balances. The proof that energy is ponderable has been obtained in ways that are very conclusive, but indirect.
The reason a direct verification of this equivalence is impossible lies in the very small rate of exchange between matter and energy. Compared with mass, energy is like a depreciated currency next to one of high value. An example will make this clear. The quantity of heat needed to convert thirty thousand tons of water into steam would weigh about one gram! Energy was regarded as imponderable for so long simply because the mass it represents is so small. The old concept of energy as an imponderable substance is the second victim of the theory of relativity. The first was the medium through which light waves were supposed to spread.
The influence of the theory of relativity reaches far beyond the problem from which it arose! It removes the difficulties and contradictions of the field theory; it formulates more general mechanical laws; it replaces two conservation principles by one; it changes our classical concept of absolute time. Its validity is not restricted to one domain of physics: it forms a general framework embracing all the phenomena of nature.
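The one-gram figure can be checked with the mass-energy relation E = mc², taking the latent heat of vaporization of water as roughly 2.26 million joules per kilogram (our assumed value; the chapter quotes none):

```python
C = 3.0e8             # velocity of light, m/s
LATENT_HEAT = 2.26e6  # J per kg to turn boiling water into steam (assumed)

water_kg = 30_000 * 1000                  # thirty thousand tons, in kg
energy_joules = water_kg * LATENT_HEAT    # heat needed to make steam
mass_grams = energy_joules / C**2 * 1000  # m = E / c^2, in grams

print(mass_grams)  # ~0.75 g: about one gram, as the text says
```

The factor c² in the denominator is the "rate of exchange" of the analogy: an enormous amount of energy corresponds to a minute mass.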
#### THE SPACE-TIME CONTINUUM
"The French Revolution began in Paris on 14 July 1789." This sentence states the place and time of an event. To someone who does not know what "Paris" means, it could be explained that Paris is a city on our earth situated at longitude 2° East of Greenwich and latitude 49° North. These two numbers characterize the place, and "14 July 1789" determines the time at which the event took place. In physics, much more than in history, the exact determination of the place and time of an event is of the greatest importance, for these data form the basis of a quantitative description.
For simplicity, let us first consider only motion along a straight line. Our _CS_ is again a rigid rod extending indefinitely in one direction. Take different points on the rod; their positions are completely characterized by one number each, the coordinate of the point. To say that the coordinate of a point is 7.586 metres means that its distance from the end of the rod taken as origin is 7.586 metres. Conversely, given any number and a unit, we can always find a point on the rod corresponding to that number. We can therefore state: a definite point of the rod corresponds to every number, and a definite number corresponds to every point. Mathematicians express this fact in the following sentence: all the points of the rod form a _one-dimensional continuum_. This means that next to any point of the rod there exist others as near to it as we please. In other words, two distant points of the rod can be connected by steps as small as we wish. Thus the arbitrary smallness of the steps connecting distant points is an essential characteristic of the continuum.
Now consider a plane or, if something more concrete is preferred, the rectangular surface of a table (figure 18). The position of a point on it can be characterized by two numbers and not, as before, by one. These two numbers are the distances of the point from two perpendicular edges of the table. That is, not one number but a pair of numbers corresponds to every point of the plane, and, conversely, a definite point corresponds to every pair of numbers. In other words: the plane is a _two-dimensional continuum_. There exist points arbitrarily near every point of the plane. Two distant points can be connected by a curve divided into steps as small as we wish. Thus the arbitrary smallness of the steps connecting two distant points, each of which can be represented by two numbers, is again what fundamentally characterizes a two-dimensional continuum.
**Fig. 18**
Suppose we wish to regard our room as our _CS_, that is, to describe all the possible positions of a point such as the one in figure 19 with respect to the rigid walls of the room. The position of the base of the lamp, for instance, supposed at rest, can be fixed by three numbers: two of them determine its distances from two perpendicular walls, and the third its distance from the ceiling or the floor. That is, three definite numbers correspond to every point of space, and, conversely, a definite point of space corresponds to every three numbers. This is expressed by the sentence: our space is a three-dimensional continuum. There exist points very near every point of space. Thus what characterizes a three-dimensional continuum is the arbitrary smallness of the steps by which the distance between any two of its points, each represented by three numbers, can be covered.
**Fig. 19**
But all this is more geometry than physics. To return to physics we must consider the motion of material particles. In observing and predicting natural phenomena we must take into account not only the place but also the time at which they happen. Let us again take a very simple example.
A small stone, which may be regarded as a material particle, is dropped from the top of a tower. Imagine the tower to be 78.4 metres high. Since Galileo we have been able to predict the coordinate of the stone at any instant of its fall. The following table shows the positions of the stone after 0, 1, 2, 3, and 4 seconds of its journey.
Five events are registered in the table, each represented by two numbers, the corresponding time and space coordinates. The first is the start of the stone's fall from a height of 78.4 metres, at the time instant 0.
_Time of fall, in seconds_ | _Height above the ground, in metres_
---|---
0 | 78.4
1 | 73.5
2 | 58.8
3 | 34.3
4 | 0
**Fig. 20**
The second event is the coincidence of the stone with our rigid rod (the tower) at a height of 73.5 metres. This happens at the end of the first second. The last event registered in the table is the coincidence of the stone with the earth (height 0), in the fourth second of fall.
The data of this table can be represented in a different way, by making a point on a surface correspond to every pair of numbers. To do this we must first establish a scale (figure 20): one segment will correspond to 30.5 metres and another to one second of time.
We then draw two lines perpendicular to each other, calling the horizontal one the time axis and the vertical one the space axis. We see at once that our table can be represented by five points in the time-space plane (figure 21).
The distances of the points from the space axis give the time coordinates as registered in the first column of the table, and their distances from the time axis the space coordinates.
Exactly the same thing has been expressed in two different ways: by a table and by points on a plane; each can be constructed from the other. The choice between these two representations is merely a matter of preference, for they are in fact equivalent.
Let us now go a step further. Imagine a more complete table giving the positions not for every second, but for every hundredth or every thousandth of a second. We shall then have a great many points on our time-space graph. Finally, if the position of the particle is given for every instant or, as mathematicians say, if the space coordinate is given as a function of time, then our set of points becomes a continuous line. Figure 22 therefore represents complete knowledge of the motion of the particle and not, as before, only a small fraction of it.
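The five events are simply Galileo's law of fall: height = 78.4 − ½gt² with g = 9.8 m/s², which reproduces every row of the table. A sketch (names ours):

```python
G = 9.8    # acceleration of gravity, m/s^2
H0 = 78.4  # height of the tower, metres

def height(t):
    """Height of the falling stone above the ground after t seconds."""
    return H0 - 0.5 * G * t**2

for t in range(5):
    print(t, round(height(t), 1))  # 78.4, 73.5, 58.8, 34.3, 0.0
```

Each (t, height) pair is one event: a time coordinate and a space coordinate.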
**Fig. 21**
The motion along the rigid rod (the tower), that is, motion in a one-dimensional space, is represented in figure 22 as a curve in the two-dimensional time-space continuum. To every point of our time-space continuum there corresponds a pair of numbers, one giving the time and the other the space coordinate. Conversely: to every pair of numbers characterizing a certain event there corresponds a definite point of the time-space plane. Two adjacent points represent two events, two happenings, occurring at slightly different places and separated by a small interval of time.
**Fig. 22**
One could argue against our representation as follows: there is no sense in representing a unit of time by a segment and combining it mechanically with space to form the two-dimensional continuum from the two one-dimensional continua. But then one would have to protest just as strongly against the graphs representing, say, the temperature of the city of Barcelona during last summer, or against those representing the changes in the cost of living over a number of years, for in them exactly the same method is used. In the temperature graphs the one-dimensional temperature continuum is combined with the one-dimensional time continuum into the two-dimensional temperature-time continuum.
Let us return to the particle dropped from the top of the 78.4-metre tower. Our graphic picture of the motion is a useful convention, since it characterizes the position of the particle at an arbitrary instant of time.
This graphic representation can be interpreted in two different ways. One is to picture the motion as a sequence of events in the one-dimensional space continuum, without mixing in time, using a dynamic picture in which the positions of the body _change_ with time. The other is to form a _static_ picture of the motion by considering the curve in the two-dimensional time-space continuum. In this interpretation the motion is represented as something which is, which exists, in the two-dimensional time-space continuum, and not as something which changes in the one-dimensional space continuum.
Both pictures are exactly equivalent, and preferring one to the other is a matter of convention or taste.
Nothing that we have said about the two pictures of the motion has anything to do with the theory of relativity. Both representations may be used with equal right, although classical physics favoured rather the dynamic picture, describing the motion as a sequence of events in space and not as something already existing in time-space. But the theory of relativity changes this point of view. It has declared itself openly in favour of the static picture, finding in this representation of the motion a more objective and convenient image of reality. But why are the two pictures equivalent from the point of view of classical physics and not from the point of view of the theory of relativity?
To answer this question let us again consider two _CS_ in uniform motion relative to each other.
According to classical physics, observers in each of these _CS_ will assign different space coordinates to a certain event, but the same time coordinate. In our example, the coincidence of the particle with the earth is characterized, in the _CS_ we have chosen, by the time coordinate "4" and the space coordinate "0". According to classical mechanics, the falling body will reach the surface of the earth at the fourth second for an observer moving uniformly relative to the previous _CS_ as well. But this observer will refer the distance to his own _CS_ and will, in general, attribute different space coordinates to the event of the collision, although the time coordinate will be the same for him and for all other observers moving uniformly relative to one another. Classical physics admits only an "absolute" flow of time. For every _CS_ the two-dimensional continuum can be split into two one-dimensional continua: time and space. Because of the "absolute" character of time, the transition from the "static" to the "dynamic" picture of motion has an objective meaning in classical physics.
But we have already seen that the classical transformation must not always be used in physics. From a practical point of view it serves for small velocities, but not for settling fundamental physical questions.
According to the theory of relativity, the instant of the collision between the body and the earth will not be the same for all observers. Both the time coordinate and the space coordinate will be different in the two _CS_, and the change in the time coordinate will be the more marked the nearer the relative velocity is to that of light. The two-dimensional continuum cannot, therefore, be split into two one-dimensional continua as in classical physics. Time and space may not be considered separately in determining the time-space coordinates when passing from one _CS_ to another. The splitting of the two-dimensional continuum into two one-dimensional ones is, from the point of view of the theory of relativity, an arbitrary procedure without objective meaning.
It is easy to generalize what we have said so far to motion not restricted to a straight line. Indeed, not two but four numbers must be used to describe the events of nature. Our physical space, conceived through objects and their motions, has three dimensions, and positions are determined by three numbers. The instant of an event is the fourth number. Four definite numbers correspond to every event, and a definite event corresponds to every four numbers. Therefore: the world of events is a _four-dimensional continuum_. There is nothing mysterious about this, and the last statement is as true for classical physics as for the theory of relativity. The difference appears when two _CS_ in relative motion are considered. Consider a room in motion and two observers, one inside and one outside, determining the time and space coordinates of the same events. Here again the classical physicist splits the four-dimensional continuum into the three-dimensional space and the one-dimensional time continuum. He worries only about the space transformation, since for him time is absolute, and he therefore finds the splitting of the four-dimensional continuum of his world into space and time natural and convenient. But from the point of view of the theory of relativity, both time and space change in passing from one _CS_ to another by the Lorentz transformation.
The universe of events can be described dynamically by a picture changing in time and thrown onto the background of three-dimensional space. But it can also be described by a static picture thrown onto the background of the four-dimensional time-space continuum. For classical physics both pictures are equivalent. For relativistic physics, however, the static picture is the more convenient and the more objective.
Even in the theory of relativity we may still use the dynamic picture if we prefer it. But we must remember that this division into time and space has no objective meaning, since time is no longer "absolute". Keeping these limitations in mind, we shall still use the "dynamic" and not the "static" language in the following pages.
#### GENERAL RELATIVITY
One point still remains to be cleared up. One of the most fundamental questions has not yet been settled: does an inertial system exist? We have learned something about the laws of nature, their invariance with respect to the Lorentz transformation, and their validity for all inertial systems moving uniformly relative to each other. We have the laws, but do not know the frame to which to refer them.

In order to bring out this difficulty, let us interview the classical physicist and ask him some simple questions:

"What is an inertial system?"

"It is a _SC_ in which the laws of mechanics are valid. A body on which no external forces are acting moves uniformly in such a _SC_. This property enables us to distinguish an inertial _SC_ from any other."

"But what do you mean when you say that no external forces are acting on the body?"

"It simply means that the body moves uniformly in an inertial system."

Here we could once more put the question: "What is an inertial system?" But since there is little hope of obtaining an answer differing from the above, let us try to gain some concrete information by changing the question:

"Is a _SC_ rigidly connected with the earth an inertial one?"

"No, because the laws of mechanics are not rigorously valid on the earth, owing to its rotation. A _SC_ rigidly connected with the sun can be regarded, for many problems, as an inertial one; but when we speak of the rotating sun, we understand that a _SC_ fixed to it cannot be regarded as strictly inertial either."

"Then what, concretely, is your inertial system, and how can we find one?"

"It is merely a useful fiction, and I have no idea how to realize it. If only I could get far enough away from all material bodies and free myself from all external influences, my _SC_ would then be inertial."

"But what do you mean by a _SC_ free from all external influences?"

"A _SC_ which is inertial."

Once more we are back at our initial question!

Our interview reveals a grave difficulty in classical physics. We have laws, but do not know the frame of reference in which they are valid, and the whole of physics seems to be built on sand.
We can reach this same difficulty by another route. Imagine that only one body exists in the whole universe, and that it forms our _SC_. This body begins to rotate. According to classical mechanics, the physical laws for a rotating body are different from those for a non-rotating one. If the inertia principle is valid in one case, it is not valid in the other. But all this sounds very suspicious. Does it make sense at all to speak of the motion of a body assumed to be alone in the universe? No; for a body is said to be in motion only when its position changes relative to another body. It is therefore contrary to common sense to speak of the motion of a single isolated body. Classical mechanics and common sense are here in violent disagreement. Newton's recipe for resolving the dispute is this: if the inertia principle is valid, the _SC_ is either at rest or in uniform motion; if this principle does not hold, the body is in non-uniform motion. Thus, whether a body is in motion or at rest is decided by whether or not the laws of physics are applicable to a _SC_ rigidly connected with it.

Take two bodies, the sun and the earth, for instance. The motion we observe is relative. It can be described by connecting the _SC_ either with the earth or with the sun. From this point of view, Copernicus's great achievement lies in having transferred the _SC_ from the earth to the sun. But since motion is relative and any frame of reference may be used, there seems to be no reason for preferring one of the two _SC_.

Physics intervenes and changes the common-sense point of view. The _SC_ connected with the sun resembles an inertial system more closely than the one connected with the earth. The physical laws should therefore be applied to Copernicus's _SC_ rather than to Ptolemy's. The greatness of Copernicus's discovery can be appreciated only from the physical point of view. It illustrates the great advantage of using a _SC_ rigidly connected with the sun for describing the motion of the planets.

No absolute uniform motion exists in classical physics. If two _SC_ move uniformly relative to each other, it makes no sense to say "this _SC_ is at rest and the other is in motion". But if two _SC_ move non-uniformly relative to each other, then there is very good reason for saying "this body moves and the other is at rest" (or moves uniformly). Absolute motion has, in this last case, a quite definite meaning. At this point there is, as we said above, a wide gulf between common sense and classical physics. The difficulties mentioned, concerning the existence of an inertial system and of absolute motion, are strictly bound up with each other. Absolute motion becomes possible only if we admit the existence of an inertial system.

It may seem as though there is no way out of these difficulties, as though no physical theory could avoid them. Their root lies in having postulated that the laws of nature are valid only for a special class of _SC_, the inertial ones. The possibility of resolving them depends on the answer to the following question: can we formulate the physical laws so that they are valid for all _SC_, that is, not only for those moving uniformly, but also for those moving quite arbitrarily relative to each other? If this is possible, our difficulties will be resolved. We shall then be able to apply the laws of nature to any _SC_. The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then lose its meaning; either _SC_ could be used with equal justification. The two statements, "the sun is at rest and the earth moves" and "the sun moves and the earth is at rest", would simply mean two different conventions referring to two different _SC_.

Could we really build a relativistic physics valid in all _SC_, a physics in which there is no place for absolute motion, but only for relative motion? This is indeed possible!

We possess at least one indication, though a very weak one, of how to build the new physics. Truly relativistic physics must apply to all _SC_, and therefore also to the special case of the inertial ones. We already know the laws for inertial _SC_. The new general laws, valid for all _SC_, must, in the special case of inertial systems, reduce to the laws already known.

The problem of formulating physical laws valid for every _SC_ was solved by the so-called _general theory of relativity;_ the earlier theory, applying only to inertial systems, is called the _special theory of relativity._ The two theories cannot, of course, contradict each other, since the laws of the special theory must be contained in those of the general theory. But whereas physical laws were previously formulated for inertial systems alone, these will now form a special limiting case among all the _SC_ moving arbitrarily.

This is the programme of the general theory of relativity. But in sketching the way in which it was accomplished we must be even vaguer than we have been so far. New difficulties arising in the development of science force our theory to become more and more abstract. Unexpected adventures still await us. But our final aim is always a better understanding of reality. New links are always being added to the logical chain connecting theory and observation. To clear the way leading from theory to experiment of unnecessary and artificial assumptions, to embrace ever wider regions of reality, we must make the chain longer and longer. The simpler and more fundamental our assumptions become, the more intricate is our mathematical tool of reasoning; the way from theory to observation becomes longer, subtler, and more complicated. Though it may sound paradoxical, we could say: modern physics is simpler than the old physics, and therefore seems more difficult and intricate. The simpler our picture of the external world and the more facts it embraces, the more strongly it reflects in our minds the harmony of the universe.

Our new idea is simple: to build a physics valid for any _SC_. Its fulfilment brings formal complications and compels us to use mathematical methods hitherto unused in physics. In what follows we shall show only the connection between the fulfilment of this programme and two principal problems: gravitation and geometry.
#### OUTSIDE AND INSIDE THE LIFT
The law of inertia marks, in fact, the real beginning of physics. It was gained, as we know, by contemplating the idealized experiment of a body moving for ever, with no friction and no other external forces acting. From this example, and later from many others, we have recognized the importance of introducing the idealized experiment. Here again, idealized experiments will be discussed. Although these may sound very fantastic, they will nevertheless help us to understand as much about relativity as possible by the simple methods we are using.

We had previously the idealized experiments with a uniformly moving room. Here, for a change, we shall have a falling lift.

Imagine a great lift at the top of a skyscraper much higher than any real one. Suppose the cable supporting it breaks, and the lift begins to fall freely. Observers inside are performing experiments during the fall. In describing them we need not bother about air resistance or friction, for we disregard their effects under our idealized conditions. One of the observers takes a watch and a handkerchief from his pocket and drops them. What happens to them? For an outside observer, watching what goes on in the lift through a window, say, the handkerchief and the watch fall in exactly the same way, with the same acceleration. We remember that the acceleration of a falling body is quite independent of its mass, and that it was this fact which revealed the equality of inertial and gravitational mass. We also remember that, according to classical mechanics, this equality was purely accidental and played no role whatever in its structure. Here, on the contrary, this equality, reflected in the fact that all falling bodies have the same acceleration, is essential and forms the basis of our whole argument.

Let us return to the handkerchief and the watch, which are falling with the same acceleration as the lift, with its walls, ceiling, and floor. For this very reason the distance between the two objects and the floor will not change. For the inside observer they remain exactly where he let them go; moreover, he may ignore the existence of the gravitational field, since its source lies outside his _SC_. He finds that no forces act upon the two bodies and that they are at rest, exactly as if they were in an inertial _SC_. Strange, is it not? If one of the observers pushes the watch or the handkerchief in any direction, upwards or downwards for instance, it acquires a certain velocity, which it keeps after the push ceases; that is, it goes on moving in a straight line until it reaches the ceiling or the floor. Briefly, the laws of classical mechanics are valid for the inside observer, for all bodies behave as the law of inertia would lead one to expect. The _SC_ rigidly connected with the freely falling lift differs from an inertial system in only one respect. In an inertial _SC_, a body on which no forces are acting will go on moving for ever with constant velocity. An inertial _SC_ of classical physics is limited neither in time nor in space. The case of the observer in our lift is, however, different. The inertial character of his _SC_ is limited in space and time. Sooner or later the uniformly moving object will strike a wall of the lift, destroying the uniform motion. Sooner or later the whole lift will strike the earth, destroying the observers themselves and their instruments. This _SC_ is only a "pocket edition" of a real inertial _SC_.

This local character of the _SC_ is quite essential. If our imaginary lift stretched from the Equator to the Pole, with the handkerchief released over the former and the watch over the latter, then, for the outside observer, the two bodies would not have the same acceleration; they would not be at rest relative to each other. And our whole argument would fail! The dimensions of the lift must be limited so that an outside observer sees all the bodies inside it fall with the same acceleration.

With this restriction, the _SC_ takes on an inertial character for an observer inside it. We can at last indicate a _SC_ in which all the physical laws are valid, even though it is limited in time and space. If we imagine another _SC_, another lift moving uniformly relative to the first, it will also be a locally inertial _SC_, and all the laws will be exactly the same in both. The transition from one to the other is given by the Lorentz transformation.

Let us see how the two observers describe what takes place in the lift.

The outside observer notices the motion of the lift and of all the bodies in it, and finds it in agreement with Newton's law of gravitation. For him, the motion is not uniform but accelerated, precisely because of the action of the gravitational force.

But a generation of physicists born and brought up inside the lift would reason quite differently. They would believe themselves in possession of an inertial system and would refer all the laws of nature to their lift, stating with justification that the laws take an especially simple form in their _SC_. It would be natural for them to assume their lift at rest and their _SC_ the inertial one.

It is, in fact, impossible to settle the differences between the inside and the outside observer. Each of them could claim the right to refer all events to his own _SC_. Both descriptions of events would be equally consistent.

We see from this analysis that a consistent description of physical phenomena in two different _SC_ is possible, even when they do not move uniformly relative to each other. But for such a description we must take gravitation into account, which builds, so to speak, the "bridge" allowing the transition from one _SC_ to the other. The gravitational field exists for the outside observer, but not for the inside one. Accelerated motion and a gravitational field exist for the outside observer; rest and absence of that field for the observer inside the lift. But the "bridge" of the gravitational field, which makes the description in both _SC_ possible, rests on one very important pillar: the equivalence of gravitational and inertial mass. Without this clue, which passed unnoticed in classical mechanics, our present argument would fail completely.
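The observation that the watch "floats" for the inside observer follows directly from the equality of all falling accelerations. A minimal numerical sketch (the numbers, time step, and starting heights are illustrative choices, not from the text):

```python
# Both the lift floor and the dropped watch accelerate at g, so their
# separation never changes: for the inside observer the watch is at rest.
g = 9.8          # m/s^2, acceleration of free fall (assumed value)
dt = 0.01        # integration time step, s
y_lift, v_lift = 100.0, 0.0    # height and speed of the lift floor
y_watch, v_watch = 101.5, 0.0  # the watch starts 1.5 m above the floor

for _ in range(200):           # simulate 2 seconds of free fall
    v_lift += -g * dt
    v_watch += -g * dt
    y_lift += v_lift * dt
    y_watch += v_watch * dt

separation = y_watch - y_lift
print(separation)  # still 1.5 m: the watch has not moved relative to the floor
```

If the watch were hanging over the Equator and the floor over the Pole, the two values of `g` would differ in direction and magnitude, and the separation would drift: this is exactly the local character of the lift-_SC_ discussed below.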
Let us now consider a variant of our idealized experiment. Assume we have an inertial _SC_, in which the law of inertia is valid. We have already described what happens in a lift at rest in such a _SC_. But now someone outside has fastened a rope to our lift and is pulling it with a constant force, in the direction indicated by the arrow in figure 23. It is immaterial how this is done. Since the laws of mechanics are valid in this _SC_, the lift will move with a constant acceleration upwards. Again let us listen to the descriptions of what goes on inside the lift, given by an inside observer and an outside one.

_The outside observer:_ My _SC_ is an inertial one. The lift moves with constant acceleration because a constant force is acting on it. The travellers inside are in absolute motion, for the laws of mechanics are not valid for them. Indeed, an inside observer does not find that bodies on which no forces act remain at rest: if a body is released, it soon strikes the floor of the lift, since the lift moves upwards. This happens equally to a watch and to a handkerchief. It seems very strange to me that the inside observer

**Fig. 23**

can never get away from the "floor" of the lift, for as soon as he jumps, the floor reaches him again.

_The inside observer:_ I do not see any reason, he says, for believing that my lift is in absolute motion. I agree that my _SC_, rigidly connected with the lift, is not really an inertial one, but I do not believe that this means it is in absolute motion. My watch, my handkerchief, and all the other bodies in the lift fall when I release them, because the whole lift is in a gravitational field. I observe exactly the same kind of falling motion as an inhabitant of the earth does, and he explains it very simply by the action of a gravitational field. The same evidently holds in my case.

The two interpretations, that of the observer inside and that of the observer outside the lift, are equally consistent, and there is apparently no possibility of deciding which of them is right. In short, we are entitled to adopt either of them to explain the behaviour of the objects in the lift: non-uniform motion and absence of a gravitational field according to the outside observer, or rest and the presence of a gravitational field according to the inside observer.
The outside observer may assume that the lift is in "absolute" non-uniform motion. But a motion which can be wiped out by the assumption of an acting gravitational field cannot claim to be "absolute".

There seems, however, to be a way out of the ambiguity in which we find ourselves. Imagine that a horizontal ray of light enters the lift through a side window and reaches the opposite wall after a very short time. Let us see how our two observers would predict the path of the light.

_The outside observer,_ who believes in the accelerated motion of the lift, would argue: the light ray enters the window and moves horizontally, in a straight line, towards the opposite wall. But the lift moves upwards, changing its position during the time the light takes to travel from one wall to the other. The ray will therefore strike not the point exactly opposite its point of entry, but a little below it. The difference, though very small, is real, and consequently the light ray travels, relative to the lift, along a slightly curved line, like the one in figure 24, and not along the dotted straight line of the same figure.

_The inside observer,_ who believes in the presence of a gravitational field acting on all the objects in his lift, would say: the lift has no such accelerated motion; there is simply a gravitational field acting inside it. A beam of light, being weightless, will not be affected by gravity. If it travels in a horizontal direction, it will strike a point exactly opposite its point of entry.

It seems to follow from this discussion that there is a possibility of deciding between the two points of view, since the result would be different according to whether one or the other assertion were true. If there is nothing illogical in either of the arguments just quoted, then our whole previous reasoning collapses, and it becomes impossible to describe all the phenomena in question consistently in two different ways: with and without a gravitational field.

But there is, fortunately, a grave fault in the reasoning of the inside observer, which saves our conclusion. He said: "A beam of light, being weightless, will not be affected by gravity." This cannot be right! A beam of light carries energy, and energy has mass. But every inertial mass is attracted by a gravitational field, since inertial and gravitational mass are equivalent. A beam of light will bend in a gravitational field exactly as the trajectory of a body thrown horizontally with a velocity equal to that of light would bend. Thus we see that if the inside observer had reasoned correctly, taking into account the bending of a light ray in a gravitational field, he would have reached exactly the same result as the outside observer.

The gravitational field of the earth is, of course, too weak for the bending that light rays undergo in it to be demonstrated directly by experiment. But the famous investigations carried out during solar eclipses prove conclusively, though indirectly, the influence of a gravitational field on the path of a light ray.
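The text says the deflection is "very small, but real". A back-of-the-envelope sketch shows just how small, for an assumed lift width of 3 metres and an acceleration equal to the earth's (both numbers are illustrative, not from the text):

```python
# Drop of a light ray crossing the accelerating lift of figure 24.
# The ray falls like a body thrown horizontally at the speed of light:
# during the crossing time t = w/c it drops by (1/2) g t^2.
c = 3.0e8   # speed of light, m/s
g = 9.8     # acceleration of the lift (or gravitational field), m/s^2
w = 3.0     # assumed width of the lift, m

t = w / c               # time for the light to cross the lift
drop = 0.5 * g * t**2   # vertical deflection relative to the lift
print(drop)             # about 5e-16 m: real, but hopelessly tiny
```

A sub-femtometre drop is why the bending cannot be seen in a terrestrial laboratory, and why the eclipse observations mentioned below, where light grazes the sun's far stronger field over a long path, were needed instead.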
**Fig. 24**
It follows from the examples given that there is a well-founded hope of formulating a relativistic physics. But to do so we must first attack the problem of gravitation.

From our analysis of what happens in the lift of our example we see the possibility of building a new, relativistic physics, eliminating completely the classical ghosts of absolute motion and of inertial _SC_. Our idealized experiments show how intimately the general theory of relativity and the problem of gravitation are bound up with each other, and why the equivalence of inertial and gravitational mass plays an essential role in this connection. It is clear that the solution of the gravitational problem given by the general theory of relativity must differ from Newton's. The laws of gravitation, like all the laws of nature, must be formulated for all possible _SC_, whereas the laws of Newton's classical mechanics are valid only in inertial _SC_.
#### GEOMETRY AND EXPERIMENT
Our next example will be even more fantastic than the one with the falling lift. We are forced to approach a new problem: that of the connection between the general theory of relativity and geometry. Let us begin by describing a world inhabited only by two-dimensional creatures, and not, like ours, by three-dimensional beings. The cinema has accustomed us to two-dimensional creatures acting on a two-dimensional screen. Now let us imagine that these shadows and images on the screen have a true, real existence; that they are beings which think and create their own science, and that the two-dimensional screen constitutes their geometrical space. These creatures are unable to imagine, in a concrete way, a three-dimensional space, just as we cannot imagine a world of four dimensions. They can curve a straight line, they know what a circle is; but they are unable to construct a sphere, because that would mean leaving their two-dimensional screen. We are in a similar position. We can bend and curve lines and surfaces, but the idea of curved three-dimensional space has no meaning for our imagination.

By living, thinking, and experimenting, our image-beings could, in time, arrive at a knowledge of the two-dimensional geometry of Euclid. They could prove, for example, that the sum of the angles of a triangle is 180 degrees. It would be easy for them to construct concentric circles, some small and some very large, and to show that the ratio of the circumferences of any two of them is equal to the ratio of their respective radii, which is again a result characteristic of Euclidean geometry. If the screen were infinitely large, these image-creatures would find that, walking straight ahead in a fixed direction, they never return to their starting point.

Now imagine that someone outside the screen, from the "third dimension", transfers them from the screen to a spherical surface of very great radius. If our two-dimensional creatures are very small in relation to the surface of the sphere, and have no means of travelling great distances nor of communicating between points too far apart, then they will not notice the change in the nature of their space. The sum of the angles of small triangles will still be 180 degrees. Two small concentric circles will still show that the ratio of their circumferences equals the ratio of their radii. A walk along a straight path, always in the same direction, will never lead them back to their starting point.

But suppose that these two-dimensional beings develop, in the course of time, an advanced science and technology; that they find means of communication which allow them to cover great distances quickly. They will then discover that, travelling always "straight ahead", they can finally return to their starting point. "Straight ahead", without deviating, means moving along a great circle of the sphere. They will also find that the ratio of the radii of two circles is not equal to the ratio of their circumferences if one of the radii is small and the other very great.

If these two-dimensional beings are conservative, if they have studied Euclidean geometry for generation after generation, from the time when they did not yet possess the swift modern means of communication, that is, when this geometry agreed with experience, it is almost certain that they will make every possible effort to uphold the geometry of Euclid, despite the evident contradiction with their measurements. They could make physics bear the blame for the discrepancies found, seeking reasons such as, for example, temperature differences which deform the lines and make Euclidean geometry appear not to hold. But, sooner or later, they will become convinced that there is a more logical and convenient way of describing these facts. They will finally understand that their world is finite and obeys geometrical principles different from those they knew. They will understand that, despite their inability to imagine it, their world is the two-dimensional surface of a sphere. They will soon find new foundations for the geometry of their space, which, though different from Euclid's, can nevertheless be formulated with equal logic and consistency. For the later generations, brought up on the new geometry of the sphere, the geometry of Euclid will seem more complicated and artificial, since it does not fit the observed facts.

Let us now return to the three-dimensional creatures of our world.

What is implied by the statement that our three-dimensional space is Euclidean in nature? It means that all the logical consequences deduced from the geometry of Euclid are confirmed by experience. With rigid bodies or light rays we can construct objects whose forms correspond to those of the ideal objects of geometry. The edge of a ruler or a ray of light corresponds to a straight line; the sum of the angles of a triangle built of thin rigid rods is 180 degrees; the ratio of the radii of two circles with a common centre, made of thin rigid wire, is equal to that of the circles themselves. Interpreted in this way, Euclidean geometry becomes a simple chapter of physics.

But we can imagine that discrepancies had been discovered: for instance, that the sum of the angles of a very large triangle, built of rods which for many reasons had to be regarded as rigid, was not 180 degrees. Since we are accustomed to the idea of representing the figures of Euclidean geometry concretely by rigid bodies, we should certainly seek some physical cause affecting our rods in such a way that their abnormal behaviour could be explained. To save Euclidean geometry, we should accuse our objects of not being truly rigid, of not corresponding exactly to those of that geometry. We should also try to discover the nature of the forces to which we attribute the deformations, and their influence on other phenomena, and should seek a more perfect representation of the figures of our geometry. If we did not succeed in combining this geometry and physics into a simple and consistent picture, we should be forced to give up the idea of the Euclidean nature of our space and to seek a more appropriate representation of reality, adopting more general assumptions about its geometrical character.
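How small the circles must be for the creatures to miss the curvature can be made quantitative. A sketch under assumed numbers: on a sphere of radius R, a circle whose radius measured along the surface is r has circumference 2πR·sin(r/R), a standard result of spherical geometry (not derived in the text):

```python
# Ratio of a circle's circumference to the Euclidean value 2*pi*r,
# for circles drawn on a sphere of radius R. The radii are illustrative.
import math

def circumference_on_sphere(r, R):
    """Circumference of a circle of geodesic radius r on a sphere of radius R."""
    return 2 * math.pi * R * math.sin(r / R)

R = 1000.0  # assumed sphere radius
small = circumference_on_sphere(1.0, R) / (2 * math.pi * 1.0)
large = circumference_on_sphere(800.0, R) / (2 * math.pi * 800.0)
print(small)  # ~1.0: a small circle looks perfectly Euclidean
print(large)  # ~0.897: a large circle betrays the curvature
```

This is exactly the experience of the shadow-beings: as long as r is tiny compared with R, the ratio is indistinguishable from 1 and Euclid seems to hold; only measurements over distances comparable to R reveal the sphere.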
**Fig. 25**
La necesidad de ello se puede ilustrar con un experimento ideal que demuestre que una física realmente relativista no puede estar basada en la geometría de Euclides. Nuestro razonamiento implicará resultados ya conocidos, respecto a _SC_ inerciales y a la teoría de la relatividad restringida.
Imaginemos un enorme disco sobre el que se trazaron dos círculos concéntricos, uno muy pequeño y el otro muy grande (figura 25). El disco gira rápidamente respecto a un observador exterior. Admitamos que el _SC_ de este observador sea inercial y que ha trazado dos circunferencias que permanecen en reposo en su _SC_ , pero que coinciden con las dos del disco en rotación. En su _SC_ , que es inercial, vale la geometría euclidiana, de manera que comprobará la igualdad de los cocientes entre los radios y las respectivas circunferencias. ¿Qué dice al respecto un observador en reposo sobre el disco? Desde el punto de vista de la física clásica y también del de la relatividad restringida, su _SC_ es un sistema prohibido. Si pretendemos encontrar nuevas formas para las leyes físicas, válidas en cualquier _SC_ , debemos tratar al observador del disco con igual seriedad que al de fuera. Nosotros, del exterior, estamos, pongamos por caso, siguiendo al observador interior en su tarea de medir las longitudes de las dos circunferencias y sus radios. Éste emplea la misma barra métrica usada por el observador exterior al efectuar sus determinaciones. «La misma barra quiere decir, realmente, la que usó y le fue entregada por el observador de afuera o una de un par de barras que tienen la misma longitud en reposo en el _SC_ exterior.»
El observador que está sobre el disco empieza por determinar las longitudes de la circunferencia pequeña y la de su radio. El resultado que encuentra es el mismo que halló el otro observador, el exterior. En efecto, ante todo suponemos que el eje de rotación del disco coincide con el centro de los círculos, como se ve en la figura. Las partes próximas al eje tienen velocidades pequeñas. Por lo tanto, si el círculo es suficientemente pequeño es perfectamente factible aplicar la mecánica clásica. Esto significa que la barra tiene la misma longitud para ambos observadores; en consecuencia el resultado de las medidas será igual para los dos. Terminada esta operación el observador del disco se dispone a medir la longitud del radio del círculo grande. Colocada sobre el radio, la barra se mueve respecto del observador exterior, pero su longitud permanece invariable, es decir, igual para ambos observadores, pues la dirección del movimiento es normal al radio. Así pues, tres medidas resultan iguales para los dos experimentadores, a saber: las de los dos radios y la de la circunferencia menor. ¡Pero no sucede lo mismo con la cuarta medida! La longitud de la circunferencia mayor será diferente. La barra puesta sobre la circunferencia (trazo grueso de la figura 25) en la dirección del movimiento, aparecerá, ahora, contraída para el observador en reposo. La velocidad sobre esta circunferencia es mucho mayor que la velocidad sobre el círculo interior y la contracción de longitud debe ser tenida en cuenta. Por eso, si se aplica la teoría de la relatividad restringida llegamos a la conclusión siguiente: la longitud de la circunferencia mayor será diferente según la determine uno u otro de nuestros dos observadores. Como sólo una de las cuatro longitudes medidas por ambos experimentadores no es la misma para los dos, los cocientes entre los dos radios y las dos circunferencias no pueden ser iguales para un observador del disco si, como sabemos, lo son para el otro. 
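The asymmetry between the four measurements can be put in numbers. A sketch under assumed figures (radii and angular speed are illustrative, deliberately chosen so the rim moves at a relativistic speed): the rod laid along the large circumference is contracted by the Lorentz factor, so the disc observer fits correspondingly more rod-lengths around the rim than Euclid predicts.

```python
# Circumferences as counted with the rotating observer's rods. Each rod on a
# circle of radius r moves at v = omega*r, so it is shortened by 1/gamma(v)
# and the counted circumference grows by gamma(v). Radial rods are unaffected.
import math

c = 3.0e8                      # speed of light, m/s
r_small, r_large = 1.0, 100.0  # illustrative radii, m
omega = 6.0e5                  # illustrative angular speed, rad/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

C_small = 2 * math.pi * r_small * gamma(omega * r_small)  # barely changed
C_large = 2 * math.pi * r_large * gamma(omega * r_large)  # noticeably larger

print(C_large / C_small)  # > 100: circumference ratio != radius ratio (100)
```

The radius ratio is exactly 100 for both observers, while the circumference ratio measured on the disc exceeds 100: precisely the failure of Euclidean geometry described in the text.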
Esto significa que un hombre sobre el disco en rotación no puede comprobar la validez de la geometría euclidiana en su _SC_.
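El razonamiento anterior puede ponerse en números. El siguiente esbozo en Python (con un disco hipotético cuyo borde alcanza 0,6 c; los valores son puramente ilustrativos y no figuran en el texto) calcula la circunferencia que mediría el observador del disco, cuya barra tangencial se contrae en el factor de Lorentz:

```python
import math

C_LUZ = 299_792_458.0  # velocidad de la luz en m/s

def longitud_medida(radio, omega):
    """Circunferencia que mide el observador del disco a la distancia `radio`.

    Su barra, colocada tangencialmente, se contrae en el factor de Lorentz
    vista desde fuera, de modo que caben mas barras en la circunferencia:
    la longitud medida resulta mayor que 2*pi*radio.
    """
    v = omega * radio                        # velocidad tangencial
    gamma = 1.0 / math.sqrt(1.0 - (v / C_LUZ) ** 2)
    return 2.0 * math.pi * radio * gamma

r_chico, r_grande = 1.0, 1000.0              # radios en metros
omega = 0.6 * C_LUZ / r_grande               # el borde se mueve a 0.6 c

c_chico = longitud_medida(r_chico, omega)
c_grande = longitud_medida(r_grande, omega)

print(c_chico / r_chico)    # practicamente 2*pi
print(c_grande / r_grande)  # 2*pi * 1.25: el cociente ya no es euclidiano
```

El cociente entre la circunferencia grande y su radio excede 2π: exactamente la desviación de la geometría euclidiana que encuentra el observador del disco.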
Ante este resultado, el observador del disco podría decir que no desea considerar _SC_ en los que no valga la geometría de Euclides. En efecto, la bancarrota de esta geometría se debe a la rotación absoluta del disco, al hecho de que su _SC_ es inadecuado, prohibido. Pero al expresarse de esta manera rechaza la idea principal de la teoría general de la relatividad. Si, por el contrario, estamos decididos a descartar la posibilidad del movimiento absoluto y conservar la idea de una teoría general de la relatividad, entonces la física debe ser edificada sobre la base de una geometría más general que la de Euclides. No hay manera de eludir esta consecuencia si todos los _SC_ son permitidos.
Los cambios que trae aparejada la relatividad general no se limitan al concepto de espacio. En la relatividad restringida teníamos relojes en reposo, sincronizados en cada _SC_ y con idéntica marcha, es decir, que indican el mismo tiempo simultáneamente. ¿Qué pasa con un reloj en un _SC_ que no sea inercial? Para responder a esta pregunta, utilizaremos otra vez el experimento ideal del disco giratorio. El observador exterior tiene, en su _SC_ inercial, relojes perfectos que tienen la misma marcha y están sincronizados entre sí. El experimentador del disco toma dos de esos relojes y coloca uno sobre la circunferencia pequeña y el otro en la periferia del círculo mayor. El reloj situado sobre la circunferencia interior tiene una velocidad muy pequeña en relación con el observador exterior; por ello se puede aceptar que su marcha será la misma que uno de los relojes en reposo fuera del disco. Pero el reloj puesto sobre la circunferencia grande tendrá una velocidad considerable, por lo cual su marcha será diferente de la de los relojes exteriores y también de la del otro reloj colocado sobre el círculo pequeño del disco. Luego, los dos relojes en rotación tendrán marchas distintas, y aplicando las consecuencias de la teoría de la relatividad restringida se ve, de nuevo, que en el _SC_ giratorio no se pueden tomar las mismas disposiciones que en un _SC_ inercial.
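La diferencia de marcha de los dos relojes se estima con la fórmula de la dilatación temporal de la relatividad restringida. Un esbozo en Python, con un disco hipotético de valores puramente ilustrativos:

```python
import math

C_LUZ = 299_792_458.0  # velocidad de la luz en m/s

def factor_de_marcha(radio, omega):
    """Factor en que se retrasa, visto desde fuera, un reloj del disco.

    El reloj situado a distancia `radio` del eje lleva una velocidad
    tangencial v = omega*radio; la relatividad restringida da para su
    marcha el factor sqrt(1 - v^2/c^2) respecto de un reloj en reposo.
    """
    v = omega * radio
    return math.sqrt(1.0 - (v / C_LUZ) ** 2)

omega = 0.8 * C_LUZ / 1000.0                   # el borde (r = 1000 m) va a 0.8 c

cerca_del_eje = factor_de_marcha(1.0, omega)   # practicamente 1
en_el_borde = factor_de_marcha(1000.0, omega)  # 0.6: marcha mas lenta
```

El reloj próximo al eje marcha casi igual que los exteriores; el del borde se retrasa apreciablemente, tal como concluye el texto.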
Para esclarecer las conclusiones que se pueden alcanzar de esta y anteriores experiencias ideales, registremos una vez más un diálogo entre un físico viejo C, que cree en la física clásica, y uno moderno M, que conoce la teoría general de la relatividad. C es el observador exterior, sobre el _SC_ inercial; mientras que M está sobre el disco giratorio.
C. — En su _SC_ , no es válida la geometría euclidiana. He observado sus mediciones y estoy de acuerdo en que, según ellas, la razón de las dos circunferencias no es igual a la razón de sus dos radios. Pero esto indica solamente que su _SC_ es un sistema inadecuado, prohibido. En cambio, mi _SC_ es de carácter inercial y puedo aplicar en él, con seguridad, la geometría euclidiana. Su disco está en movimiento absoluto y desde el punto de vista de la física clásica constituye un _SC_ prohibido, en el cual no se cumplen las leyes de la mecánica.
M. — No me hable de movimiento absoluto. Mi _SC_ es tan bueno como el suyo. Lo que yo vi fue que su _SC_ giraba con respecto a mi disco. Nadie puede prohibirme referir todos los movimientos a mi _SC_.
C. — ¿Pero no sintió usted una extraña fuerza que lo trataba de alejar del centro del disco? Si éste no estuviera girando como un tiovivo, no habría usted observado esta fuerza radial ni la diferencia de cocientes de que hemos hablado anteriormente. ¿No son suficientes estos hechos para convencerle de que su _SC_ está en movimiento absoluto?
M. — ¡De ninguna manera! He notado, es cierto, los dos hechos que usted menciona; pero creo que sobre mi disco actúa un extraño campo gravitatorio que es el causante de ambos. Este campo, dirigido hacia la periferia del disco, deforma mis barras rígidas y modifica la marcha de mis relojes. El campo gravitatorio, la geometría no euclidiana, relojes con marchas diferentes, son para mí hechos estrechamente relacionados. Al aceptar cualquier _SC,_ debo, al mismo tiempo, suponer la existencia de un campo gravitatorio apropiado.
C. — ¿Se da usted cuenta de las dificultades causadas por la teoría general de la relatividad? Desearía hacerme entender claramente tomando un simple ejemplo no físico. Imagine una ciudad americana ideal, formada por calles paralelas y por avenidas también paralelas entre sí, pero perpendiculares a las calles. La distancia entre las calles y las avenidas es siempre la misma; luego, las manzanas son todas de igual área. De esta manera puedo individualizar cualquiera de ellas. Esta construcción sería imposible sin la geometría euclidiana. Así, por ejemplo, no podemos cubrir toda la Tierra con una sola y gran ciudad ideal, tipo americano. Un vistazo a un globo terráqueo le convencerá. Y tampoco sería posible cubrir vuestro disco con una ciudad de dicho tipo. Usted sostiene que sus barras son deformadas por un campo gravitatorio. El hecho de que usted no pudiera confirmar el teorema de la proporcionalidad entre los radios y las circunferencias respectivas, demuestra claramente que si usted lleva suficientemente lejos el plan de construcción de las calles y avenidas perpendiculares entre sí, tarde o temprano encontrará dificultades insalvables. En su disco giratorio la geometría se parece a la de una superficie curva, donde, naturalmente, no se puede llevar a cabo la construcción de dichas calles y avenidas, perpendiculares entre sí, sobre una parte suficientemente grande de la superficie. Para dar un ejemplo más físico, tomemos un plano irregularmente calentado, es decir, a temperaturas diferentes en distintas partes de su superficie. ¿Podría usted, con pequeños listones de hierro que se dilatan con los cambios de temperatura, efectuar la construcción reticular representada en la figura 26? ¡Naturalmente que no! Su «campo gravitatorio» les juega las mismas tretas a las barras de su _SC_ que la variación de temperatura a los listoncitos de hierro.
**Fig. 26**
M. — No me asusta todo esto. Su construcción de calles y avenidas perpendiculares entre sí hace falta para determinar las posiciones de los cuerpos y necesitamos los relojes para ordenar los acontecimientos en el tiempo. La ciudad no tiene que ser el tipo geométrico americano de la figura 26, puede ser, perfectamente, del tipo de la antigua ciudad europea. Imagine su ciudad construida sobre un material plástico al que después deformamos. Aun así, sería posible numerar las manzanas y distinguir las diversas calles y avenidas, aunque éstas no sean ya equidistantes ni rectas (figura 27). Análogamente, sobre la Tierra, la longitud y latitud de un punto determinan su posición, aun cuando no haya una estructura del tipo, varias veces referido, de la «ciudad americana».
C. — Pero aún veo una dificultad en el uso de su estructura tipo «antigua ciudad europea». Estoy de acuerdo con que usted puede ordenar los puntos o los sucesos, pero la construcción embrollará las mediciones de las distancias. No le dará las _propiedades métricas_ del espacio como ocurre con mi subdivisión. Tomemos un ejemplo. Yo sé que, en mi «ciudad americana», para caminar diez manzanas tengo que recorrer una distancia doble a la de cinco manzanas. Sabiendo que todas las manzanas son iguales, la determinación de distancias me resulta inmediata.
**Fig. 27**
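El cómputo de distancias que describe C en su «ciudad americana» es la llamada distancia «taxista»: manzanas recorridas en cada dirección, multiplicadas por el lado (común) de una manzana. Un esbozo mínimo en Python:

```python
def distancia_manzanas(p, q, lado_manzana=1.0):
    # Distancia caminada entre dos esquinas de la reticula: manzanas
    # recorridas a lo largo de las calles mas las recorridas a lo largo
    # de las avenidas, por el lado (comun) de una manzana. Solo vale
    # si todas las manzanas son iguales, como exige C.
    return (abs(p[0] - q[0]) + abs(p[1] - q[1])) * lado_manzana

diez = distancia_manzanas((0, 0), (10, 0))
cinco = distancia_manzanas((0, 0), (5, 0))
print(diez == 2 * cinco)  # True: diez manzanas, el doble que cinco
```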
M. — Esto es verdad. En mi «ciudad europea» no se pueden medir las distancias, directamente, por el número de manzanas deformadas. Debo conocer algo más; debo conocer las propiedades geométricas de la superficie sobre la cual se construyó la hipotética «ciudad europea». Todo el mundo sabe que de 0° a 10° de longitud en el ecuador, no hay la misma distancia que entre 0° y 10° cerca del polo; todo navegante sabe cómo hallar las distancias entre dos de esos puntos de la Tierra porque conoce las propiedades geométricas de nuestro planeta; lo puede hacer mediante cálculos basados en la trigonometría esférica o experimentalmente, recorriendo con su barco dichas distancias a igual velocidad. En su caso todo ese problema resulta trivial porque las calles y las avenidas están igualmente separadas. En el caso de nuestra Tierra el asunto se complica, pues los meridianos 0° y 10° se cruzan en el polo y tienen el máximo de separación en el ecuador. Análogamente, en mi estructura tipo «ciudad europea» debo conocer algo más que usted en su estructura tipo «ciudad americana» para determinar distancias. Puedo adquirir este conocimiento adicional estudiando las propiedades geométricas de mi continuo en cada caso particular.
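El ejemplo del navegante que da M puede ponerse en números. El arco entre dos meridianos, medido a lo largo de un paralelo, vale R·cos(φ)·Δλ; el siguiente esbozo en Python (con un radio terrestre aproximado, tomado como supuesto) muestra que 10° de longitud cubren mucha menos distancia cerca del polo que en el ecuador:

```python
import math

R_TIERRA = 6371.0  # radio medio terrestre en km (valor supuesto)

def arco_sobre_paralelo(lat_grados, delta_lon_grados):
    # El paralelo a latitud phi es un circulo de radio R*cos(phi);
    # el arco entre dos meridianos separados delta_lon es ese radio
    # multiplicado por la diferencia de longitudes en radianes.
    phi = math.radians(lat_grados)
    return R_TIERRA * math.cos(phi) * math.radians(delta_lon_grados)

en_ecuador = arco_sobre_paralelo(0.0, 10.0)    # ~1112 km
a_80_grados = arco_sobre_paralelo(80.0, 10.0)  # ~193 km, cerca del polo
```

Ese factor cos(φ) es, precisamente, el «conocimiento adicional» sobre las propiedades geométricas de la superficie del que habla M.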
C. — Pero todo esto sirve, precisamente, para mostrar cuán complicado e inconveniente resulta reemplazar la estructura simple de la geometría euclidiana por la intrincada armazón que usted se ve obligado a usar. ¿Es esto realmente necesario?
M. — Me temo que sí, si queremos aplicar la física a cualquier _SC_ , sin tener que depender del misterioso _SC_ inercial. Admito que mi instrumento matemático es más complejo que el suyo; pero mis suposiciones físicas son más simples y más naturales.
La discusión se limitó a continuos bidimensionales. El asunto es más complicado en la teoría de la relatividad general, pues en ella debemos tratar con el continuo de cuatro dimensiones. No obstante, las ideas son las mismas que las que hemos esbozado con motivo del continuo bidimensional. En la relatividad general no se puede usar el andamio construido con barras rectas, paralelas y perpendiculares entre sí y relojes sincronizados como nos era permitido en la teoría de la relatividad restringida, pudiendo, sin embargo, ordenar puntos y sucesos con esas barras no euclidianas y con los relojes de marcha desigual. Pero medidas que requieran barras rígidas y relojes sincronizados y con marcha perfecta pueden sólo llevarse a cabo en un _SC_ inercial de carácter local. Para éste es válida la teoría relativista restringida; pero nuestro _SC_ «bueno» es sólo local, estando su naturaleza inercial confinada a un pequeño espacio y a un tiempo corto. Es factible predecir desde un _SC_ arbitrario los resultados de las medidas y observaciones efectuadas en un _SC_ inercial local; pero para esto es imprescindible conocer el carácter geométrico del continuo espacio-tiempo.
Los experimentos ideales que citamos sólo nos indican el carácter general de la nueva física relativista. Nos muestran que nuestro problema fundamental es el de la gravitación y que la relatividad generalizada conduce a una generalización muy amplia de los conceptos de tiempo y espacio.
#### LA RELATIVIDAD GENERAL Y SU VERIFICACIÓN
La teoría general de la relatividad intenta formular las leyes físicas para todos los _SC,_ indistintamente. La gravitación es el problema fundamental de esta teoría. La relatividad constituye el primer esfuerzo serio de reforma de la ley de la gravitación desde el tiempo de su descubrimiento por Newton. ¿Será esto realmente necesario? Recapitulemos.
Ya hemos expuesto los éxitos de la teoría de Newton, que dio lugar al tremendo desarrollo de la astronomía basado sobre su ley de gravitación. Esta ley de Newton continúa aún siendo la base de todos los cálculos astronómicos. Pero recordemos también las objeciones hechas a esta teoría. En efecto, la ley de Newton es válida únicamente en los _SC_ inerciales de la física clásica; _SC_ definidos por la condición de que para ellos deben valer las leyes de la mecánica. La fuerza entre dos masas depende de la distancia que las separa. La relación entre la fuerza y la distancia es, como sabemos, un invariante con respecto a la transformación clásica. Pero esta ley no se ajusta al marco de la relatividad restringida, pues la distancia no es invariante respecto de la transformación de Lorentz. Podríamos tratar, como hicimos con tanto éxito con las leyes del movimiento, de generalizar la ley de la gravitación de manera que se ajuste a la teoría especial de la relatividad; o, en otras palabras, formularla de tal modo que resulte invariante respecto a la transformación de Lorentz y no a la transformación de Galileo. Pero esta ley de Newton resistió obstinadamente todos los esfuerzos hechos para simplificarla y adaptarla a la teoría de la relatividad restringida. Aun cuando hubiéramos salido airosos de esta empresa, nos quedaría por dar, aún, un paso importante: el paso del _SC_ inercial al _SC arbitrario_ de la teoría de la relatividad general. Por otra parte, las experiencias ideales del ascensor muestran claramente que no sería posible la formulación de una teoría general de la relatividad sin resolver el problema de la gravedad. Y por esto vemos, asimismo, por qué la solución relativista del problema de la gravitación ha de ser distinta de la interpretación clásica.
Hemos tratado de señalar, una vez más, el camino que conduce a la teoría de la relatividad general y las razones que nos fuerzan a modificar nuestro punto de vista anterior. Sin entrar en la estructura formal de la teoría, expondremos ciertos rasgos distintos de la nueva teoría de la gravitación en relación con la newtoniana. No debiera resultar muy difícil ver la naturaleza de estas diferencias, teniendo en cuenta lo expuesto hasta el momento:
1. Las ecuaciones gravitatorias de la teoría de la relatividad general pueden ser aplicadas a cualquier _SC_. La elección de un determinado _SC_ para un caso dado es sólo una cuestión de conveniencia práctica. Teóricamente todos los _SC_ son permitidos. En los casos en que la gravitación pueda ser despreciada encontraremos automáticamente las leyes de la relatividad restringida.
2. La ley de gravitación de Newton relaciona el movimiento de un cuerpo en un cierto lugar del espacio y en un determinado instante del tiempo, con la acción simultánea de otro cuerpo a cierta distancia (grande o pequeña) del primero. Ésta es la ley que constituyó un verdadero modelo de todo el sistema conceptual mecanicista. Pero el punto de vista mecanicista se vino abajo. Con las leyes de Maxwell se creó un nuevo modelo de ley natural. Las ecuaciones de Maxwell son estructurales. Como sabemos, relacionan sucesos que se producen «aquí» y «ahora» con sucesos que acontecerán un poco más tarde en el entorno inmediato. Son las leyes que describen las variaciones del campo electromagnético. Las nuevas ecuaciones gravitatorias son también leyes estructurales que describen los cambios del campo gravitatorio. Hablando esquemáticamente, podríamos decir: la transición de la ley de la gravitación de Newton a la relatividad general recuerda en algo el pasaje de la teoría de los fluidos eléctricos y de la ley de Coulomb a la teoría de Maxwell.
3. Nuestro mundo no es euclidiano. Su naturaleza geométrica está determinada por la distribución de la materia y de su velocidad. Las ecuaciones gravitatorias de la teoría general de la relatividad tratan de revelar las propiedades geométricas del mundo.
Supongamos, por el momento, que hubiéramos conseguido desarrollar el programa de la relatividad general. ¿No estamos en peligro de llevar la especulación demasiado lejos de la realidad? Sabemos con qué exactitud la teoría clásica explica las observaciones astronómicas. ¿Existe la posibilidad de tender un puente entre la nueva teoría y la observación? Toda especulación tiene que ser controlada por la experiencia, y la más hermosa de las teorías tiene que ser rechazada si no se ajusta a los hechos. ¿Cómo resistió la nueva teoría la prueba experimental? Esta pregunta se puede responder con una sola frase: la teoría de la gravitación de Newton es un caso particular de la relativista. Si las fuerzas de gravitación son relativamente débiles, la antigua teoría newtoniana resulta una buena aproximación a las nuevas leyes de gravitación. Luego, todas las observaciones que confirman la teoría clásica confirman también la teoría relativista. Recuperamos la teoría anterior desde el nivel más elevado de la nueva.
Aun cuando no se pudieran encontrar observaciones adicionales en favor de la teoría relativista, si su explicación fuera sólo tan buena como la anterior, deberíamos decidirnos por ella. Las ecuaciones de esta teoría son más complicadas desde el punto de vista formal, pero sus hipótesis fundamentales son mucho más simples. En ellas han desaparecido los dos fantasmas terribles: el tiempo absoluto y el sistema inercial. La clave de la equivalencia entre la masa gravitatoria y la masa inerte no pasa inadvertida aquí. No hacen falta hipótesis sobre la dependencia de la fuerza de gravitación respecto de la distancia. Las ecuaciones gravitatorias tienen la forma de leyes de estructura, forma requerida a toda ley física desde el gran descubrimiento de la teoría del campo.
Se pueden deducir, sin embargo, ciertas consecuencias nuevas de la teoría relativista de la gravitación. Una de ellas, la desviación de los rayos luminosos en un campo gravitatorio, ha sido citada ya. Vamos a mencionar a continuación otras dos consecuencias más.
Como ya dijimos, las ecuaciones relativistas se reducen a la ley de la gravitación de Newton para campos débiles; luego, para que aparezcan discrepancias con las leyes clásicas, deberemos considerar campos gravitatorios muy intensos. Tomemos nuestro sistema solar. Los planetas, la Tierra entre ellos, se mueven en órbitas elípticas alrededor del Sol. La atracción entre éste y Mercurio es mayor que la que existe entre él mismo y cualquier otro planeta, pues Mercurio es el planeta más cercano al astro central. Si existe alguna esperanza de encontrar una desviación de la ley de Newton, es en este planeta donde hay mayor probabilidad de hallarla. Según la gravitación universal clásica, la trayectoria de Mercurio debe ser de igual naturaleza que las de los demás planetas, con excepción de que está más próxima al Sol. La teoría de la relatividad predice, en cambio, que su trayectoria debe ser algo diferente, a saber: además del movimiento de traslación elíptico de Mercurio alrededor del Sol, la elipse, que constituye su trayectoria newtoniana, debería girar lentamente, respecto al _SC_ unido rígidamente al Sol, dibujando como resultante la pintoresca trayectoria en roseta de la figura 28. Esta rotación de la elipse constituye el efecto nuevo predicho por la relatividad, que da también su magnitud. ¡La elipse de Mercurio efectuaría, según los cálculos relativistas, una rotación completa en tres millones de años! Se ve que el efecto es muy débil y pocas esperanzas habría de descubrirlo en planetas más alejados del Sol que Mercurio.
**Fig. 28**
La desviación del movimiento de Mercurio respecto a la elipse newtoniana era en realidad conocida con anterioridad a la formulación de la teoría de la relatividad, pero no tenía explicación alguna. Por otra parte, la teoría general de la relatividad fue desarrollada sin tener en cuenta este problema particular. No fue hasta después de formulada esta teoría cuando se dedujo de sus ecuaciones gravitatorias la rotación de la elipse newtoniana alrededor del Sol. En el caso de Mercurio la teoría explicó con éxito la discrepancia entre el movimiento real y el movimiento predicho por la ley de Newton.
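La cifra del texto coincide con el célebre corrimiento del perihelio de Mercurio, de unos 43 segundos de arco por siglo, como muestra esta verificación aritmética:

```python
# Una rotacion completa de la elipse (360 grados, es decir, 1.296.000
# segundos de arco) repartida en unos tres millones de anios:
vuelta_en_arcosegundos = 360 * 3600
anios = 3_000_000
por_siglo = vuelta_en_arcosegundos / anios * 100
print(por_siglo)  # 43.2 segundos de arco por siglo
```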
Hay una conclusión más, deducida de la teoría general de la relatividad, que fue puesta a prueba por la experiencia. Antes hemos visto que un reloj, colocado sobre la circunferencia grande del disco giratorio, tiene una marcha distinta de otro igual colocado más cerca del eje de rotación. Análogamente, de la teoría relativista se deduce que un reloj situado en el Sol tiene una marcha diferente a la de otro reloj idéntico en la Tierra, pues el campo gravitatorio es más intenso en el Sol que en nuestro planeta.
Ya hemos indicado que el sodio incandescente emite una luz amarilla homogénea, o sea de una longitud de onda determinada. En esta radiación, el átomo revela uno de sus posibles ritmos; el átomo representa, por así decir, un reloj, y la longitud de onda emitida, uno de sus ritmos. Según la teoría de la relatividad generalizada, la longitud de onda de la luz emitida por el átomo de sodio, por ejemplo, colocado en el Sol, debiera ser algo mayor que la de la luz emitida por el mismo elemento sobre la Tierra.
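En la aproximación de campo débil, el corrimiento relativo de la longitud de onda vale GM/(Rc²). Un esbozo en Python con constantes aproximadas del Sol (valores supuestos, no tomados del texto):

```python
# Constantes aproximadas (valores supuestos, no del texto):
G = 6.674e-11       # constante de gravitacion, m^3 kg^-1 s^-2
M_SOL = 1.989e30    # masa del Sol, kg
R_SOL = 6.96e8      # radio del Sol, m
C_LUZ = 2.998e8     # velocidad de la luz, m/s

# Corrimiento relativo de la longitud de onda en la superficie solar:
z = G * M_SOL / (R_SOL * C_LUZ**2)   # ~2.1e-6

# Para la linea amarilla del sodio (linea D, ~589.3 nm):
lambda_sodio = 589.3e-9              # metros
corrimiento = z * lambda_sodio       # ~1.25e-12 m, una milesima de nanometro
```

El corrimiento es de apenas una milésima de nanómetro, lo que da idea de la finura experimental que exige la comprobación.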
El problema de comprobar experimentalmente las consecuencias de la teoría de la relatividad general es complicado y no está aún resuelto. Como en esta exposición nos ocupamos sólo de las ideas principales, no intentaremos ir más lejos; y sólo nos resta decir que el veredicto experimental parece confirmar las nuevas conclusiones obtenidas de la teoría de la relatividad generalizada.
#### CAMPO Y MATERIA
Hemos visto cómo y por qué se vino abajo el punto de vista mecanicista. Fue imposible explicar todos los fenómenos basados en la acción de sencillas fuerzas de atracción y repulsión entre partículas inalterables. Nuestros primeros intentos de ir más allá de la concepción mecanicista, introduciendo el concepto de campo, tuvieron su mayor éxito en el dominio de los fenómenos electromagnéticos. Fueron así formuladas las leyes estructurales del campo electromagnético; leyes, volvemos a recordar, que relacionan sucesos muy próximos entre sí en el espacio y el tiempo. Estas leyes caben en el marco de la teoría de la relatividad restringida, pues son invariantes respecto de la transformación de Lorentz. Más tarde, la teoría general de la relatividad formuló las leyes de la gravitación, que también son estructurales, y que describen el campo gravitatorio entre partículas materiales. Se pudieron, también, generalizar las leyes de Maxwell, de manera que valieran en cualquier _SC_ como sucede con las leyes relativistas de la gravitación.
Tenemos dos realidades: _materia y campo._ No hay duda de que en la actualidad no se puede concebir toda la física como edificada sobre el concepto de materia, tal como lo creían los físicos de principios del siglo pasado. Por el momento tenemos que aceptar ambos conceptos. ¿Pero podemos pensar que la materia y el campo son dos realidades completamente diferentes? Dada una pequeña partícula de materia podríamos, de una manera simplista, formarnos la imagen de la misma, suponiendo que existe una superficie bien definida donde la partícula deja de existir y donde aparece su campo gravitatorio. En esta imagen, la región en la cual son válidas las leyes del campo es separada bruscamente de la región en la que está presente la materia. Pero ¿cuáles son los criterios que distinguen la materia del campo? Antes de haber estudiado la teoría de la relatividad pudiéramos haber intentado la respuesta siguiente: la materia tiene masa y el campo no. O de otra manera: el campo representa energía y la materia representa masa. Pero ya sabemos que estas respuestas son insuficientes a la luz del conocimiento posteriormente adquirido. De la teoría de la relatividad sabemos que la materia representa enormes depósitos de energía y que la energía representa materia. No se puede, por este camino, distinguir cualitativamente entre materia y campo, pues la diferencia entre masa y energía tampoco es cualitativa. La materia es, con mucho, el mayor depósito de energía; pero el campo que envuelve la partícula representa también energía, aunque en una cantidad incomparablemente menor. Por esto se podría decir: la materia es el lugar donde la concentración de energía es muy grande y el campo es donde la concentración de energía es pequeña. Pero si éste es el caso, entonces la diferencia entre materia y campo es sólo cuantitativa. No hay razón, entonces, para considerar la materia y el campo como dos cualidades esencialmente diferentes entre sí. 
No se puede imaginar una superficie nítida que separe el campo de la materia.
La misma dificultad se presenta para la carga eléctrica y su campo. Parece imposible dar un criterio cualitativo obvio para distinguir entre materia y campo o entre carga y campo.
Las leyes estructurales, es decir, las leyes de Maxwell y las gravitatorias, dejan de ser válidas para concentraciones de energías muy grandes; es decir, donde existen fuentes del campo o sea cargas eléctricas y materia. Pero ¿no podríamos modificar nuestras ecuaciones de modo que valieran en todas partes, incluso en regiones donde la energía esté enormemente concentrada?
No podemos edificar la física únicamente sobre la base del concepto de materia. Pero la división entre materia y campo es, desde el descubrimiento de la equivalencia entre masa y energía, algo artificial y no claramente definido. ¿No sería factible desechar el concepto de materia y estructurar una física fundamentada sólo en el concepto del campo? Según esta concepción lo que impresiona nuestros sentidos como materia es, realmente, una enorme concentración de energía dentro de un volumen relativamente muy reducido. Podríamos considerar materia las regiones donde el campo es extremadamente intenso. De esta manera se crearía un nuevo panorama filosófico. Su misión y objetivo último sería la explicación de todos los fenómenos de la naturaleza por medio de leyes estructurales, válidas siempre y en todas partes. Desde este punto de vista, una piedra que cae sería un campo variable en el que los estados de máxima energía se desplazan por el espacio con la velocidad de la piedra. En una física tal no habría lugar para ambos conceptos, materia y campo; este último sería la única realidad. Esta nueva concepción nos es sugerida por el triunfo, sin precedente, de la física del campo, por el éxito alcanzado al expresar las leyes de la electricidad, magnetismo y gravitación en forma de leyes estructurales y, finalmente, por el descubrimiento de la equivalencia entre masa y energía. Nuestro problema último sería modificar las leyes del campo de tal modo que no dejen de valer en las regiones de concentración energética singular.
Pero todavía no se ha conseguido cumplir convincente y consistentemente con este programa. La decisión definitiva de su posibilidad corresponde al futuro. Hoy debemos admitir en todas nuestras construcciones teóricas las dos realidades: campo y materia.
Quedan aún ante nosotros problemas fundamentales. Sabemos que toda materia está edificada sobre una pequeña variedad de partículas. ¿Cómo son las diversas formas de la materia construida a partir de esas partículas elementales? ¿Cómo interaccionan esas partículas elementales con el campo? En la busca de una respuesta a estas cuestiones se han introducido en la física nuevas ideas, las cuales constituyen los principios de la _teoría de los cuantos_.
#### RESUMEN
Un nuevo concepto aparece en la física, la invención más importante a partir de la época de Newton: el campo. Fue precisa una aguda imaginación científica para darse cuenta de que no eran las cargas ni las partículas, sino el campo existente entre ellas, lo esencial en la descripción de los fenómenos físicos. El concepto de campo resulta de una eficacia inesperada, dando origen a la formulación de las ecuaciones de Maxwell, que describen la estructura del campo electromagnético, gobernando al mismo tiempo los fenómenos eléctricos y los ópticos.
La teoría de la relatividad se origina en los problemas del campo. Las contradicciones e inconsistencias de las teorías clásicas nos obligan a adjudicar nuevas propiedades al continuo espacio-tiempo, al escenario de todos los acontecimientos de nuestro mundo físico.
La teoría de la relatividad se desarrolla en dos etapas. La primera conduce a la llamada teoría de la relatividad restringida o especial que se aplica sólo a sistemas inerciales de coordenadas, esto es, a sistemas en los que es válido el principio de inercia como lo formulara Newton. Esta teoría relativista restringida se basa sobre dos suposiciones fundamentales, a saber: las leyes físicas son las mismas en todos los sistemas de coordenadas en movimiento uniforme relativo entre sí; y la velocidad de la luz tiene siempre el mismo valor. De estos postulados, completamente confirmados por las experiencias, han sido deducidas las propiedades de barras y relojes en movimiento, su cambio de longitud y de marcha en función de la velocidad. Esta teoría modifica las leyes de la mecánica. Las leyes clásicas no se cumplen si la velocidad de la partícula móvil se aproxima a la de la luz. Las nuevas leyes relativistas del movimiento de los cuerpos han sido espléndidamente confirmadas por la experiencia. Otra consecuencia de la teoría (especial) de la relatividad es la relación entre masa y energía. La masa es energía y la energía tiene masa. Los dos principios de conservación de masa y de energía son combinados por la teoría de la relatividad en un solo principio, el de la conservación de la masa-energía.
La teoría general de la relatividad da un análisis aún más profundo del continuo espacio-tiempo. La validez de esta teoría ya no está restringida a los sistemas inerciales de coordenadas. Ataca el problema de la gravitación y formula nuevas leyes que dan la estructura del campo gravitatorio. Nos induce a analizar el papel que desempeña la geometría en la descripción del mundo físico. Considera la equivalencia entre la masa inerte y la masa gravitatoria como una clave esencial y no como una coincidencia accidental, según era considerada en la mecánica clásica. Las consecuencias experimentales de la teoría de la relatividad generalizada difieren sólo levemente de la mecánica clásica y han concordado con la experiencia cada vez que se pudo establecer la prueba. Pero el valor de la teoría reside en su coherencia interna y en la simplicidad de sus hipótesis fundamentales.
La teoría de la relatividad acentúa la importancia del concepto del campo en la física. Pero todavía no se ha conseguido formular una física de campos pura. Por ahora debemos admitir, aún, la existencia de ambos: campo y materia.
### 2
### LOS CUANTOS
#### CONTINUIDAD Y DISCONTINUIDAD
Supongamos que tenemos ante nosotros un mapa de la ciudad de Barcelona y sus alrededores. Nos preguntamos: ¿a qué puntos de este mapa puede llegarse en tren? Con una guía de ferrocarril a mano, nos será fácil hallarlos y marcarlos en el mapa. Preguntémonos ahora: ¿a qué puntos se podrá llegar viajando en coche? Si se trazan, sobre el mismo mapa, líneas que representen todos los caminos que desembocan en Barcelona, puede llegarse en automóvil a cada uno de sus puntos. En ambos casos tenemos conjuntos de puntos. En el primero, los puntos señalados están separados entre sí y representan estaciones de ferrocarril; en el segundo, son todos los puntos de las líneas que representan caminos. Ahora bien, quisiéramos saber a qué distancia de Barcelona está cada uno de esos puntos o, para ser más exactos, deseamos conocer su distancia respecto de determinado lugar de la ciudad. Estas distancias pueden hallarse fácilmente en el mapa si viene acompañado de la escala a que fue dibujado. Obtendremos, así, en el caso de las estaciones, números que representarán la distancia de cada una de ellas al lugar en cuestión. Estos números cambian de valor de manera irregular, por saltos o tramos finitos. Lo cual se expresa diciendo: las distancias de Barcelona a los lugares accesibles en tren varían de manera _discontinua._ Los lugares a que es posible llegar en automóvil cambian en cantidades tan pequeñas como se quiera; es decir, varían de manera _continua._ El aumento o disminución del camino recorrido se puede hacer tan pequeño como se quiera yendo en automóvil, pero no viajando en tren.
La producción de una mina de carbón puede variar de modo continuo; es decir, es posible aumentar o disminuir el total de carbón producido en cantidades arbitrariamente pequeñas. Pero el número de empleados puede sólo cambiar discontinuamente. No tiene, evidentemente, sentido decir: «desde ayer, el número de obreros ha aumentado en 3,78».
Si se le pregunta a una persona cuánto dinero lleva consigo, podrá dar un número que contenga únicamente dos decimales. Una suma de dinero puede sólo variar por saltos, discontinuamente. En España la moneda mínima o, como lo llamaremos, el «cuanto elemental» del dinero español, es «un céntimo». El «cuanto elemental» del dinero francés es «un céntimo», cuyo valor actual es algo menos de veinte veces el del cuanto español. En este ejemplo tenemos dos cuantos elementales cuyos valores pueden compararse entre sí. La relación de sus valores tiene un sentido preciso, pues uno de ellos vale unas veinte veces más que el otro.
Se puede afirmar, entonces, que ciertas magnitudes cambian de una manera continua y otras discontinuamente, o sea, por cantidades que no se pueden reducir indefinidamente. Estos pasos indivisibles, mínimos, se llaman _los cuantos elementales_ de la magnitud en cuestión.
Al pesar grandes cantidades de arena, se pueden considerar sus masas como continuas aunque su composición granular es evidente. Pero si la arena se hiciera muy cara, y las balanzas empleadas para pesarla fueran muy sensibles, nos veríamos obligados a tener en cuenta el hecho de que su masa tiene que cambiar indefectiblemente por un número entero de granos. La masa de uno de estos granos sería en este caso el cuanto elemental. De este ejemplo se ve cómo al aumentar la precisión de nuestras medidas, se puede descubrir que cierta magnitud, considerada hasta el momento como continua, tiene en realidad una estructura discontinua.
Si tuviéramos que sintetizar la idea principal de la teoría de los cuantos en una sola frase, diríamos: _se debe admitir que ciertas magnitudes físicas consideradas hasta el presente como continuas están compuestas de cuantos elementales_.
El número de hechos que abarca la teoría de los cuantos es tremendamente grande. Estos hechos han sido descubiertos por la técnica altamente refinada de la experimentación moderna. Como no nos será posible mostrar ni describir siquiera los experimentos básicos, tendremos que citar a menudo sus resultados dogmáticamente. Nuestro objeto es explicar solamente las ideas fundamentales.
#### LOS CUANTOS ELEMENTALES DE MATERIA Y ELECTRICIDAD
Según la teoría cinética de la materia, todos los elementos están compuestos de un gran número de moléculas. Tomemos el caso más sencillo, el del elemento más liviano, el hidrógeno. Más arriba vimos cómo el estudio del movimiento browniano llevó a la determinación de la masa de una molécula de hidrógeno. Su valor es:
0,000.000.000.000.000.000.000.0033 gramos.
Esto significa que la masa es discontinua. La masa de una porción de hidrógeno puede, según esto, variar únicamente en un número entero de cierta cantidad mínima que corresponde a la masa de una molécula de este gas. Pero los procesos químicos enseñan que la molécula de hidrógeno puede ser dividida en dos partes, o en otras palabras, que la molécula de hidrógeno está compuesta de dos átomos. En los procesos químicos, es el átomo, y no la molécula, el que desempeña el papel de cuanto elemental. Dividiendo el número anterior por dos, se obtiene la masa de un átomo de hidrógeno. Ésta vale, aproximadamente:
0,000.000.000.000.000.000.000.0017 gramos.
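El paso de la masa molecular a la atómica es una simple división por dos. Como esbozo ilustrativo (los valores numéricos son los aproximados que cita el texto, escritos en notación científica):

```python
# Masa aproximada de una molécula de hidrógeno, en gramos (valor citado en el texto)
masa_molecula_H2 = 3.3e-24

# La molécula de hidrógeno consta de dos átomos;
# la masa atómica es, por tanto, la mitad de la molecular.
masa_atomo_H = masa_molecula_H2 / 2

print(masa_atomo_H)  # aproximadamente 1.65e-24 g, que el texto redondea a 1.7e-24
```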
La masa es, pues, una magnitud discontinua. Pero no tenemos que preocuparnos de ello, naturalmente, al efectuar una pesada. Aun la más sensible de las balanzas está muy lejos de alcanzar el grado de sensibilidad que pueda poner de manifiesto la discontinuidad en la variación de la masa.
Consideremos, ahora, el caso ya tratado de un conductor unido a una fuente eléctrica. Sabemos que es recorrido por una corriente de electricidad que circula del potencial más alto al potencial más bajo. Recordemos que la sencilla teoría de los fluidos eléctricos explica muchos hechos experimentales. Recordemos también que fue, simplemente, una convención inclinarnos por la primera de las dos posibilidades siguientes: que el fluido positivo se mueve del potencial mayor al menor, y no que el fluido negativo se desplaza del potencial menor al mayor. Dejemos de lado, por el momento, todo el progreso procedente de la introducción de los conceptos de campo. Aun pensando en la imagen de los fluidos, quedan, sin embargo, por resolver algunos puntos interesantes. Tal como sugiere la palabra «fluido», la electricidad fue considerada, en un principio, como una magnitud continua. El valor de la carga podía variar, según dicho punto de vista, en cantidades o pasos arbitrariamente pequeños. No fue necesario admitir la existencia de cuantos elementales de electricidad. El éxito de la teoría cinética de la materia nos sugiere la siguiente cuestión: ¿existen cuantos elementales de electricidad? Otro asunto que queda por resolver es el siguiente: ¿consiste la corriente eléctrica en un flujo de fluido positivo, negativo o de ambos, tal vez?
La idea básica de las investigaciones efectuadas con el fin de encontrar una respuesta a las cuestiones planteadas consiste en independizar el fluido eléctrico del alambre conductor, hacerlo viajar por el espacio vacío, despojarlo de toda relación con la materia y, entonces, investigar sus propiedades, que deben aparecer, bajo tales condiciones, con la máxima claridad. Durante el siglo XIX se efectuaron muchas experiencias de este tipo. Antes de explicar la idea de los dispositivos experimentales, citaremos, por lo menos en un caso, los resultados obtenidos. El fluido eléctrico que se mueve por el conductor es negativo, dirigido, por lo tanto, del potencial menor al potencial mayor. Si se hubiera sabido esto desde un principio, cuando se formuló la teoría de los fluidos, seguramente se habrían intercambiado las denominaciones, llamando positiva a la electricidad de la barra de caucho y negativa a la carga de la barra de vidrio. Hubiera sido entonces más conveniente considerar como positivo el fluido que circula por el conductor.
Como nuestra primera suposición fue errónea, debemos afrontar sus inconvenientes. La próxima cuestión de importancia consiste en determinar si la estructura de este fluido negativo es «granular», es decir, si está o no compuesta de cuantos de electricidad. Un número de investigaciones experimentales independientes entre sí muestra, sin lugar a dudas, que existe un cuanto elemental de electricidad negativa. El fluido eléctrico negativo tiene estructura granular, exactamente como una playa se compone de granos de arena y una casa está construida de ladrillos. Este resultado fue formulado con la mayor claridad por J. J. Thomson a finales del siglo XIX. Los cuantos elementales de electricidad negativa se llaman _electrones._ En otras palabras, toda carga eléctrica negativa se compone de un gran número de cargas elementales iguales, los electrones. La carga negativa puede, como la masa, variar sólo de una manera discontinua. La carga eléctrica elemental es, sin embargo, tan pequeña, que en muchas investigaciones resulta igualmente posible y a veces hasta más conveniente considerarla como una magnitud continua. Así pues, las teorías atómica y electrónica introducen en la ciencia magnitudes físicas discontinuas que pueden variar, únicamente, por saltos.
Imaginemos dos placas metálicas paralelas situadas en el vacío. Una de las placas tiene una carga positiva, la otra negativa. Una carga positiva de prueba colocada entre las dos placas será repelida por la placa positiva y será atraída por la negativa. Así pues, las líneas de fuerza del campo eléctrico entre las placas se dirigirán de la que posee carga positiva hacia la que posee carga negativa (véase figura 29). La fuerza que actuaría sobre una carga de prueba negativa tendría sentido opuesto. Si las placas son suficientemente grandes, las líneas de fuerza, entre ellas, tendrán en todas partes la misma densidad; en este caso resulta indiferente la posición de la carga de prueba; la fuerza, y por lo tanto la densidad de las líneas de fuerza, será la misma. Los electrones introducidos entre las placas se comportarán como las gotas de una lluvia en el campo gravitatorio de la Tierra, moviéndose paralelamente entre sí, de la placa negativa hacia la placa positiva. Se conocen muchos dispositivos experimentales que permiten introducir un flujo de electrones dentro de tal campo, que los dirige a todos del mismo modo. Uno de los más simples consiste en disponer entre dichas placas un alambre suficientemente calentado. Este conductor emite electrones que son, entonces, dirigidos por las líneas de fuerza del campo existente entre las placas. Por ejemplo, las válvulas radiotelefónicas, tan familiares a todo el mundo, se basan en este principio.
Se han llevado a cabo muchos y muy ingeniosos experimentos con haces de electrones libres. Se han estudiado los cambios de sus trayectorias bajo la acción de campos eléctricos y magnéticos exteriores. Ha sido hasta posible aislar un solo electrón y determinar, así, su carga elemental y su masa, esto es, su resistencia inercial a la influencia de fuerzas exteriores. Aquí citaremos únicamente el valor de su masa, que resulta ser, aproximadamente, _dos mil veces menor_ que la masa de un átomo de hidrógeno. Es decir, la masa de un átomo de hidrógeno, que es ya tan pequeña, resulta grande en comparación con la masa del electrón. Desde el punto de vista de una teoría del campo consistente, toda la masa, es decir, toda la energía de un electrón, es la energía de su campo; casi todo su valor está concentrado en una esfera muy pequeña (el volumen del electrón) donde adquiere el máximo de intensidad. Esta intensidad disminuye rápidamente al alejarnos del «centro» del electrón.
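Ese «dos mil veces menor» puede comprobarse numéricamente. En este esbozo usamos los valores modernos aproximados de ambas masas (supuestos añadidos, pues el texto no los cita):

```python
# Valores modernos aproximados, en gramos (no citados en el texto):
masa_atomo_H = 1.67e-24   # masa del átomo de hidrógeno
masa_electron = 9.11e-28  # masa del electrón

razon = masa_atomo_H / masa_electron
print(round(razon))  # del orden de 1800: las "dos mil veces" aproximadas del texto
```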
Hemos dicho antes que el átomo de todo elemento constituye su cuanto elemental mínimo. Esto se creyó durante mucho tiempo. Actualmente ya no es así. La ciencia ha formado una imagen nueva que muestra las limitaciones de la anterior. Difícilmente hay en la física una conclusión más firmemente fundada en los hechos que la que sostiene la complejidad de la estructura atómica. Primero se llegó al convencimiento de que el electrón, el cuanto elemental de fluido eléctrico negativo, es uno de los componentes del átomo, uno de los ladrillos elementales que entra en la edificación de toda materia. El caso anteriormente citado de la emisión de electrones por un metal incandescente es sólo uno de los numerosos procedimientos de extraer electrones del seno de la materia. Este resultado, que relaciona el problema de la estructura de la materia con la electricidad, es consecuencia indudable de múltiples e independientes hechos experimentales.
**Fig. 29**
Es relativamente fácil extraer de un átomo alguno de los electrones que entran en su constitución. Esto se puede efectuar por medio del calor, como en el caso del alambre calentado y de manera distinta, como, por ejemplo, bombardeando los átomos con otros electrones.
Supongamos que se introduce un alambre fino y calentado al rojo en un recipiente que contiene hidrógeno enrarecido. El alambre emitirá electrones en todas las direcciones. Bajo la acción de un campo eléctrico apropiado adquirirán cierta velocidad. Un electrón, bajo la acción de un campo eléctrico constante, va aumentando su velocidad como un cuerpo que cae en un campo gravitatorio. Por este método se puede pues conseguir que un haz de electrones se mueva en una dirección y con una velocidad determinadas. Actualmente podemos hacer que los electrones alcancen velocidades del orden de la de la luz, poniéndolos bajo la acción de campos intensísimos. ¿Qué sucede cuando un haz de electrones de cierta velocidad alcanza las moléculas del hidrógeno enrarecido? El choque de un electrón de velocidad suficiente no sólo podrá dividir la molécula de hidrógeno de nuestro ejemplo en sus dos átomos, sino, además, arrancar un electrón a uno de éstos.
Aceptemos el hecho de que los electrones sean constituyentes de la materia. Entonces, un átomo al que se hubiera despojado de un electrón no puede ser eléctricamente neutro. Pues si lo era previamente, no es posible que lo sea faltándole un electrón, o sea, disminuyendo su carga negativa en una carga elemental. El resto del átomo debe poseer un exceso de carga positiva. Por otra parte, como la masa de un electrón es mucho menor que la del átomo más ligero, se puede deducir, con seguridad, que la mayor parte de la masa atómica no está representada por sus electrones, sino por las restantes partículas elementales que son mucho más pesadas. Se llama _núcleo_ a la parte pesada de cada átomo. La técnica moderna ha creado métodos que permiten dividir el núcleo atómico, transformar los átomos de un elemento en los de otro y extraer del núcleo las diversas partículas elementales pesadas que lo constituyen. Este capítulo de la física conocido como «Física nuclear», al que tanto contribuyó Rutherford, es el más interesante desde el punto de vista experimental. Pero no tenemos, todavía, una teoría que sea simple en sus ideas fundamentales y que explique la riqueza y variedad de los hechos de la física nuclear. Como en estas páginas nos ocupamos únicamente de las ideas físicas generales, omitiremos este capítulo a pesar de su gran importancia para la física moderna.
#### LOS CUANTOS DE LUZ
Consideremos una pared que se extendiera a lo largo de la costa. Las olas del mar la golpean continuamente. Cada una de las olas que llega, se lleva una pequeñísima parte de su superficie. La masa de la pared decrece, en consecuencia, con el tiempo y al cabo de un año, pongamos por caso, la pared habrá perdido un peso determinado. Imaginemos, ahora, un proceso diferente. Se quiere disminuir la masa de la pared en una cantidad igual a la perdida por la abrasión de las olas durante un año, pero por un procedimiento distinto: disparando contra la pared y desprendiendo, así, pequeños trozos de su superficie en los lugares de impacto de los proyectiles. Su masa disminuirá, evidentemente, y se puede perfectamente imaginar que se consiga la misma reducción total de la masa en ambos casos. De la apariencia de la pared se podría descubrir, sin embargo, si actuaron las olas continuas del mar o la lluvia discontinua de proyectiles. Para comprender mejor los fenómenos que vamos a describir a continuación, resultará útil recordar la diferencia entre las olas del mar y un haz de proyectiles.
Ya hemos dicho que un metal, un alambre incandescente, emite electrones. Aquí presentaremos un modo distinto de extraer electrones de los metales. Supongamos que sobre la superficie de un metal incida luz homogénea de color violeta, es decir, luz de una longitud de onda definida. Se observa que la luz extrae electrones del metal, que se alejan de su superficie con una velocidad determinada. Desde el punto de vista del principio de la conservación de la energía se puede decir: la energía de la luz incidente es parcialmente transformada en energía cinética de los electrones expelidos. La técnica experimental moderna nos permite registrar la presencia de esos proyectiles-electrones, determinar su velocidad y por ende su energía. Esta extracción de electrones de un metal por la luz que incide sobre el mismo se llama _efecto fotoeléctrico_.
Nuestro punto de partida era la acción de una onda luminosa homogénea de cierta intensidad. Como en toda investigación experimental, debemos cambiar las condiciones y ver qué influencia producen sobre el efecto observado.
Empecemos variando la intensidad de la luz violeta homogénea con la que iluminamos nuestro metal y averigüemos cómo depende de ella la energía de los electrones arrancados. Tratemos de encontrar la respuesta razonando en vez de buscarla directamente por vía experimental. Podríamos argumentar así: en el efecto fotoeléctrico una fracción definida de la energía de la radiación luminosa se transforma en energía de movimiento de los electrones. Si se ilumina la misma superficie metálica con luz de igual longitud de onda pero procedente de una fuente más intensa, entonces la energía de los electrones debe ser mayor, ya que la radiación es más energética. Debemos, por lo tanto, esperar que la velocidad de los electrones aumente al aumentar la intensidad de la luz incidente. Pero la experiencia contradice nuestra predicción. Una vez más, vemos que las leyes naturales no son como desearíamos que fueran. Estamos frente a una experiencia que, al contradecir nuestras predicciones, echa abajo la teoría sobre la que éstas se basan. El resultado experimental obtenido es, desde el punto de vista de la teoría ondulatoria, sencillamente asombroso. Los electrones emitidos tienen todos la misma velocidad, la misma energía, que no cambia al aumentar la intensidad de la luz incidente.
Este resultado experimental no pudo haber sido previsto por la teoría ondulatoria. Es por ello que nace aquí una nueva teoría como consecuencia del conflicto entre la vieja teoría y la experiencia.
Seamos deliberadamente injustos con la teoría ondulatoria de la luz, olvidando su gran conquista, la espléndida explicación de la difracción de la luz, o sea, su capacidad de bordear un pequeño obstáculo. Puesta nuestra atención en el fenómeno fotoeléctrico, pidámosle a la teoría una explicación adecuada. Evidentemente, no resulta posible deducir de la teoría ondulatoria la independencia observada de la energía de los electrones respecto de la intensidad de la luz que causa su expulsión del metal. Por esto, buscaremos una nueva teoría. Recordemos que la teoría corpuscular de la luz debida a Newton, que explica un gran número de fenómenos luminosos, fracasó ante la propiedad de la luz de rodear un obstáculo, fenómeno que ahora dejamos de lado deliberadamente. En la época de Newton no existía el concepto de energía. Los corpúsculos luminosos eran, según Newton, imponderables; cada color conservaba su propio carácter de sustancia. Más adelante, cuando se creó el concepto de energía y se reconoció que la luz transporta energía consigo, nadie pensó en aplicar estos conceptos a la teoría corpuscular de la luz. La teoría de Newton estaba muerta y nadie tomó en serio su resurrección hasta nuestro siglo.
Con el objeto de conservar la idea principal de la teoría de Newton, debemos suponer que la luz homogénea está compuesta de granos de energía, y reemplazar los antiguos corpúsculos luminosos por cuantos de luz, que llamaremos _fotones,_ pequeñas porciones de energía que viajan por el espacio vacío con la velocidad de la luz. El renacimiento de la teoría de Newton en esta forma nueva conduce a la _teoría cuántica de la luz._ No sólo la materia y la carga eléctrica, sino también la energía de la radiación tienen una estructura granular, es decir, que está formada por cuantos de luz. Juntamente con los cuantos de materia y electricidad tenemos, también, los cuantos de energía.
La idea de los cuantos de energía fue primeramente introducida por Planck a principios del siglo XX con el objeto de explicar ciertos efectos mucho más complicados que el efecto fotoeléctrico. Pero el efecto fotoeléctrico enseña, con la máxima claridad y simplicidad, la necesidad de modificar nuestros conceptos anteriores.
Se ve enseguida que la teoría cuántica de la luz explica el efecto fotoeléctrico. Un haz de fotones cae sobre una placa metálica. La interacción entre la materia y la radiación consiste aquí en numerosos procesos individuales en cada uno de los cuales un fotón choca contra un átomo y le arranca un electrón. Todos los procesos individuales son análogos y el electrón extraído tendrá la misma energía en todos los casos. También se entiende que aumentar la intensidad de los haces luminosos significa, en el nuevo lenguaje, aumentar el número de fotones incidentes. En este último caso el número de electrones arrancados del metal debe aumentar, pero la energía de cada uno de ellos no cambiará. Se ve, pues, que esta teoría está en perfecto acuerdo con la observación.
¿Qué sucederá si incide sobre la superficie del metal una luz homogénea de color diferente, por ejemplo, de color rojo en lugar de violeta? Dejemos que la experiencia responda a este interrogante, para lo cual hay que medir la energía de los electrones extraídos por la luz roja y compararla con la energía de los electrones arrancados por la luz violeta. Se encuentra así que la energía de los primeros es menor que la de los segundos. Esto significa que la energía de los cuantos de luz es distinta para distintos colores. En particular resulta que la energía de los fotones del color rojo es igual a la mitad de la energía de los fotones correspondientes al violeta. O más rigurosamente: la energía de un cuanto de luz, correspondiente a un color homogéneo, decrece proporcionalmente al aumento de la longitud de onda correspondiente. Existe una diferencia esencial entre los cuantos de energía y los cuantos de electricidad. Los cuantos de luz difieren para cada longitud de onda, mientras que los cuantos de electricidad son siempre los mismos. Si tuviéramos que usar una de las analogías anteriores, compararíamos los cuantos luminosos con los cuantos monetarios mínimos, que varían de país en país.
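La regla «la energía de un cuanto de luz decrece proporcionalmente al aumento de la longitud de onda» corresponde a la fórmula E = h·c/λ. Un esbozo numérico, tomando como extremos ilustrativos 400 nm (violeta) y 800 nm (rojo, de longitud de onda doble, como dice el texto); los valores de h y c son los modernos aproximados, no citados en el texto:

```python
h = 6.626e-34  # constante de Planck, en J·s (valor moderno aproximado)
c = 3.0e8      # velocidad de la luz, en m/s

def energia_foton(longitud_onda_m):
    """Energía de un fotón de la longitud de onda dada: E = h·c/λ."""
    return h * c / longitud_onda_m

E_violeta = energia_foton(400e-9)  # extremo violeta del espectro visible
E_rojo = energia_foton(800e-9)     # extremo rojo, con longitud de onda doble

# El fotón rojo transporta exactamente la mitad de la energía del violeta
print(E_violeta / E_rojo)  # 2.0
```

Obsérvese que la razón de energías no depende de los valores de h y c: al duplicar la longitud de onda, la energía del fotón se reduce a la mitad.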
Continuemos descartando la teoría ondulatoria de la luz y admitamos que la estructura de la luz es granular, o sea, como ya dijimos, que está formada por cuantos luminosos, esto es, fotones que se mueven en el vacío con una velocidad de, aproximadamente, 300.000 kilómetros por segundo. Luego, en nuestra nueva imagen, la luz es una lluvia de fotones, siendo el fotón el cuanto elemental de energía luminosa. Pero si se descarta la teoría ondulatoria, ¿qué nuevo concepto ocupa su lugar? ¡La energía de los cuantos de luz! Las conclusiones expresadas en la terminología de la teoría ondulatoria pueden ser traducidas al lenguaje de la teoría cuántica de la radiación. Por ejemplo:
_Terminología de la teoría ondulatoria_ | _Terminología de la teoría cuántica_
---|---
Una luz homogénea tiene una longitud de onda determinada. La longitud de onda del extremo rojo del espectro visible es el doble de la del extremo violeta. | Una luz homogénea contiene fotones de una determinada energía. La energía de un fotón del extremo rojo del espectro visible es la mitad de la de un fotón del extremo violeta.
El estado de la cuestión puede ser resumido de la siguiente manera: hay fenómenos que pueden ser explicados por la teoría cuántica y no por la teoría ondulatoria. El efecto fotoeléctrico constituye uno de estos casos, conociéndose otros fenómenos de esta clase. Hay fenómenos que pueden ser explicados por la teoría ondulatoria, pero no por la teoría cuántica. La propiedad de la luz de bordear un obstáculo es un ejemplo típico de estos últimos. Finalmente, hay fenómenos, tales como la propagación rectilínea de la luz, que pueden ser explicados perfectamente por ambas teorías.
Pero, entonces, ¿qué es realmente la luz? ¿Es una onda o una lluvia de fotones? Ya nos planteamos antes una pregunta similar cuando nos preguntábamos: ¿Es la luz una onda o una lluvia de corpúsculos luminosos? Dimos la razón a la teoría ondulatoria porque cubría todos los fenómenos conocidos, haciendo que se abandonara el punto de vista corpuscular. Ahora, en cambio, el problema es mucho más complicado. No parece existir la posibilidad de ofrecer una descripción basada en uno solo de los lenguajes. Parece como si debiéramos usar a veces una teoría y a veces otra mientras que en ocasiones se puede emplear cualquiera de las dos. Estamos enfrentados con una nueva clase de dificultad. ¡Tenemos dos imágenes contradictorias de la realidad; separadamente ninguna de ellas explica la totalidad de los fenómenos luminosos, pero juntas, sí!
¿Cómo es posible combinar estas dos imágenes? ¿Cómo podemos entender estos dos aspectos diametralmente opuestos de la luz? No es tarea fácil la solución de esta nueva dificultad. Estamos en presencia, otra vez, de un problema fundamental.
Aceptemos, por el momento, la teoría de los fotones y tratemos, con su ayuda, de comprender los hechos explicados hasta el presente por la teoría ondulatoria. De este modo acentuaremos las dificultades que hacen que las dos teorías aparezcan, a primera vista, como irreconciliables.
Recordemos: un haz de luz homogénea que pasa a través de un orificio hecho con la punta de un alfiler forma, sobre una pantalla, anillos concéntricos luminosos y oscuros. ¿Cómo es posible entender este fenómeno con la ayuda de la teoría de los cuantos de luz, descartando la teoría ondulatoria? Supongamos que un fotón se dirige hacia el orificio. Podríamos esperar que la pantalla aparezca iluminada si el fotón pasa por él y aparezca oscura si no lo atraviesa. En lugar de esto encontramos anillos brillantes y oscuros. Podríamos tratar de dar cuenta de este fenómeno como sigue: tal vez haya cierta interacción entre el borde del orificio y el fotón que sea la causa de la aparición de los anillos de difracción. Esto puede muy difícilmente ser considerado como una explicación. En el mejor de los casos expresa un programa de trabajo para su interpretación, dando, al menos, una ligera esperanza de que en el futuro sea factible entender la difracción como una consecuencia de la interacción entre la materia y los fotones.
Pero aun esta misma tenue esperanza se estrella contra los resultados de otra experiencia que referimos también anteriormente. Supongamos que en lugar de un orificio tenemos dos de ellos. La luz homogénea que pasa por los dos, da franjas luminosas y oscuras. ¿Cómo es posible interpretar este efecto desde el punto de vista de la teoría cuántica de la luz? Se puede argüir así: un mismo fotón pasa por uno cualquiera de los dos orificios. Pero si un fotón de un haz homogéneo representa una partícula luminosa elemental, resulta muy difícil imaginar su división y su paso por los dos orificios. Pero entonces, el efecto habría de ser exactamente igual que en el caso anterior, es decir, tendrían que aparecer anillos luminosos y oscuros en vez de franjas. ¿Cómo es posible que la presencia de otro orificio modifique completamente el efecto? ¡Aparentemente, el orificio por el cual no pasa el fotón, aun estando a una distancia apreciable del otro, influye en el fenómeno y transforma los anillos en franjas! Pues si el fotón se comporta como un corpúsculo de la física clásica debe atravesar sólo una de las dos aberturas. Pero si es así, el fenómeno de difracción parece completamente incomprensible.
La ciencia nos obliga a crear nuevas ideas, nuevas teorías. Su finalidad es la de destruir el muro de contradicciones que frecuentemente bloquea el camino del progreso científico. Todas las ideas esenciales de la ciencia han nacido de un conflicto dramático entre la realidad y nuestros deseos de comprenderla. Aquí tenemos otra vez un problema para cuya solución se requieren nuevos principios. Antes de tratar de dar cuenta de los intentos de la física moderna para explicar las contradicciones entre los aspectos cuántico y ondulatorio de la luz, mostraremos que se encuentra exactamente la misma dificultad al tratar con los cuantos de materia en lugar de los cuantos de luz.
#### LOS ESPECTROS DE RAYAS
Ya sabemos que toda la materia está formada de unas pocas clases de partículas. Los electrones han sido las primeras partículas elementales de la materia que se han descubierto. Pero los electrones son también los cuantos elementales de electricidad negativa. Hemos visto, además, que ciertos fenómenos nos obligan a admitir que la luz está compuesta de cuantos elementales distintos para distintas longitudes de onda. Antes de seguir adelante con el problema planteado debemos discutir ciertos fenómenos físicos en los que tanto la materia como la radiación desempeñan un papel esencial.
El Sol emite una radiación que puede ser descompuesta en sus componentes por un prisma. Así se obtiene el espectro continuo de la luz solar, en el que están representadas todas las longitudes de onda comprendidas entre las que corresponden a los dos extremos de su parte visible. Tomemos otro ejemplo. Ha sido previamente mencionado el hecho de que el sodio incandescente emite luz homogénea, luz de un solo color o de una longitud de onda. Si se hace pasar la luz del sodio incandescente por un prisma se observa una sola línea amarilla. En general, si se coloca un cuerpo incandescente delante de un prisma la luz que emite es descompuesta, al atravesarlo, en sus componentes homogéneos, revelando el espectro característico del cuerpo emisor.
La descarga de la electricidad en un tubo que contiene un gas constituye una fuente luminosa, como la de los tubos luminosos de neón usados con fines de propaganda comercial. Supongamos que uno de esos tubos sea puesto frente a la abertura de un espectroscopio. El espectroscopio es un instrumento que actúa como un prisma pero con mucha mayor precisión y sensibilidad; divide la luz en sus componentes, esto es, la analiza. La luz solar vista a través de un espectroscopio da un espectro continuo; todas las longitudes de onda están representadas en él. Si la fuente de la luz es una descarga eléctrica a través de un gas, el espectro es de naturaleza diferente. En lugar del espectro continuo y multicolor de la luz del Sol, aparecen sobre un fondo oscuro continuo unas rayas brillantes de distintos colores, separadas entre sí. Cada raya o línea, si es bastante angosta, corresponde a un color determinado o, en el lenguaje ondulatorio, a una longitud de onda determinada. Por ejemplo, si en un espectro aparecen veinte líneas, cada una de ellas será designada por uno de otros tantos números distintos que expresan sus longitudes de onda. La luz emitida por los vapores de los diversos elementos posee diferentes combinaciones de rayas espectroscópicas y por ende distintas combinaciones de números que expresan las longitudes de onda que componen sus respectivos espectros. No hay dos elementos que tengan un idéntico sistema de líneas en sus espectros característicos, como no hay dos personas que tengan idénticas sus impresiones digitales. Cuando se obtuvo un catálogo más o menos completo de esas líneas, medidas con cuidado por distintos físicos, se evidenció gradualmente la existencia de ciertas leyes y fue finalmente posible representar, por una simple fórmula matemática, algunas de las columnas de números, en apariencia desconectados entre sí, que expresan las longitudes de onda de dichas líneas.
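La «simple fórmula matemática» aludida es, en el caso de las líneas visibles del hidrógeno, la fórmula de Balmer: 1/λ = R·(1/2² − 1/n²), con n = 3, 4, 5… Esbozo ilustrativo, usando el valor moderno aproximado de la constante de Rydberg (no citado en el texto):

```python
R = 1.097e7  # constante de Rydberg, en 1/m (valor moderno aproximado)

def linea_balmer(n):
    """Longitud de onda, en nanómetros, de la línea de Balmer n -> 2 del hidrógeno."""
    inverso = R * (1 / 2**2 - 1 / n**2)  # fórmula de Balmer: 1/λ
    return 1e9 / inverso                 # de metros a nanómetros

# Las primeras líneas visibles del hidrógeno:
# aproximadamente 656 nm (roja), 486 nm (azul-verde) y 434 nm (violeta)
for n in (3, 4, 5):
    print(n, round(linea_balmer(n), 1))
```

Así, una sola fórmula con un parámetro entero reproduce la columna de longitudes de onda que parecía una lista de números inconexos.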
Todo lo que acabamos de decir puede ser transferido al lenguaje de los fotones. Las rayas corresponden a ciertas y determinadas longitudes de onda o, en otras palabras, a fotones de energías definidas. Los gases luminosos no emiten, pues, fotones de cualquier energía, sino únicamente los característicos de la sustancia. La naturaleza limita, una vez más, la riqueza de posibilidades.
Los átomos de un elemento determinado, por ejemplo, hidrógeno, sólo pueden emitir fotones con energías definidas. Se puede decir que solamente les está permitido emitir cuantos de energía determinada, estándoles prohibidos todos los demás. Imaginemos, para simplificar, que cierto elemento emita una sola línea, o sea, fotones de energía única. El átomo es más rico en energía antes de emitir el fotón. Del principio de la conservación de la energía se sigue que el _nivel energético_ del átomo es más alto antes que después de la emisión de la luz y que la diferencia entre los dos niveles debe ser igual a la energía del fotón emitido. Luego, el hecho de que un átomo de cierto elemento emita una radiación monocromática, o de una sola longitud de onda, se puede expresar de esta otra manera: en un átomo de dicho elemento sólo son permitidos dos niveles de energía, y la emisión de un fotón corresponde a la transición del átomo del nivel más alto al nivel más bajo.
Pero por regla general aparece más de una línea en los espectros de los elementos. Los fotones emitidos corresponden a muchas energías y no a una sola. O en otras palabras, debemos admitir la existencia de muchos niveles de energía atómica y que la emisión de un fotón se produce como consecuencia de la transición del átomo de uno de sus niveles a otro inferior. Pero es esencial el hecho de que no todo nivel energético es permitido, ya que no aparece en el espectro de un elemento cualquier longitud de onda, o sea, fotones de cualquier energía. Así pues, en lugar de decir que al espectro de cierto átomo le corresponden ciertas líneas, ciertas longitudes de onda, se puede decir que todo átomo posee ciertos niveles de energía perfectamente determinados y que la emisión de los cuantos de luz está asociada con la transición del átomo de un nivel a otro más bajo. Los niveles de energía de los átomos son, por regla general, discontinuos y no continuos. Otra vez vemos que las posibilidades están restringidas por la realidad.
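La regla «energía del fotón emitido = diferencia entre dos niveles permitidos» puede ilustrarse con los niveles del átomo de hidrógeno, Eₙ = −13,6/n² electronvoltios, según el modelo de Bohr que el texto presenta más adelante (los valores numéricos son supuestos añadidos, no citados aquí):

```python
def nivel(n):
    """Nivel de energía permitido n del hidrógeno, en electronvoltios (modelo de Bohr)."""
    return -13.6 / n**2

def energia_foton_emitido(n_inicial, n_final):
    """Energía del fotón emitido al pasar el átomo del nivel superior al inferior."""
    return nivel(n_inicial) - nivel(n_final)

# Transición del nivel 3 al nivel 2: fotón de unos 1.89 eV,
# que corresponde a la línea roja visible del hidrógeno
print(round(energia_foton_emitido(3, 2), 2))  # 1.89
```

Como los niveles forman un conjunto discontinuo, también lo son las energías de los fotones posibles: de ahí las rayas aisladas del espectro en lugar de un continuo.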
Bohr fue quien mostró, por primera vez, por qué un elemento emitía determinadas líneas y no otras. Su teoría, formulada en 1913, da una imagen del átomo que, por lo menos en casos simples, permite calcular los espectros de los elementos, y a la nueva luz de esta teoría se presenta de pronto, con claridad y coherencia insospechadas, un gran fárrago de números aparentemente incoherentes y sin relación alguna.
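A modo de ilustración, ajena al texto, de cómo unos niveles de energía permitidos determinan las líneas del espectro, un esbozo en Python con los niveles del hidrógeno según el modelo de Bohr (las constantes están redondeadas):

```python
H = 6.626e-34   # constante de Planck (J·s)
C = 2.998e8     # velocidad de la luz (m/s)
EV = 1.602e-19  # un electronvoltio en julios

def nivel_energia(n):
    """Energía del nivel n del hidrógeno, en julios (modelo de Bohr)."""
    return -13.6 * EV / n**2

def longitud_de_onda(n_alto, n_bajo):
    """Longitud de onda (m) del fotón emitido al pasar de n_alto a n_bajo."""
    delta_e = nivel_energia(n_alto) - nivel_energia(n_bajo)  # energía del fotón
    return H * C / delta_e

# Transición 3 -> 2: la línea roja H-alfa de la serie de Balmer (~656 nm)
lam = longitud_de_onda(3, 2)
print(f"{lam * 1e9:.0f} nm")
```

Sólo las transiciones entre niveles permitidos son posibles; de ahí que el espectro conste de líneas discretas y no de un continuo.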
La teoría de Bohr constituye un paso intermedio hacia una teoría más profunda y más general, llamada mecánica cuántica o mecánica ondulatoria. Nos proponemos en estas últimas páginas esbozar las principales ideas de esta teoría. Antes de hacerlo debemos mencionar un resultado experimental y teórico, pero de carácter más particular.
El espectro visible empieza con una cierta longitud de onda para el color violeta y termina con otra cierta longitud de onda correspondiente al color rojo. O en otras palabras, las energías de los fotones del espectro visible están siempre comprendidas entre los límites formados por las energías de los fotones rojos y violetas. Esta limitación es sólo, naturalmente, una propiedad del ojo humano. Si la diferencia de energías entre dos niveles atómicos es bastante grande, entonces se emitirá un fotón _ultravioleta,_ dando una línea espectroscópica situada fuera del espectro visible. Su presencia no puede ser puesta de manifiesto por el ojo desnudo; se tiene que emplear una placa fotográfica.
**Fig. 30**
Los rayos X están también compuestos de fotones de energía mucho mayor que la energía de los de la luz visible, o en otras palabras, sus longitudes de onda son mucho menores; de hecho, miles de veces menores que las de la luz visible.
¿Pero será posible determinar experimentalmente esas longitudes de onda tan reducidas? Ya fue bastante difícil medir las del espectro visible. Hubimos de emplear obstáculos u orificios muy pequeños. Dos orificios hechos con la punta de un alfiler, que producen la difracción de la luz ordinaria, tendrían que ser varios miles de veces menores y más cercanos entre sí para poder mostrar la difracción de los rayos X.
¿Cómo podremos determinar, entonces, la longitud de onda de estos rayos? La naturaleza misma viene en nuestra ayuda. Un cristal es una aglomeración de átomos ordenados de una manera perfectamente regular y a distancias muy pequeñas entre sí. La figura 30 representa un modelo simple de la estructura cristalina. En lugar de pequeñas aberturas tenemos, en un cristal, obstáculos extremadamente pequeños, formados por los átomos del elemento, ordenados según una pauta absolutamente regular y separados por distancias pequeñísimas. Las distancias entre los átomos, deducidas de la teoría de la estructura cristalina, son tan pequeñas que era de esperar que mostraran el efecto de difracción de los rayos X. La experiencia probó que, en efecto, era posible difractar las ondas de los rayos X con dichos obstáculos estrechamente empaquetados y dispuestos con perfecta regularidad en las redes tridimensionales de los cristales.
**Fig. 31** En esta fotografía se pueden observar las diferencias que existen entre los espectros obtenidos al hacer pasar la luz blanca a través de sustancias diferentes.
Supongamos que se registre sobre una placa fotográfica un haz de rayos X después de atravesar un cristal. Se encuentran formadas sobre la placa las tan características imágenes de difracción. Se han empleado varios métodos para estudiar los espectros de los rayos X y para deducir los datos referentes a las longitudes de onda a partir de las imágenes de difracción. Lo que hemos dicho aquí en pocas palabras, requeriría volúmenes enteros si se quisiera dar los detalles experimentales y teóricos de este asunto. En una imagen de difracción de los rayos X obtenida por uno de los varios métodos usuales para ese fin, se pueden ver los anillos claros y oscuros tan característicos de la teoría ondulatoria. En el centro es visible el rayo no difractado. Si no se hubiera puesto el cristal entre los rayos X incidentes y la placa fotográfica, se vería únicamente la mancha central oscura. A partir de fotografías de este tipo se pueden calcular las longitudes de onda de los rayos X y, por el contrario, si su longitud de onda es conocida, se pueden sacar importantes conclusiones respecto a la estructura del cristal.
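La relación cuantitativa que permite pasar de la imagen de difracción a la longitud de onda es la ley de Bragg, n·λ = 2·d·sen θ. Un esbozo con valores puramente ilustrativos (el espaciado d y el ángulo θ son supuestos, no datos del texto):

```python
import math

def longitud_onda_bragg(d, theta_grados, n=1):
    """Longitud de onda deducida de un máximo de difracción de orden n:
    n·λ = 2·d·sen(θ), con d la distancia entre planos atómicos del cristal."""
    return 2 * d * math.sin(math.radians(theta_grados)) / n

# Con d = 2.82e-10 m (espaciado supuesto, del orden del de la sal de roca)
# y un máximo de primer orden observado a θ = 15 grados:
lam = longitud_onda_bragg(2.82e-10, 15.0)
print(f"{lam * 1e10:.2f} angstroms")
```

El resultado cae en la región de los rayos X; a la inversa, conocida λ, la misma relación informa sobre la estructura del cristal.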
#### LAS ONDAS DE MATERIA
¿Cómo podemos explicarnos que aparezcan, solamente, ciertas longitudes de onda características en los espectros de los elementos?
En la física ha sucedido a menudo que el desarrollo de una analogía entre fenómenos aparentemente sin relación ha dado origen a un verdadero progreso de la misma. En las páginas de este libro quedaron ya consignados varios casos en que se pudo aplicar, con todo éxito, a ciertas ramas de la ciencia, ideas creadas y desarrolladas en otras ramas de ella. La asociación de problemas no resueltos con otros ya resueltos puede arrojar nueva luz sobre los primeros, sugiriendo otras ideas que ayuden a solucionar las dificultades halladas. Es fácil, sin embargo, encontrar analogías superficiales, que en realidad no expresan nada; pero descubrir ciertas propiedades comunes escondidas bajo superficies exteriores de aspectos diferentes y formular, sobre esta base, una teoría nueva, constituye un trabajo de creación de un gran valor. El desarrollo de lo que se llama mecánica ondulatoria, que fue iniciado por De Broglie y Schrödinger hace menos de quince años, es un ejemplo típico del alcance de una analogía feliz y profunda que da origen a una importantísima teoría física.
**Fig. 32** Cuando se hacen pasar electrones a alta velocidad a través de una rendija, se produce un fenómeno de difracción que es característico de las ondas, lo que demuestra el carácter ondulatorio de los electrones en movimiento.
El punto de partida que lleva a este resultado es un fenómeno clásico que nada tiene que ver con la física moderna. Tomemos en las manos uno de los extremos de un tubo de goma largo y flexible o una espiral elástica muy larga y démosle un rápido movimiento rítmico de sube y baja, haciendo que dicho extremo se ponga a oscilar. Entonces, como vimos en otros casos, se crea una onda que avanza a lo largo del tubo con cierta velocidad (figura 33). Si imaginamos un tubo indefinidamente largo, iniciadas las ondas parciales, éstas continuarán su viaje sin fin, sin interferencia alguna.
**Fig. 33**
Consideremos, ahora, otro caso: los dos extremos del tubo están fijos. Si se prefiere, se puede pensar en una cuerda de un violín. ¿Qué sucede si se crea, en el tubo o cuerda, una onda, en un lugar próximo a uno de sus extremos? La onda inicia su propagación hacia el otro extremo como en el caso anterior, pero al llegar a éste, se refleja, es decir, vuelve al extremo inicial. Luego tenemos dos ondas: una creada por la oscilación y la otra, por reflexión, que se propagan en sentido opuesto e interfieren entre sí. No es difícil obtener el resultado de la interferencia de esas dos ondas y descubrir la onda resultante de su superposición que se llama _onda estacionaria._ Las dos palabras «onda» y «estacionaria» parecen contradecirse; su reunión se justifica, sin embargo, por el resultado real de la superposición de aquellas dos ondas.
El caso más sencillo de una onda estacionaria lo tenemos en el movimiento de una cuerda fija en sus dos extremos y en movimiento de vibración alrededor de su posición normal, cuatro de cuyas fases están representadas en la figura 34. Este movimiento resulta, como ya dijimos, de la superposición de dos ondas que se propagan en la misma cuerda en sentidos opuestos. La propiedad característica de este movimiento es la siguiente: sólo los dos puntos extremos están en reposo. Éstos se denominan _nodos._ La onda se mantiene, por así decir, entre los dos nodos, alcanzando simultáneamente todos los puntos de la cuerda los máximos y mínimos de sus desviaciones.
**Fig. 34**
Pero éste es sólo el caso más sencillo de onda estacionaria. Existen otros. Por ejemplo, se puede producir una onda estacionaria con tres nodos, uno en cada extremo y otro en el centro de la cuerda. En este caso hay tres puntos que están permanentemente quietos. Un vistazo a su representación (figura 35) muestra que la longitud de su onda es igual a la mitad de la longitud de onda del ejemplo anterior. Igualmente, existen ondas estacionarias con cuatro, cinco, seis y más nodos. (Véase figura 36, correspondiente a cuatro nodos.) La longitud de onda dependerá en cada caso del número de nodos. Este número solamente puede ser entero y puede variar, por lo tanto, únicamente por saltos. Decir «el número de nodos de una onda estacionaria es igual a 3.576» no tiene sentido. Por la misma razón la longitud de onda sólo puede cambiar discontinuamente. En este problema clásico encontramos, pues, las características típicas de la teoría de los cuantos. La onda estacionaria producida por un violinista es, de hecho, todavía más complicada; es una mezcla de muchísimas ondas con dos, tres, cuatro, cinco y más nodos, y en consecuencia una superposición de varias longitudes de onda. La física posee métodos para descomponer dicha mezcla en las ondas estacionarias simples que la componen. Empleando la terminología anterior, podríamos decir que la cuerda vibrante tiene su espectro propio, exactamente como un elemento que está emitiendo su radiación. Y como en el espectro del elemento, sólo se pueden producir ciertas longitudes de onda, estando prohibidas todas las demás.
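Las longitudes de onda permitidas de la cuerda pueden calcularse de forma elemental. Un esbozo: con k nodos (incluidos los dos extremos), λ = 2L/(k − 1), fórmula que se deduce de los casos descritos arriba:

```python
def longitud_onda_estacionaria(L, k):
    """Longitud de onda de la onda estacionaria con k nodos en una cuerda
    de longitud L fija en sus dos extremos. k sólo puede ser entero."""
    if k != int(k) or k < 2:
        raise ValueError("el número de nodos debe ser un entero >= 2")
    return 2 * L / (k - 1)

L = 1.0  # longitud de la cuerda, en metros
for k in range(2, 6):
    print(k, "nodos ->", longitud_onda_estacionaria(L, k), "m")
# Con 3 nodos la longitud de onda es la mitad que con 2, como dice el texto;
# el valor "k = 3.576 nodos" queda excluido por construcción.
```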
**Fig. 35**
**Fig. 36**
Vemos así cómo se descubrió una similitud entre la cuerda vibrante y un átomo emisor de energía. Por extraña que nos pueda parecer, tratemos de llevar hacia delante esta analogía, deduciendo ulteriores conclusiones de la misma. Los átomos de todos los elementos están formados de partículas elementales, de las cuales las más livianas son los electrones y las más pesadas componen el núcleo. Un sistema tal de partículas se comporta como un diminuto instrumento acústico en el cual se producen ciertas ondas estacionarias.
Pero una onda estacionaria es el resultado de la interferencia de dos o, generalmente, más ondas simples y progresivas. Si hay algo de cierto en nuestra analogía, a una onda progresiva simple le deberá corresponder algo de constitución más sencilla que un átomo. ¿Cuál es el corpúsculo de constitución más sencilla? En nuestro mundo material nada puede ser más simple que un electrón, una partícula elemental, sobre el que no actúen fuerzas exteriores, esto es, un electrón en reposo o en movimiento rectilíneo y uniforme. Se puede vislumbrar un eslabón más del encadenamiento de nuestra analogía: a un electrón en movimiento uniforme le corresponden ondas de una longitud determinada. Ésta fue la idea nueva y audaz introducida por De Broglie.
Antes se ha visto que hay fenómenos en los cuales la luz revela su carácter ondulatorio y otros en los cuales la luz revela su carácter corpuscular. Después de habernos acostumbrado a la idea de que la luz es un proceso ondulatorio encontramos, ante nuestro asombro, que en ciertos casos, por ejemplo en el efecto fotoeléctrico, se comporta como si fuera una lluvia de fotones. Ahora tenemos un estado de cosas exactamente opuesto respecto a los electrones. Nos hemos hecho a la idea de que los electrones eran partículas, cuantos elementales de electricidad y materia. Se determinó su carga y su masa. Pero si hay algo cierto en la idea de De Broglie, entonces debe haber ciertos fenómenos en los cuales la materia revele su carácter ondulatorio. De entrada, esta conclusión, obtenida siguiendo la analogía acústica, parece extraña e incomprensible. ¿Qué relación tendrá, con una onda, una partícula en movimiento?
Pero ésta no es la primera vez que nos enfrentamos en la física con una dificultad de esta clase. Ya encontramos el mismo problema en el terreno de los fenómenos luminosos.
Las ideas fundamentales desempeñan un papel esencial en la formación de una teoría física. Los libros de física están llenos de fórmulas matemáticas complicadas. Pero son los pensamientos e ideas, no las fórmulas, los que constituyen el principio de toda teoría física. Las ideas deben, después, adoptar la forma matemática de una teoría cuantitativa, para hacer posible su confrontación con la experiencia. Esto se entenderá mejor tomando como ejemplo el problema con el que estamos ocupados. La conjetura principal es que un electrón en movimiento uniforme se comportará, en ciertos fenómenos, como una onda. Supongamos que un electrón o una lluvia de electrones que tengan la misma velocidad, están en movimiento uniforme. Conocemos la masa, la carga y la velocidad de cada uno de esos electrones. Si queremos asociar, de alguna manera, un concepto de onda a uno o muchos electrones en movimiento uniforme, debemos preguntarnos ante todo: ¿cuál es la longitud de onda asociada? Ésta es una pregunta cuantitativa y se debe edificar una teoría más o menos cuantitativa que dé la respuesta buscada. Esto es, por suerte, un asunto sencillo. La simplicidad matemática de la teoría de De Broglie, que contesta a dicho interrogante, es pasmosa. En comparación con esta teoría, la técnica matemática empleada en otras teorías de la misma época era realmente sutil y complicada. Las matemáticas con que se trata el problema de las ondas de materia son extremadamente fáciles y elementales; pero las ideas fundamentales son profundas y de largo alcance.
Ha sido mostrado antes, en el caso de ondas de luz y fotones, que toda expresión formulada en el lenguaje ondulatorio puede ser trasladada al lenguaje de los fotones o corpúsculos luminosos. Vale lo mismo para las ondas electrónicas. Para el caso de electrones en movimiento uniforme, el lenguaje corpuscular ya nos es conocido. Pero toda expresión del lenguaje corpuscular puede ser traducida al lenguaje ondulatorio exactamente como en el caso de los fotones. Dos son las claves que dieron las reglas de esta traducción. La analogía entre las ondas de luz y las ondas electrónicas o entre fotones y electrones, constituye una de las claves. Se trata de usar el mismo método de traducción para la materia que el empleado para la luz. La otra clave procede de la teoría de la relatividad restringida. Las leyes de la naturaleza deben ser invariantes respecto a la transformación de Lorentz y no respecto a la transformación clásica. Estas dos claves determinan, juntas, la longitud de onda correspondiente a un electrón en movimiento. Se deduce de la teoría que un electrón que se mueve con una velocidad de unos 15.000 kilómetros por segundo, tiene una longitud de onda asociada, que es fácilmente calculable y que cae en la región de las longitudes de onda de los rayos X. Se llega así a la conclusión de que si es posible poner de manifiesto el carácter ondulatorio de la materia, tendrá que realizarse experimentalmente de forma parecida a la usada por los rayos X.
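El cálculo que el texto califica de «fácilmente calculable» puede esbozarse así (constantes físicas redondeadas; se usa el impulso relativista, en consonancia con la exigencia de invariancia frente a la transformación de Lorentz):

```python
import math

H = 6.626e-34    # constante de Planck (J·s)
M_E = 9.109e-31  # masa del electrón (kg)
C = 2.998e8      # velocidad de la luz (m/s)

def longitud_de_broglie(v):
    """Longitud de onda de De Broglie λ = h / p, con el impulso
    relativista p = m·v / sqrt(1 - v²/c²)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return H / (gamma * M_E * v)

# Un electrón a 15.000 km/s, como en el texto:
lam = longitud_de_broglie(1.5e7)
print(f"{lam * 1e10:.2f} angstroms")  # del orden de los rayos X
```

La longitud de onda resultante, alrededor de medio angstrom, cae en efecto en la región de los rayos X, como afirma el texto.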
Imaginemos un haz de electrones que se mueve uniformemente con una velocidad determinada, o para usar la terminología ondulatoria, una onda electrónica homogénea, y supongamos que incide sobre un cristal muy fino el cual hace el papel de una red de difracción. Las distancias entre los obstáculos que producen la difracción en el cristal son tan pequeñas que pueden producir la difracción de rayos X. Resulta lógico esperar un efecto similar con las ondas electrónicas al atravesar la fina capa cristalina. Ahora bien, la experiencia confirma lo que constituye, indudablemente, uno de los mayores éxitos de la teoría: el fenómeno de la difracción de las ondas electrónicas. La similitud entre la difracción de una onda electrónica y un haz de rayos X es muy pronunciada, como puede observarse comparando las fotografías correspondientes. Sabemos que tal imagen nos permite determinar la longitud de onda de los rayos X. Lo mismo vale para las ondas electrónicas. La imagen de difracción nos permite determinar la longitud de la onda de materia, y el acuerdo cuantitativo perfecto entre la teoría y la experiencia confirma espléndidamente la concatenación de nuestro razonamiento.
Las dificultades anteriores se agrandan y profundizan con este resultado. Esto se puede aclarar con un ejemplo semejante a otro ya dado para las ondas luminosas. Un electrón disparado hacia un pequeño orificio se comportará como una onda luminosa, produciendo anillos claros y oscuros sobre una placa fotográfica. Puede haber cierta esperanza de explicar este fenómeno por una interacción entre el electrón y el borde del orificio, aun cuando esta explicación no parece ser muy prometedora. ¿Pero qué sucede en el caso de dos de esos pequeños orificios dispuestos uno al lado del otro? Como en el caso de la luz, obtenemos también aquí franjas en lugar de anillos. ¿Cómo es posible que la presencia del segundo de los orificios modifique completamente el efecto? El electrón es indivisible y parece que sólo ha de poder pasar por uno de los dos orificios. ¿Cómo podría saber un electrón que atraviese un orificio, que hay otro agujero a cierta distancia?
Antes nos preguntábamos: ¿Qué es la luz? ¿Es una lluvia de corpúsculos o una onda? Ahora preguntamos: ¿Qué es la materia? ¿Qué es un electrón? ¿Es una partícula o una onda? El electrón se comporta como una partícula cuando se mueve en un campo eléctrico o magnético exterior. Actúa como una onda al ser difractado por un cristal. Aquí tropezamos, para el cuanto elemental de materia, con la misma dificultad que encontramos para los cuantos de luz. Una de las cuestiones más fundamentales que ha originado el progreso reciente de la ciencia es cómo reconciliar las dos imágenes contradictorias de materia y onda. La formulación de una de esas dificultades fundamentales conducirá indefectiblemente al avance de la ciencia. La física ha tratado de resolver este problema. El futuro deberá decidir si la solución sugerida por la física moderna es provisional o duradera.
#### ONDAS DE PROBABILIDAD
Si se conoce la posición y la velocidad de un punto material dado, y también qué fuerzas exteriores obran sobre él, se puede predecir su trayectoria y su velocidad futura de acuerdo a las leyes de la mecánica clásica. La afirmación: «el punto material tiene tal y tal posición y velocidad en tal y tal instante», tiene un significado perfectamente definido en la mecánica clásica. Si esta afirmación perdiera su sentido concreto, el razonamiento que nos permitió predecir el movimiento futuro fallaría por su base.
Al principio del siglo XIX, los hombres de ciencia quisieron reducir toda la física a la acción de fuerzas de atracción y repulsión entre partículas materiales cuyas posiciones y velocidades eran bien definidas en todo momento. Recordemos cómo describíamos el movimiento al discutir la mecánica al principio de nuestra excursión por el dominio de los fenómenos físicos. Dibujábamos puntos a lo largo de una curva determinada que indicaban las posiciones exactas del móvil en ciertos instantes del tiempo y vectores tangentes que indicaban la dirección y la magnitud de las velocidades correspondientes. Esto era sencillo y convincente. Pero no se puede repetir lo mismo para los cuantos elementales de materia, esto es, los electrones, ni para los cuantos de energía, o sea, los fotones. No se puede determinar el movimiento de un fotón o de un electrón a la manera de la mecánica clásica. El ejemplo de los dos orificios hechos con la punta de un alfiler lo muestra claramente. Parece como si tanto el electrón como el fotón pasaran por los dos orificios. Es decir, es imposible explicar el efecto que se observa en dicho caso imaginando la trayectoria de un electrón o de un fotón, a la vieja manera clásica.
Nos vemos obligados, sin embargo, a admitir la existencia de procesos elementales como el paso de los electrones o de los fotones a través de los pequeños orificios, ya que la existencia de los cuantos elementales de materia y de energía no se puede poner en duda.
Intentemos, por lo tanto, ensayar algo diferente. Repitamos continuamente el mismo proceso elemental. Uno después de otro, los electrones son mandados en la dirección de los minúsculos orificios. Hablamos de «electrones», pero nuestro razonamiento vale también para fotones.
El mismo proceso se repite muchas veces de una manera exactamente igual; todos los electrones tienen la misma velocidad y van todos dirigidos hacia los dos orificios. Apenas si hace falta mencionar que se trata de una experiencia ideal que sólo puede ser imaginada, pero nunca realizada. No podemos disparar fotones o electrones, uno a uno, en instantes de tiempo dados, como quien dispara un proyectil con un cañón.
El resultado de los procesos repetidos debe ser, como antes, la formación de anillos iluminados y oscuros para el caso de un orificio, y franjas claras y oscuras, para dos orificios. Hay, sin embargo, una diferencia esencial. En el caso de un solo electrón el efecto observado era incomprensible. Se entiende más fácilmente si el proceso se repite muchas veces. En efecto, se puede argumentar así: donde caen muchos electrones aparecen franjas blancas; en los lugares donde inciden menos electrones las franjas son menos intensas. Una región completamente oscura significa que a ella no llega electrón alguno. No podemos aceptar, naturalmente, que todos los electrones pasen por uno solo de los dos orificios; pues, si éste fuera el caso, no habría la más mínima diferencia según se tapara o no el otro de los agujeros. Pero nosotros sabemos que tapando una de las aberturas se produce una diferencia enorme. Como esas partículas son indivisibles, no se puede imaginar que una de ellas pase por los dos orificios. El hecho de que el proceso se repita un gran número de veces señala una nueva posible explicación. Algunos de los electrones pueden pasar por uno de los orificios y los demás por el otro. No sabemos por qué un electrón dado elige un orificio y no el otro, pero el efecto resultante de muchos casos repetidos debe ser tal que ambos orificios participen en la transmisión de los electrones de la fuente a la pantalla receptora. Si nos ocupamos sólo de lo que sucede a la multitud de electrones, al repetirse la experiencia, sin preocuparnos de su comportamiento individual, se hace inteligible la diferencia entre las imágenes de anillos y las imágenes de franjas. De la discusión de una larga serie de procesos iguales, repetimos, nació una nueva idea, la de una multitud compuesta de individuos que se comportan de un modo imposible de pronosticar.
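La experiencia ideal descrita puede simularse de manera estadística. En este esbozo, con parámetros puramente ilustrativos que no corresponden a ningún montaje real, cada electrón cae al azar sobre la pantalla con una probabilidad proporcional al patrón de franjas cos²; ningún impacto individual es predecible, pero la multitud reproduce las franjas:

```python
import math
import random

def disparar_electrones(n, a=math.pi, semilla=0):
    """Genera n impactos sobre la pantalla (coordenada x en [-1, 1]) con
    densidad de probabilidad proporcional a cos²(a·x), mediante muestreo
    por rechazo. Cada impacto individual es aleatorio."""
    rng = random.Random(semilla)
    impactos = []
    while len(impactos) < n:
        x = rng.uniform(-1.0, 1.0)               # posición candidata
        if rng.random() < math.cos(a * x) ** 2:  # se acepta según cos²
            impactos.append(x)
    return impactos

impactos = disparar_electrones(20000)
# Franja clara en el centro (x cerca de 0); franja oscura cerca de |x| = 0.5
centro = sum(1 for x in impactos if abs(x) < 0.1)
oscuro = sum(1 for x in impactos if abs(abs(x) - 0.5) < 0.1)
print(centro, oscuro)
```

Sólo el resultado colectivo (muchos impactos en la franja central, casi ninguno en la oscura) es predecible, no la suerte de cada electrón.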
No se puede predecir el curso de un electrón, pero podemos predecir el resultado neto: por ejemplo, la aparición sobre la pantalla de las franjas claras y oscuras.
Dejemos por un momento la física cuántica.
Hemos visto que en la física clásica, si se conoce la posición y la velocidad de un punto material en cierto instante y las fuerzas que actúan sobre él, se puede predecir su trayectoria futura. También vimos cómo el punto de vista mecanicista fue aplicado en la teoría cinética de la materia. Pero en esta teoría se originó una nueva idea importante, que conviene establecer y comprender con claridad.
Un recipiente contiene cierta cantidad de gas. Si se deseara seguir el movimiento de cada una de sus partículas habría que comenzar por hallar sus estados iniciales, esto es, las posiciones y velocidades iniciales de todas las partículas. Aun en el caso de que esto fuera posible, el trabajo de anotarlas sobre un papel requeriría un tiempo mayor que la vida de un hombre, debido al enorme número de partículas que habría que considerar. Si cumplida esta labor, se pretendiera aplicar los métodos conocidos de la mecánica clásica para calcular las posiciones finales de todas las partículas, las dificultades que se encontrarían en dicho cálculo serían insuperables. Es decir, en principio es posible usar, para este caso, el método aplicado al movimiento de los planetas; pero en la práctica resultaría inútil, inaplicable, por lo cual se debe abandonar y recurrir al llamado _método estadístico._ Este método nos dispensa del conocimiento exacto de los estados iniciales. Nos hacemos indiferentes a la suerte de las partículas del gas tomadas individualmente. El problema es ahora de naturaleza diferente. Por ejemplo, no nos preguntamos: «¿Cuál es la velocidad de cada una de las partículas en tal o cual instante?», sino «¿cuántas partículas del gas tienen una velocidad comprendida entre 1.000 y 1.100 metros por segundo?». No nos preocupamos de cada partícula individualmente. Lo que buscamos determinar son valores medios que caractericen al conjunto. Es, además, obvio que el método estadístico se puede aplicar, únicamente, a un sistema compuesto de un gran número de individuos.
Aplicando el método estadístico no es posible predecir el comportamiento de uno de los componentes de una multitud. Sólo se puede predecir _la probabilidad_ de que se comporte de una manera particular. Si las leyes estadísticas expresan que una tercera parte de las partículas de una agregación tiene una velocidad comprendida entre 1.000 y 1.100 metros por segundo, ello significa que haciendo nuestras observaciones repetidas veces obtendremos, realmente, dicho promedio o, en otras palabras, que la probabilidad de encontrar una partícula dentro de dicho intervalo de velocidades, es un tercio.
Igualmente, conocer el índice de natalidad de una gran comunidad no significa que sepamos si en una familia determinada nacerá una criatura. Significa el conocimiento de resultados estadísticos en los cuales se diluye la personalidad de los componentes.
Observando las placas de matrícula de una gran caravana de autos, es fácil descubrir que un tercio de sus números son divisibles por tres. Pero no es posible predecir si el número del próximo coche gozará de dicha propiedad aritmética. Las leyes estadísticas se pueden aplicar sólo a multitudes muy numerosas, pero no a sus miembros individualmente.
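El ejemplo de las matrículas puede comprobarse numéricamente. Un esbozo con números de matrícula generados al azar, a modo de ilustración:

```python
import random

# Una "caravana" ilustrativa de 100.000 matrículas numéricas al azar.
rng = random.Random(42)
matriculas = [rng.randint(1, 999999) for _ in range(100000)]

# No podemos predecir si la próxima matrícula será divisible por tres,
# pero la proporción en el conjunto se acerca a un tercio.
proporcion = sum(1 for m in matriculas if m % 3 == 0) / len(matriculas)
print(round(proporcion, 3))
```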
Ahora estamos en condiciones de retomar el problema de los cuantos.
Las leyes de la física cuántica son de naturaleza estadística. Esto es: no se refieren a un solo sistema sino a una agregación o conjunto numeroso de sistemas idénticos; no se pueden comprobar por mediciones sobre un caso aislado, individual, sino únicamente por una serie de medidas repetidas.
La desintegración radiactiva es uno de los fenómenos naturales que la física cuántica trata de interpretar formulando leyes que expliquen la transmutación espontánea de un elemento en otro. Se sabe, por ejemplo, que en 1.600 años, la mitad de un gramo de radio se desintegrará y la otra mitad quedará sin modificación. Estamos en condiciones de predecir aproximadamente cuántos átomos de dicho elemento se desintegran durante la próxima media hora, pero no podemos afirmar, ni siquiera en nuestras descripciones teóricas, si tales o cuales átomos están condenados a la desintegración. Es decir, en base al conocimiento actual no existe posibilidad alguna de individualizar los átomos condenados a transformarse. El destino de un átomo no depende de su edad. No tenemos la más ligera idea de las leyes que gobiernan su comportamiento individual. Se han podido formular únicamente leyes que valen para agregaciones compuestas de numerosísimos átomos.
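La ley estadística citada permite un cálculo sencillo. Esbozo numérico (el número de átomos por gramo es una estimación a partir del número de Avogadro, no un dato del texto):

```python
SEMIVIDA_ANIOS = 1600.0      # período de semidesintegración del radio
ANIO_HORAS = 365.25 * 24

def fraccion_restante(t_horas):
    """Fracción de átomos que sobrevive tras t horas: (1/2)^(t/T)."""
    t_anios = t_horas / ANIO_HORAS
    return 0.5 ** (t_anios / SEMIVIDA_ANIOS)

# ¿Qué fracción de los átomos se desintegra en la próxima media hora?
fraccion_desintegrada = 1.0 - fraccion_restante(0.5)
print(f"{fraccion_desintegrada:.3e}")

# Un gramo de radio contiene unos 2.7e21 átomos (estimación supuesta),
# de modo que aun esa fracción minúscula corresponde a un número enorme
# de desintegraciones, aunque el destino de cada átomo sea impredecible.
```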
Tenemos otro caso. La luz emitida por un elemento en estado gaseoso, analizada por un espectroscopio, muestra líneas de longitudes de onda bien definidas. La aparición de un conjunto discontinuo de líneas de determinadas longitudes de onda es característica de los fenómenos atómicos en los que se manifiesta la existencia de cuantos elementales. Pero hay otro aspecto interesante del problema. Algunas de las líneas espectroscópicas son intensas; otras, en cambio, débiles. Una línea intensa significa que el átomo emitió un número relativamente grande de fotones que corresponden a la longitud de onda de dicha línea; una línea débil quiere decir que el átomo emitió un número comparativamente menor de los fotones correspondientes. La teoría nos da, otra vez, una explicación de naturaleza estadística, solamente. Como sabemos, cada línea corresponde a una transición de un nivel de energía superior a otro de energía inferior. La teoría nos habla únicamente de probabilidad de cada una de las posibles transiciones, pero nada nos dice de la transición efectiva de un átomo dado. Sin embargo, las consecuencias de esta teoría están en espléndido acuerdo con la experiencia, porque todos estos fenómenos implican un gran número de átomos y no átomos aislados.
Podría parecer que la nueva física de los cuantos se asemeja a la teoría cinética de la materia, pues ambas son de naturaleza estadística y ambas se refieren a grandes conjuntos de partículas. ¡Pero no hay tal! En esta analogía es de suma importancia ver, no sólo los aspectos similares sino, también, las diferencias. La similitud entre la teoría cinética de la materia y la física cuántica reside principalmente en el carácter estadístico de ambas. ¿Pero cuáles son los aspectos diferenciales?
Si queremos saber cuántos hombres y mujeres que viven en una ciudad tienen una edad mayor de veinte años, debemos hacer que cada uno de sus habitantes llene un formulario que tenga los siguientes encabezamientos: «hombre», «mujer», «edad». En el supuesto de que las respuestas sean correctas, obtendremos fácilmente el resultado estadístico buscado, separándolas apropiadamente y contándolas. Los nombres propios y las direcciones evidentemente no interesan. Pero nuestro conocimiento estadístico se basa a su vez en el conocimiento de un gran número de casos individuales. De igual manera, en la teoría cinética de la materia tenemos leyes de carácter estadístico, que gobiernan el conjunto de numerosas partículas, obtenidas sobre la base de leyes individuales.
Pero en la física cuántica el panorama es enteramente diferente. En esta teoría, las leyes estadísticas están dadas inmediatamente, habiéndose renunciado a las leyes individuales. Del ejemplo de un electrón o de un fotón y dos orificios pequeños, se deduce la imposibilidad de una descripción del movimiento de una partícula elemental en el espacio y en el tiempo, a la manera de la física clásica. La física cuántica abandona las leyes individuales de partículas elementales y establece _directamente_ las leyes estadísticas que rigen los conjuntos numerosos. Es imposible, basándose en la física cuántica, describir las posiciones y las velocidades de una partícula elemental o predecir su trayectoria futura como en la física clásica. La física cuántica vale sólo para grandes multitudes y no para cada uno de sus componentes individuales.
No es la pura especulación ni el deseo de novedades, sino la dura necesidad la que forzó a los físicos a modificar el punto de vista clásico. Hemos expuesto las dificultades que acarrea la aplicación de la concepción clásica al fenómeno de la difracción. Podríamos citar muchos otros ejemplos en los que se encuentran dificultades de explicación análogas. En nuestro intento, siempre renovado, de comprender la realidad, nos vemos continuamente obligados a cambiar nuestro punto de vista. Pero corresponde al futuro decidir si elegimos la única salida posible o si se pudo haber encontrado una solución mejor de dichas dificultades.
Hemos tenido que abandonar la descripción de los casos individuales como sucesos objetivos en el espacio y en el tiempo; hemos tenido que introducir en la física leyes de naturaleza estadística. Éstas son las características más importantes de la moderna física cuántica.
Al introducir las nuevas realidades físicas, tales como el campo electromagnético y el campo de gravitación, hemos expuesto, en términos generales, las características fundamentales de las ecuaciones que constituyen la expresión matemática de dichas ideas. Ahora haremos lo mismo con la física cuántica, refiriéndonos, sólo brevemente, a los trabajos de Bohr, De Broglie, Schrödinger, Heisenberg, Dirac y Born.
Consideremos el caso de un solo electrón. Éste se puede encontrar bajo la influencia de un campo electromagnético arbitrario o estar libre de toda influencia exterior. Se puede mover, por ejemplo, en el campo de un núcleo atómico o ser difractado por un cristal. La física cuántica nos enseña la manera de formular las ecuaciones matemáticas para cada uno de estos problemas.
We have already seen that there is a certain similarity between a vibrating string, the membrane of a drum, a wind instrument, or any other acoustic instrument, and a radiating or emitting atom. There is also a certain resemblance between the mathematical equations governing those acoustical problems and the mathematical equations of quantum physics. But the physical interpretation of the quantities determined in the two cases is quite different. The physical quantities describing the vibrating string and the radiating atom have entirely different meanings, despite certain analogies between the corresponding equations. In the case of the string we ask about the deviation of any one of its points from its normal position at an arbitrary moment. Knowing the shape of the string at a given instant, we know everything we wish to know: from the mathematical equations of the vibrating string the deviation from the normal can be calculated for any instant of time. This fact is expressed more rigorously as follows: at every moment the deviation from the normal position is a _function_ of the coordinates of the string. The points of the string form a one-dimensional continuum, and the deviation from the normal position is a function defined on this one-dimensional continuum, to be calculated from the equations of the vibrating string.
Analogously, in the case of an electron there exists a function that has a definite value at every point in space and at every instant of time. We shall call this function the _probability wave_. In the analogy we have been drawing, the probability wave corresponds to the string's deviation from its normal position. The probability wave at a given instant is a function of a three-dimensional continuum, whereas, as we have just said, in the case of the string the deviation at a given moment is a function of a one-dimensional continuum. The probability wave, obtained by solving the quantum equations, forms the basis of our knowledge of quantum systems and allows us to answer every question of a statistical nature about such systems. It does not, however, give us the position and velocity of an electron at any instant, because such a statement has no meaning in quantum physics. But it will give us the probability of finding the electron at a particular spot in space, or where the probability of finding it is greatest. The result holds not for a single measurement, but for measurements repeated a great many times. The equations of quantum physics determine the probability wave just as Maxwell's equations determine the electromagnetic field and the gravitational equations determine the gravitational field. The laws of quantum physics are again structure laws. But the meaning of the concepts determined by the equations of quantum mechanics is much more abstract than that of the electromagnetic and gravitational fields; their equations provide only the mathematical means of answering questions of a statistical nature.
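The statistical reading described above can be stated compactly. A sketch in modern notation, assuming the standard symbol ψ for the probability wave (the book itself uses no formulas here):

```latex
% The probability wave \psi(x, y, z, t) of a single electron gives,
% for a small volume dV around the point (x, y, z) at time t,
% the probability of finding the electron there:
P \, dV = |\psi(x, y, z, t)|^{2} \, dV ,
% with the total probability over all of space normalized to one:
\int |\psi|^{2} \, dV = 1 .
```

This is exactly the sense in which the result "holds not for a single measurement, but for measurements repeated a great many times": |ψ|² predicts relative frequencies, not individual outcomes.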
So far we have treated only the case of one electron. If we were dealing not with one electron but with a charge of respectable magnitude, containing billions of electrons, we could disregard quantum theory altogether and treat the problem according to pre-quantum physics. Speaking of currents in a wire, of charged conductors, of electromagnetic waves, we can apply classical physics, contained in Maxwell's equations. But we cannot do so when treating the photoelectric effect, the intensity of spectral lines, radioactivity, the diffraction of electron waves and the many other phenomena in which the quantum character of matter and energy reveals itself. We must then, so to speak, go one floor higher. Whereas in classical physics we spoke of the positions and velocities of a particle, we must now consider probability waves in a three-dimensional continuum.
Quantum physics gives us rules for treating a given problem, provided we know how to treat an analogous problem from the point of view of classical physics.
For one elementary particle, an electron or a photon, we have probability waves in a three-dimensional continuum. But what happens in the case of two particles acting upon each other? We cannot treat them separately, that is, describe each of them by a three-dimensional probability wave, precisely because of their interaction. It is not difficult, however, to guess how to treat, from the quantum point of view, a system composed of a pair of particles. We must now descend to the lower floor, returning for a moment to classical physics. The position of two material particles at any instant is characterized by six numbers, three for each particle. All possible positions of two material points form a six-dimensional continuum. If we now climb back to the upper floor, to quantum physics, we shall have probability waves in a six-dimensional continuum. Analogously, for three, four and more particles, the probability waves will be functions in a continuum of nine, twelve and more dimensions.
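The dimension counting in the paragraph above can be written out explicitly. A sketch, again assuming the conventional symbol ψ (not used in the text itself):

```latex
% One particle: a wave on a 3-dimensional continuum,
\psi_{1} = \psi(x, y, z, t) ,
% two interacting particles: a single wave on a 6-dimensional continuum,
\psi_{2} = \psi(x_{1}, y_{1}, z_{1},\; x_{2}, y_{2}, z_{2},\; t) ,
% and in general N particles give a 3N-dimensional configuration space:
\psi_{N} = \psi(\mathbf{r}_{1}, \dots, \mathbf{r}_{N}, t) .
```

Note that ψ₂ is one wave in six dimensions, not two waves in three: this is precisely why interacting particles cannot be described separately.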
This shows clearly that probability waves are more abstract than the electromagnetic and gravitational fields, which exist and spread out in our three-dimensional space. The background of the probability waves is a many-dimensional continuum, which reduces to a three-dimensional one, like our space, only in the simplest case of a single elementary particle. The only physical significance of the probability wave is that it enables us to answer sensible statistical questions in the case of one or of many elementary particles. Thus, for one electron we could ask about the probability of finding it at a certain spot in space. For two particles the question could be put like this: what is the probability of finding them at two particular places in space at a given instant of time?
Our first step toward quantum physics was the abandonment of the description of elementary cases as objective happenings in space and time. We were forced to apply the statistical method provided by the probability waves. Having chosen this path, we were obliged to go further along it, ever deeper into abstraction, introducing many-dimensional probability waves for problems of more than one particle.
Let us, for brevity, call classical physics everything that is not quantum physics; we can then say that classical physics differs radically from quantum physics. The former aims at describing objects existing in space and at formulating laws governing their changes in time. But, we repeat, the phenomena revealing the particle and wave character of matter and radiation, the apparently statistical character of events such as radioactive disintegration, diffraction, the emission of spectral lines and many others, forced us to give up the classical conception. Quantum physics does not aim at describing individual particles in space and their changes in time. There is no place in quantum physics for statements such as: "this particle is so-and-so, and has these or those properties." Instead we have statements of this kind: "there is such-and-such a probability that a particle is so-and-so and has these or those properties." We insist: there is no place in quantum physics for laws governing the changes in time of objects taken individually; instead, we possess laws governing the changes in time of the probability. Only this fundamental change, brought into physics by quantum theory, made it possible to explain the apparently discontinuous and statistical nature of events in the realm of phenomena in which the elementary quantum of matter and the elementary quantum of radiation reveal their existence.
Yet other, still more difficult problems have arisen, problems that have not yet been definitively solved. In what follows we shall mention only a few of these unsolved problems. Science is not, and will never be, a closed book. Every important advance brings new questions. Every development reveals, in the long run, new and deeper difficulties.
We already know that in the simple case of one or many particles we can pass from the classical treatment to the quantum treatment, from the objective description of events in space and time to probability waves. But let us not forget the fundamental field concept of pre-quantum physics. How are we to describe the interaction between the field and the elementary quanta of matter? If a probability wave of thirty dimensions is needed for the quantum description of a system of ten particles, then a probability wave of an infinite number of dimensions would be needed to interpret the field from the quantum point of view. The transition from the classical field concept to the corresponding problem of probability waves in quantum physics is a step beset with very grave difficulties. Climbing one floor higher here is no easy task, and all attempts made so far to solve this problem must be regarded as unsuccessful. Another fundamental problem is this: in all our discussions of the transition from classical to quantum physics we have used the pre-relativistic point of view, in which space and time are treated differently. If we try instead to begin from the classical description proposed by the theory of relativity, our ascent to quantum theory appears far more complicated. This is another problem attacked by modern physics, and one still far from a complete and satisfactory solution. Finally, there is the difficulty met in trying to formulate a consistent physics for the heavy particles constituting the atomic nuclei. In spite of the wealth of experimental data and of the many attempts to throw light on the nuclear problem, we are still in the dark about some of the most fundamental questions in this domain.
There is no doubt that quantum physics explains a very rich variety of facts, achieving, for the most part, splendid agreement between theory and observation. The new quantum physics removes us further and further from the classical mechanistic conception, and a retreat to the former point of view seems, today more than ever, unlikely. But there is also no doubt that quantum physics still rests on the two concepts, matter and field. It is, in this sense, a dualistic theory, and does not bring the old problem of reducing everything to the field concept even one step nearer solution.
Will future development follow the line chosen by quantum physics, or is it more likely that new and revolutionary ideas will be introduced? Will the road of scientific progress make another sharp turn, as it has so often done in the past?
During the last few years all the difficulties of quantum physics have been concentrated around a few principal points. Physics awaits their solution impatiently. But we cannot foresee when and where the clarification of these difficulties will be brought about.
#### PHYSICS AND REALITY
What general conclusions can be drawn from the development of physics, which we have just sketched by following only its most fundamental ideas?
Science is not just a collection of laws, a catalogue of unrelated facts. It is a creation of the human mind, with its freely invented ideas and concepts. Physical theories try to form a picture of reality and to establish its connection with the wide world of sense impressions. Thus the only justification for our mental structures lies in the degree and the manner in which our theories achieve such a connection.
We have seen new realities created during the progress of physics. But this chain of creation can be traced back far beyond the starting point of physics. One of the most primitive concepts is that of an object. The concepts of a tree, a horse, of any material body, are creations gained from experience, even though the impressions from which they arose are primitive compared with the world of physical phenomena. A cat hunting a mouse also creates, by thought, its own primitive reality. The fact that the cat reacts in a similar way toward any mouse it meets shows that it forms concepts and theories that guide it through its own world of sense impressions.
"Three trees" is something different from "two trees." Again, "two trees" is not the same as "two stones." The concepts of the pure numbers 2, 3, 4..., abstracted from the objects from which they arose, are creations of the thinking mind, creations that help to describe the reality of our world.
The psychological, subjective feeling of time enables us to order our impressions, to state that one event precedes another. But to connect every instant of time with a number, by the use of a clock, to regard time as a one-dimensional continuum, is already an invention. So also are the concepts of Euclidean and non-Euclidean geometry, and of our space understood as a three-dimensional continuum.
Physics really began with the invention of the concepts of mass, force and inertial system. All these concepts are free inventions. They led to the formulation of the mechanistic conception. For the physicist of the early nineteenth century, the reality of our outer world consisted of particles with simple forces acting between them, depending only on the distance separating them. He tried to retain, as long as possible, his belief that all natural events could be explained by means of these fundamental concepts of reality. The difficulties connected with the deflection of a magnetic needle by an electric current, and those connected with the problem of the structure of the ether, induced us to create a subtler reality. Thus appeared the important discovery of the electromagnetic field. A courageous scientific imagination was needed to realize that the behavior of bodies might cease to be essential for ordering and understanding events, and that the behavior of something between them might be essential instead.
Later developments destroyed old concepts and created new ones. Absolute time and the inertial coordinate system were abandoned by the theory of relativity. The one-dimensional time continuum and the three-dimensional space continuum ceased to be the background or stage of all natural events; they were replaced by the four-dimensional space-time continuum, another free invention with new transformation properties. The inertial coordinate system ceased to be indispensable. Every coordinate system is equally suited for the description of events in nature.
Quantum theory, in turn, created new and essential features of reality. Discontinuity replaced continuity. Instead of laws governing individual cases, probability laws appeared.
The reality created by modern physics is, indeed, far removed from the reality of the early days. But the aim of every physical theory remains the same.
With the help of physical theories we try to find our way through the maze of observed facts, to order and understand the world of our sense impressions. We want the observed facts to follow as a logical consequence of our concept of reality. Without the belief that it is possible to grasp reality with our theoretical constructions, without the belief in the inner harmony of our world, there could be no science. This belief is, and always will remain, the fundamental motive for all scientific creation. Throughout all our efforts, in every dramatic struggle between old and new conceptions, we recognize the eternal longing for understanding, the ever-firm belief in the harmony of our world, a belief continually strengthened as ever-growing obstacles to comprehension are met.
#### SUMMARY
The vast and varied multitude of facts in the realm of atomic phenomena again forces us to invent new physical concepts. Matter has a granular structure; it is composed of elementary particles, the elementary quanta of matter. Electric charge also has a granular structure and, most important from the point of view of quantum theory, so has energy. Photons are the energy quanta of which light is composed.
Is light a wave or a shower of photons? Is a beam of electrons a shower of elementary particles or a wave? These fundamental questions of physics are forced upon us by experiment. In seeking to answer them we must abandon the description of atomic events as happenings in space and time, and retreat still further from the mechanistic conception. Quantum physics formulates laws governing crowds and not individuals. It describes not properties but probabilities; we have no laws disclosing the future of systems, but laws expressing the changes in time of the probabilities, laws that refer to large aggregations of individuals.
# VI
## AUTOBIOGRAPHICAL NOTES
Albert Einstein's remark about himself was famous: "Do not worry about your difficulties in mathematics. I can assure you that mine are still greater." Though modest about his abilities, and often caricatured as a poor student (in fact he was simply a badly directed one), Einstein showed a singularly intense curiosity about the natural world, and a drive to learn everything he could about the mathematical and scientific canons. In his "Autobiographical Notes," Einstein presents his own unusual scientific history — unusual, above all, for being filled with equations.
This work, perhaps more than any other in this volume, takes us into why Einstein became the icon he still is. In describing his own education, Einstein gives us a guided tour of the _statu quo_ of science in his youth. As he gradually describes both his own contributions and those of others to relativity and quantum mechanics, we begin to realize how thoroughly the world of physics was revolutionized during his lifetime.
At only twelve, Einstein first read Euclid's _Elements_, or what he called the holy little geometry book. He was overwhelmed by the idea that from a few simple principles one could derive proofs that apply to the real universe, and he spent the rest of his life in search of such proofs, even though on occasion he was disconcerted to find his intuition contradicting what could be, or was, observed. For example, the theory of Euclidean geometry formed the basis of our understanding of the physical universe. Starting from the assumption that physics is the same for all observers and that time flows at a constant rate, Sir Isaac Newton's mechanics could be deduced directly from Euclid.
Despite his admiration for their work, Einstein would ultimately be responsible for overturning two major pillars: first, Euclidean geometry as the coordinate system of our universe; and second, Newtonian mechanics as the foundation of physics. For much of the nineteenth century the dogma was that Newton's laws of motion formed the fundamental basis from which all future discoveries had to proceed. Newton's picture, simply put, was that all forces in the universe were produced by particles, and that all of physics could be described in terms of their interactions.
By the time Einstein was born, several leading figures had already emerged around the study of the corpuscular nature of physics. In 1864 James Clerk Maxwell developed a theory of electrodynamics. Einstein drew considerable inspiration from Maxwell in two important ways. First, Maxwell's equations showed that an electromagnetic wave (light) propagates at a constant speed, independent of the speed of its source. This was a major pillar of Einstein's special theory of relativity. Second, Maxwell's equations formed a field theory: it was the electric and magnetic fields that determined how charged particles behaved, not the charged particles interacting directly with one another. It may seem a subtle distinction, but it is an important one. Ultimately this field concept would form the basis not only of electromagnetism, but also of the advances toward unifying the fundamental forces of nature.
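The first point — that Maxwell's equations fix a propagation speed independent of the source — can be sketched explicitly. The following is the standard vacuum form of the equations in SI units, added here for illustration (the text itself quotes no formulas):

```latex
% Maxwell's equations in vacuum:
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0,
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_{0} \varepsilon_{0} \frac{\partial \mathbf{E}}{\partial t}.
% Taking the curl of the last two and combining yields a wave equation,
\nabla^{2} \mathbf{E} = \mu_{0} \varepsilon_{0} \frac{\partial^{2} \mathbf{E}}{\partial t^{2}},
% whose propagation speed is fixed by two constants of nature:
c = \frac{1}{\sqrt{\mu_{0} \varepsilon_{0}}} \approx 3 \times 10^{8} \ \mathrm{m/s}.
```

Since c depends only on μ₀ and ε₀, nothing about the source's motion enters — the tension with Galilean relativity that Einstein's special theory resolved.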
Einstein concludes his notes with a discussion of general relativity, his theory of gravitation. Part of the elegance of this theory lies in the fact that, to anyone who examines its mathematics, it looks almost identical to Maxwell's theory of electrodynamics. This fact did not escape Einstein. Indeed, one of his greatest and most persistent disappointments was his inability to unite electromagnetism and gravity in a single unified theory, which remains to this day one of the great unsolved problems of theoretical physics.
### 1
### AUTOBIOGRAPHICAL NOTES*
Here I sit, at sixty-seven, ready to write something like my own obituary, not only because Dr. Schilpp has persuaded me, but because I wish to show my companions in struggle how much I value, in retrospect, their aspirations and researches. On reflection I saw that any such attempt would necessarily be imperfect, for although one may outline and summarize a lifetime of work with its ups and downs, it would not be at all easy to convey its essence: the man of sixty-seven today is different from the man of fifty, different from the man of thirty, a stranger to the man of twenty. Memory of the past is overlaid by the present, for we contemplate it from a deceptive perspective. This realization might deter anyone from such an undertaking, but only one's own experience yields something that is beyond the reach of others.
Very early I became conscious of the futility of the hopes and strivings that chase most men restlessly through life. Nor did it take me long to see how cruel that chase was, its harshness then disguised by hypocrisy and fine words better than it is today. Necessity condemned everyone to take part in this race for survival; but even when it met his physical needs, the race did not satisfy man as a thinking and feeling being. Religion was the first way out that the educational machine offered children. Thus, although my parents were entirely irreligious Jews, I was deeply religious until I turned twelve. The popular-science books I read showed me that the biblical stories could not be true, and I consequently ended up a fanatic freethinker, deeply impressed by the conviction that the State deliberately lies to the young. The impression of those years grew into a mistrust of every kind of authority, a skepticism toward the beliefs of any society — an attitude I never abandoned, though it was later tempered when I gained a better insight into causal connections.
I know now that the loss of that religious paradise of childhood was my first attempt to free myself from the "merely personal" chains of a life dominated by dreams, longings and primitive feelings. Beyond and independent of men stretched the great world, which stands before us mysterious, immense and eternal, yet at least partly comprehensible through inquiry and thought. Contemplating it seemed to offer liberation from those chains, and I soon felt that more than one man whom I valued and admired had found freedom and inner peace in devoting himself to it. Consciously or unconsciously, the comprehension of this world by means of my own capacities became my supreme goal. I resolved to follow those men who, moved by the same motives, in the past and in the present, had begun to understand it. Though the road was not as easy or as alluring as the religious paradise, it has proved so reliable that I have never regretted taking it.
What I have just said is true only in part, just as the strokes of a drawing reproduce neither the details nor the complexity of an object. When a man delights in reasoning, that side of his nature may come to dominate the others and gradually determine his whole mentality. Looking back, he will see a systematic, unified development in experiences that were lived, at the time, as through a kaleidoscope of singular situations, for the variety of external circumstances and particular moments imply a kind of atomization of each person's life. The decisive turn in a man of my type comes when attention is progressively withdrawn from the momentary and the merely personal and directed toward the striving to grasp things conceptually. Brief as they are, these quick conclusions contain as much truth as their concision allows.
What, precisely, is "thinking"? The memory images that arise when we receive sense impressions do not yet constitute "thinking"; nor can we speak of "thinking" when such images are strung into sequences that call up further images. But when a particular image recurs in many such sequences, then, precisely by being recurrent, it acts as an ordering element and connects sequences that were at first unconnected. Such an element becomes a tool, a concept. I suspect that the passage from free association, from imagination, to thinking depends on the importance of the role played by the "concept." A concept need not, in fact, be attached to a sensorily perceptible and reproducible sign (a word), but when it is, thinking becomes communicable.
The reader will ask by what right I handle such ideas so lightly and so elementarily, without even attempting to prove anything. In my defense I plead that all our thinking is of this kind: a free play with concepts whose justification lies in the degree of understanding of our sense experiences it affords. In my view the concept of "truth" cannot be applied to such a structure; it only comes into play once a general agreement ( _convention_ ) on the elements and rules of the game has been reached.
There is no doubt that thinking goes on for the most part without the use of signs (words), and largely unconsciously; for how else could we explain our spontaneous "wondering" at some experience? Such wondering arises from a conflict between a new experience and our system of concepts; when the clash is intense, it reacts decisively on our ideas, whose development is, in a certain sense, a continual flight from "wonder."
I experienced a wonder of this kind at the age of four or five, when my father showed me a compass. The needle's determined behavior did not at all fit the kind of happenings that belonged to the unconscious world of concepts (action bound up with "contact"). I believe I can remember that this experience made a deep and lasting impression on me. Something deeply hidden had to lie behind things. Man does not react in this way to what he has watched since childhood: he is not surprised at falling bodies, at wind and rain, at the Moon not falling down, or at the difference between the animate and the inanimate.
At twelve I experienced a second wonder of a very different kind, through a little book on Euclidean plane geometry that came into my hands at the beginning of a school year. Its pages asserted, for example, that the intersection of the three altitudes of a triangle in one point could be proved. The certainty and assurance of its statements made an impression on me that is difficult to describe. It did not disturb me that the axioms had to be accepted without proof, for it was already extraordinary to me that demonstrations could be built on postulates whose validity I did not doubt. Before that holy little geometry book came into my hands, one of my uncles had told me about the Pythagorean theorem. After arduous effort I "proved" the theorem on the basis of the similarity of triangles, for it was "evident" to me that the ratios of the sides of a right triangle were fully determined by one of its acute angles. Only what did not seem "evident" to me in this sense appeared, as I saw it, to need proof. Nor did the objects geometry studies seem to me different from the objects of sense perception, those "which can be seen and touched." This primitive conception, probably the foundation of the famous Kantian "synthetic judgments a priori," rests on the fact that the relation between geometrical concepts and objects of experience (rigid rod, interval, etc.) is present unconsciously.
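The proof Einstein describes, based on the similarity of triangles, can be reconstructed in a few lines. This is the standard reconstruction; the notes themselves do not give the details:

```latex
% Drop the altitude from the right angle onto the hypotenuse c,
% splitting it into segments p and q with p + q = c.
% Each of the two smaller triangles is similar to the whole one
% (they share an acute angle and a right angle), so
\frac{a}{c} = \frac{p}{a} \;\Rightarrow\; a^{2} = c\,p, \qquad
\frac{b}{c} = \frac{q}{b} \;\Rightarrow\; b^{2} = c\,q .
% Adding the two relations gives the theorem:
a^{2} + b^{2} = c\,(p + q) = c^{2} .
```

The proof rests on exactly the fact young Einstein took as "evident": that one acute angle fixes all the side ratios of a right triangle.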
Although it seemed that the objects of experience could be known through pure thinking, this "miracle" rested on an error. Yet for anyone experiencing it for the first time, it is marvelous enough that man can attain such a degree of certainty and purity as that of the geometrical knowledge developed by the Greeks.
Since I have incidentally run ahead of myself and interrupted this obituary almost at its start, I shall briefly set out my epistemological credo, which took shape slowly in my mature years.
I see on the one side the totality of sense experiences and, on the other, the totality of the concepts and propositions set down in books. The relations among concepts and propositions are logical, and logical thinking amounts to establishing the connections between concepts and propositions according to fixed rules, which are the concern of logic. Concepts and propositions acquire "meaning" or "content" only through their relation to sense experiences. The link between the latter and the former is purely intuitive; it is not itself of a logical nature. All that distinguishes empty speculation from scientific "truth" is the degree of certainty with which that intuitive link can be established. The system of concepts, together with the syntactic rules that form the structure of conceptual systems, is a human construction. First, conceptual systems are in themselves logically arbitrary, though subordinated to the aim of coordinating concepts with sense experiences as surely (intuitively) and completely as possible; second, they aspire to the greatest economy in their logically independent elements (fundamental concepts and axioms), that is, undefined concepts and underived propositions.
A proposition is correct if, within a logical system, it is deduced according to the system's logical rules. A system possesses truth content according to the degree of certainty and completeness with which it can be coordinated with the totality of experience. A correct proposition borrows its "truth" from the truth content of the system to which it belongs.
Apuntaré una observación sobre su evolución histórica. Hume advirtió que determinados conceptos, como el de causalidad, no pueden derivarse lógicamente de la experiencia; Kant, totalmente convencido de la necesidad de determinados conceptos, los consideró premisas necesarias de todo pensamiento, diferenciados de los conceptos de origen empírico. Yo considero que esa distinción es errónea o, en cualquier caso, que no plantea el tema con naturalidad. Todos los conceptos, incluso los más cercanos a la experiencia, son, desde el punto de vista lógico, supuestos libres, como lo es el concepto de causalidad, el punto de partida de esta cuestión.
Pero continuemos con mi necrología. De los doce a los dieciséis años me familiaricé con los elementos de las matemáticas, incluidos los principios del cálculo diferencial e integral. Por fortuna, di con libros no demasiado escrupulosos con el rigor lógico, pero que destacaban con claridad las ideas más importantes. Su lectura me fascinó tanto como la geometría elemental: la idea fundamental de la geometría analítica, las series infinitas, los conceptos de diferencial e integral. También tuve la suerte de toparme con la obra de divulgación científica de Bernstein, cinco o seis tomos que leí con avidez y que me acercaron al conocimiento de los resultados y métodos esenciales de toda la ciencia natural. También había estudiado algo de física teórica antes de ingresar a los diecisiete años en el Politécnico de Zurich como estudiante de matemáticas y física.
Mis profesores allí eran excelentes; con Hurwitz y Minkowski, por ejemplo, podría haber profundizado en mis conocimientos matemáticos, pero yo, fascinado por el contacto directo con la experiencia, pasaba muchas horas trabajando en el laboratorio de física y empleaba las horas restantes en estudiar en casa las obras de Kirchhoff, Helmholtz, Hertz, etcétera. Mi actitud algo descuidada hacia las matemáticas no se debía sólo a mi atracción por las ciencias naturales sino también a una resolución mía bastante curiosa: a mi parecer, la matemática estaba dividida en tantas especialidades, cada una de ellas tan absorbente que podía exigir la dedicación de toda una vida, que me hacía sentir como el asno de Buridán, incapaz de elegir uno de los montones de heno. Evidentemente mi sensibilidad matemática no era tan profunda como para distinguir entre los conocimientos básicos, los conocimientos fundamentales y cuestiones más o menos accesorias. Aparte, indudablemente mi interés por estudiar la naturaleza era más intenso y en mis años de estudiante no comprendía todavía que alcanzar los conocimientos físicos fundamentales exigía dominar los métodos matemáticos más sutiles, vínculo que vislumbré poco a poco después de años trabajando como científico. También la física contaba con numerosas especialidades y también cada una de ellas podía ocupar toda una trayectoria laboral sin satisfacer el deseo de alcanzar un conocimiento profundo, pues el número de datos experimentales apenas relacionados era igualmente descomunal, pero en este campo aprendí muy pronto a rastrear y seleccionar las pistas que podían conducir a la esencia, prescindiendo de la multitud de datos que nos saturan y nos desvían de nuestro objetivo. 
El inconveniente era que, quisieras o no, debías estudiar toda la materia y esa imposición resultó tan espantosa que, después de aprobar el examen final, perdí el interés en resolver problemas científicos durante todo un año, pero debo decir que en Suiza sufríamos menos que en otros países esa imposición que ahoga el auténtico impulso científico, ya que sólo había dos exámenes y el resto del tiempo uno podía centrarse en sus intereses y más si contaba, como yo, con un amigo que asistía con regularidad a clase y tomaba buenos apuntes. Así, a cambio de la mala conciencia que sobrellevé gustosamente, gocé de la libertad de poder elegir en qué ocuparme hasta un par de meses antes del examen. Es milagroso que los sistemas de enseñanza modernos no hayan matado ya la curiosidad por la investigación, una plantita que necesita no sólo el estímulo sino también la libertad, pues sin ella forzosamente se marchita. Creer que la ilusión de observar e investigar puede fomentarse a golpe de imposiciones y deberes es un grave error: un animal de presa sano podría perder su voracidad si, a latigazos, fuera obligado a comer continuamente aunque no estuviera hambriento, sobre todo si los alimentos administrados bajo semejante coacción fueran elegidos intencionadamente.
En cuanto a la física de entonces, pese a ser fecunda en cuestiones concretas, la rigidez dogmática se imponía en sus principios: en su origen —si lo hubo—, Dios creó las masas, las fuerzas y las leyes del movimiento de Newton y eso es todo; a partir de esa base se debían alcanzar los métodos matemáticos adecuados por deducción, y durante el siglo XIX así se hizo, especialmente a través de la aplicación de las ecuaciones diferenciales en derivadas parciales. Sus resultados despertarían la admiración de cualquier persona atenta. La teoría de la propagación del sonido de Newton puso de manifiesto por primera vez la potencia de la ecuación diferencial en derivadas parciales. Euler ya había establecido el fundamento de la hidrodinámica, pero la elaboración más detallada de la mecánica de masas discretas como base de toda la física fue obra del siglo XIX. Ahora bien, lo que más impresionaba al estudiante no era tanto la estructura técnica de la mecánica y la resolución de complicados problemas como los logros de la mecánica en campos que aparentemente nada tenían que ver con ella: la teoría mecánica de la luz —que la consideraba un movimiento ondulatorio de un éter elástico cuasi rígido— y, en especial, la teoría cinética de los gases: la independencia del calor específico de gases monoatómicos con respecto al peso atómico, la derivación de la ecuación de los gases y su relación con el calor específico, la teoría cinética de la disociación de los gases y, sobre todo, la relación cuantitativa entre viscosidad, conducción térmica y difusión de los gases, que determinaba el tamaño absoluto del átomo. Estos resultados justificaban la consideración de la mecánica como fundamento de la física y como fundamento de la hipótesis atómica, anclada ya firmemente en la química. 
Sin embargo, en la química, sólo las razones entre las masas de los átomos desempeñaban un papel, no sus magnitudes absolutas, de manera que cabía contemplar la teoría atómica más como una exposición aclaratoria que como conocimiento de la estructura fáctica de la materia. Aparte, la teoría estadística de la mecánica estaba en condiciones de deducir las leyes fundamentales de la termodinámica, tarea ya emprendida exitosamente por Boltzmann.
En consecuencia, no debe extrañarnos que la gran mayoría de los físicos decimonónicos consideraran la mecánica clásica una base firme y definitiva de toda la física y de toda la ciencia natural, ni tampoco que intentaran una y otra vez basar también la teoría de Maxwell del electromagnetismo en la mecánica. Incluso Maxwell y Hertz —reconocidos retrospectivamente y con justicia como los que quebrantaron la fe en la mecánica como base definitiva de todo el pensamiento físico— se atuvieron en el plano del pensamiento consciente a la consideración de la mecánica como fundamento de la física. La _Historia de la mecánica_, de Ernst Mach, perturbó esa fe dogmática y, durante mis años de estudiante, me afectó profundamente. Aunque entonces me impresionó también su postura epistemológica —que hoy me parece absolutamente insostenible—, la verdadera grandeza de Mach radica en su escepticismo y su independencia incorruptibles. Metodológicamente, Mach no calibró el carácter constructivo y especulativo de todo pensamiento, especialmente del pensamiento científico, y condenó así la teoría allí donde ese carácter se manifiesta de manera inconfundible, por ejemplo, en la teoría cinética de los átomos.
Antes de iniciar la crítica a la consideración de la mecánica como fundamento de la física, debo apuntar las perspectivas desde donde cabe criticar las teorías físicas. La primera perspectiva es inmediata: la teoría no puede contradecir hechos de la experiencia. Aunque a primera vista este requisito parezca evidente, su aplicación es compleja pues prácticamente siempre es posible aferrarse a un fundamento teórico general ajustándolo a los hechos a partir de nuevos supuestos artificiales. En cualquier caso, este primer punto de vista deriva de la comparación entre la teoría y el material empírico.
La segunda perspectiva no se refiere a la relación con el material de observaciones, sino a la relación con las premisas de la propia teoría, con lo que de manera rápida pero incorrecta, cabe llamar «naturalidad» o «simplicidad lógica» de las premisas, es decir, de los conceptos fundamentales y de las relaciones subyacentes entre ellos. Este punto de vista, cuya exacta formulación tropieza con grandes dificultades, siempre ha jugado un papel importante en la elección y evaluación de las teorías. Aun en el caso de que se pudieran precisar, no se trata de hacer un simple recuento de las premisas lógicamente independientes, sino de realizar una valoración recíproca de cualidades inconmensurables. Además, entre las teorías que se basan en fundamentos igual de «simples», hay que juzgar superior aquella que más limite las posibles cualidades de los sistemas, es decir, aquella que contiene los enunciados más específicos. No es necesario comentar ahora el «alcance» de las teorías, pues esta explicación se limita a teorías cuyo objeto es la _totalidad_ de los fenómenos físicos. En resumen, el segundo punto de vista se caracteriza como aquel que se refiere a la «perfección interna» de la teoría, mientras que el primero tiene que ver con la «confirmación externa». Entiendo que también pertenece a la «perfección interna» la siguiente consideración: valoramos una teoría tanto más cuanto no sea una elección arbitraria, desde el punto de vista lógico, entre teorías intrínsecamente equivalentes y de análoga estructura.
La falta de precisión de las afirmaciones anteriores no se debe a la falta de espacio tipográfico: confieso que soy incapaz de sustituir esas indicaciones por definiciones más precisas, aunque creo que sería posible una formulación más nítida. En cualquier caso, se ha comprobado que los «augures» suelen coincidir en sus juicios sobre la «perfección interna» de las teorías y más aún sobre el grado de «confirmación externa».
Emprendamos ya la crítica de la mecánica como base de la física.
Desde el primer punto de vista (confirmación por los hechos), la incorporación de la óptica ondulatoria a la representación mecánica del mundo forzosamente tenía que levantar desconfianza. Si se consideraba la luz un movimiento ondulatorio en un cuerpo elástico (éter), éste tenía que ser un medio permeable a todo, análogo en esencia —por la transversalidad de las ondas luminosas— a un cuerpo sólido, sólo que incompresible, de manera que no existirían ondas longitudinales. Como el éter no parecía ofrecer resistencia alguna al movimiento de los cuerpos «ponderables», su existencia debía de ser fantasmal al margen del resto de la materia. Para explicar los índices de refracción de los cuerpos transparentes, así como los procesos de emisión y absorción de la radiación, habría sido necesario suponer interacciones complicadas entre ambas clases de materia, pero no se intentó en serio y ni mucho menos se logró.
Además, las fuerzas electromagnéticas exigían introducir masas eléctricas que, si bien no poseían una inercia apreciable, ejercían entre sí interacciones que, al contrario que la fuerza gravitatoria, eran polares.
Después de muchas dudas, la electrodinámica de Faraday y Maxwell hizo perder a los físicos la fe en fundar toda la física en la mecánica de Newton, pues la teoría de aquéllos y su confirmación por los experimentos de Hertz demostraron que hay procesos electromagnéticos que, en sí mismos, están desligados de cualquier materia ponderable: las ondas que consisten en «campos» electromagnéticos en el espacio vacío. Para que la mecánica continuara siendo el fundamento de la física, debían interpretarse mecánicamente las ecuaciones de Maxwell, posibilidad que se intentó con tesón pero sin éxito, mientras las ecuaciones resultaban cada vez más provechosas. Se acostumbró a trabajar con estos campos como si fueran sustancias independientes, sin necesidad de explicar su naturaleza mecánica, y finalmente se abandonó casi sin darse cuenta la mecánica como base de la física, porque su adaptación a los hechos resultó inviable. Desde entonces existen dos tipos de elementos conceptuales: por un lado, puntos materiales con fuerzas a distancia entre ellos y, por otro, el campo continuo. Este estado intermedio de la física sin base unitaria para la totalidad, aunque insatisfactorio, está lejos de ser superado.
Desde el segundo punto de vista, el interno, anotaré una breve crítica a la mecánica como base de la física. En el actual estado de la ciencia, tras el abandono del fundamento mecánico, el interés de esta crítica es únicamente metodológico, pero sirve para mostrar un tipo de argumentación que en el futuro jugará un papel tanto más decisivo en la selección de teorías cuanto más se alejen sus conceptos fundamentales y axiomas de lo inmediatamente perceptible, dificultando así la comparación de las consecuencias teóricas con los hechos. El argumento de Mach —que parte del experimento del cubo, aducido ya por Newton— comienza por constatar que todos los sistemas de coordenadas «rígidos» son, desde el punto de vista de la descripción geométrica, lógicamente equivalentes entre sí. Las ecuaciones de la mecánica (incluso la propia ley de inercia, por ejemplo) sólo afirman su validez para una clase especial de semejantes sistemas, los «sistemas inerciales». El sistema de coordenadas, como objeto material, carece aquí de importancia. La justificación de la necesidad de esta elección específica exige algo que quede fuera de los objetos (masas, distancias) sobre los que versa la teoría. Por esa razón, Newton introdujo explícitamente, como factor determinante, el «espacio absoluto», en calidad de factor activo y omnipresente en todos los procesos mecánicos. Newton entiende por «absoluto» lo no influido por las masas ni por sus movimientos. Cabe añadir que existen infinitos sistemas inerciales, que se mueven uniformemente y sin rotación unos respecto de otros, y que se distinguen de todos los demás sistemas rígidos.
Mach conjetura que, al igual que las demás fuerzas en la teoría de Newton, en una teoría juiciosa la inercia debería descansar en la interacción de las masas, concepción que durante mucho tiempo creí teóricamente correcta, pero que presupone implícitamente que la teoría básica debería ser semejante a la mecánica de Newton: las masas y sus interacciones como conceptos primitivos. Este intento de solución no encaja en una teoría de campos consistente, como se explicará en los párrafos inmediatos.
La analogía siguiente, sin embargo, demuestra con especial claridad lo certera que es en esencia la crítica de Mach. Imaginemos una mecánica construida por personas que sólo conocen un pequeño fragmento de la superficie terrestre y que no pueden ver ninguna estrella. Esta gente tenderá a atribuir propiedades físicas especiales a la dimensión vertical del espacio (dirección de la aceleración de caída) y, basándose en ese concepto, afirmará que la superficie terrestre es predominantemente «horizontal». Seguramente no se dejarían influir por el argumento de que, en cuanto a sus propiedades geométricas, el espacio es isótropo y que, en consecuencia, no caben leyes físicas fundamentales que privilegien una dirección determinada; así que se inclinarían por defender, como hizo Newton, que la vertical es absoluta, como lo demuestra la experiencia, y que no hay más remedio que aceptarlo. La preferencia por la vertical frente a todas las demás direcciones espaciales es exactamente análoga a la preferencia por los sistemas inerciales frente a todos los demás sistemas de coordenadas rígidos.
Estudiemos ahora otros argumentos que se refieren igualmente a la simplicidad interna o naturalidad de la mecánica. Si uno acepta los conceptos de espacio (incluida la geometría) y tiempo, no hay realmente motivo alguno para poner reparos a la postulación de fuerzas a distancia, aun cuando semejante concepto no cuadre con las ideas que uno se forma a raíz de la experiencia bruta de la vida cotidiana. Sin embargo, otra consideración destaca el carácter primitivo de la mecánica como base de la física. En esencia, existen dos leyes:
(1) la ley del movimiento;
(2) la expresión de la fuerza o la energía potencial.
La ley del movimiento es precisa, pero también vacía mientras no se dé la expresión para las fuerzas. Ahora bien, a la hora de elegirlas existe un amplio margen de arbitrariedad, sobre todo si se elimina el requisito —de hecho nada natural— de que solamente dependan de las coordenadas (y no, por ejemplo, de sus derivadas respecto al tiempo). En el marco de la teoría es completamente arbitrario que las fuerzas gravitatorias (y eléctricas) que emergen de un punto estén determinadas por la función potencial (1/ _r_ ). Otro comentario más: se sabe desde hace mucho que esta función es la solución esféricamente simétrica de la ecuación diferencial más simple (invariante frente a la rotación) ∆f = 0; por tanto, habría sido natural interpretarlo como una señal de que esa función está determinada por una ley espacial, eliminando así la arbitrariedad en la elección de la ley de la fuerza. Éste es el primer indicio que anima a abandonar la teoría de las fuerzas a distancia, abandono que —preparado por Faraday, Maxwell y Hertz— no se inicia hasta más tarde, bajo la presión externa de los hechos experimentales.
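La observación anterior puede comprobarse con un cálculo elemental. El siguiente esbozo en Python (hipotético, no procede del texto) verifica numéricamente que f = 1/r satisface ∆f = 0 fuera del origen, usando la forma radial del laplaciano en tres dimensiones, ∆f = f″ + (2/r) f′.

```python
def laplaciano_radial(f, r, h=1e-4):
    # Laplaciano en 3D de una función con simetría esférica:
    #   Δf = f''(r) + (2/r) f'(r)
    # aproximado con diferencias finitas centradas.
    f1 = (f(r + h) - f(r - h)) / (2 * h)
    f2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return f2 + (2 / r) * f1

f = lambda r: 1.0 / r              # la función potencial del texto
for r in (0.5, 1.0, 2.0):
    print(laplaciano_radial(f, r))  # ≈ 0 en todos los casos
```

Para cualquier otra potencia de _r_ (por ejemplo 1/r²) el resultado ya no se anula, lo que ilustra el carácter privilegiado de la solución 1/r.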
Como asimetría interna de la teoría quisiera mencionar también que la masa inercial que aparece en la ley del movimiento aparece asimismo en la ley de la fuerza gravitatoria, pero no en la expresión de las restantes fuerzas. Finalmente, me gustaría señalar que la división de la energía en dos partes esencialmente diferentes, energía cinética y energía potencial, no es natural (para Hertz era un elemento tan molesto que en su última obra intentó liberar a la mecánica del concepto de energía potencial, es decir, de fuerza).
Basta ya. Newton, perdóname; tú encontraste el único camino posible en tu época para un hombre de máxima capacidad intelectual y de creación. Los conceptos que tú creaste siguen rigiendo nuestro pensamiento físico, aunque ahora sabemos que, si aspiramos a una comprensión más profunda, hay que sustituirlos por otros más alejados de la esfera de la experiencia inmediata.
El lector se preguntará asombrado si estas páginas pretenden ser una necrología y yo contestaría que, en esencia, sí, porque lo fundamental en la existencia de un hombre de mi especie es _qué_ piensa y _cómo_ piensa, no sus alegrías y sus penas. De ahí que la necrología pueda limitarse a comunicar aquellas ideas que han desempeñado un papel considerable en sus afanes. Una teoría es tanto más impresionante cuanto mayor es la simplicidad de sus premisas, cuanto más diversas sean las cosas que conecta entre sí y cuanto más amplio sea su ámbito de aplicación. De ahí la profunda impresión que ejerció sobre mí la termodinámica clásica. Es la única teoría física de contenido general de la que estoy convencido de que, en el marco de aplicabilidad de sus conceptos básicos, jamás será derribada, afirmación que dedico especialmente a los escépticos por principio.
En mis años de estudiante, la cuestión más fascinante era la teoría de Maxwell. Su aire revolucionario derivaba de la transición de fuerzas de acción a distancia a campos como magnitudes fundamentales. La incorporación de la óptica a la teoría del electromagnetismo, la relación entre la velocidad de la luz y el sistema de unidades eléctrico y magnético absoluto, así como la relación entre el coeficiente de reflexión y la conductividad metálica de un cuerpo... fue como una revelación. Además de la transición a la teoría del campo, es decir, la expresión de las leyes elementales mediante ecuaciones diferenciales, Maxwell sólo recurrió a un único paso hipotético: la introducción de la corriente de desplazamiento eléctrica en el vacío y en los dieléctricos y su efecto magnético, una innovación casi prescrita por las propiedades formales de las ecuaciones diferenciales. No dejaré de comentar la semejanza interna entre la pareja Faraday-Maxwell y Galileo-Newton: Faraday y Galileo captaron intuitivamente las relaciones, Maxwell y Newton las formularon y las aplicaron cuantitativamente.
En aquellos tiempos, la dificultad de captar la esencia de la teoría electromagnética se debía a una circunstancia muy particular. Las «intensidades del campo» y los «desplazamientos» eléctricos o magnéticos eran tratados como magnitudes igualmente elementales, considerándose el espacio vacío como un caso especial de cuerpo dieléctrico. Como portador del campo aparecía la _materia_ , no el _espacio_ , lo que implicaba que el portador del campo poseía un estado de velocidad, afirmación que debía ser válida también para el «vacío» (éter). La electrodinámica de los cuerpos en movimiento de Hertz descansa totalmente en esta actitud fundamental.
El gran mérito de H. A. Lorentz fue impulsar el cambio de manera convincente. En principio, según él, existe sólo un campo en el espacio vacío. La materia, concebida atómicamente, es el único soporte de las cargas eléctricas; entre las partículas materiales hay espacio vacío, la sede del campo electromagnético, creado por la posición y la velocidad de las cargas puntuales localizadas en las partículas materiales. La dielectricidad, la conductividad, etcétera están exclusivamente determinadas por la clase de enlace mecánico que existe entre las partículas de las que se componen los cuerpos. Las cargas de las partículas generan el campo que, por otro lado, ejerce fuerzas sobre esas cargas, determinando así el movimiento de las partículas de acuerdo con la ley del movimiento de Newton. Si lo comparamos con el sistema de Newton, el cambio radica en lo siguiente: las fuerzas a distancia son sustituidas por el campo, que a la vez describe también la radiación. Normalmente no se tiene en cuenta la gravitación, debido a su relativa insignificancia; pero siempre cabía su inclusión con sólo enriquecer la estructura del campo o ampliar las leyes maxwellianas del mismo. El físico de esta generación considera que el punto de vista adoptado por Lorentz es el único aceptable; pero en su momento fue un paso sorprendente y audaz que permitió la evolución posterior.
Al observar con sentido crítico esta fase del desarrollo de la teoría, llama la atención el dualismo que consiste en utilizar simultáneamente como conceptos fundamentales el punto material en el sentido de Newton y el campo como continuo. La energía cinética y la energía del campo emergen como cosas esencialmente distintas, lo cual parece tanto más insatisfactorio cuanto que, según la teoría de Maxwell, el campo magnético de una carga eléctrica en movimiento representaba inercia. ¿Por qué no entonces _toda_ la inercia? En ese caso, sólo habría ya energía del campo y la partícula sería únicamente una región de densidad muy alta de energía del campo. Cabría entonces la esperanza de deducir el concepto de punto másico, junto con las ecuaciones de movimiento de la partícula, a partir de las ecuaciones del campo, y el molesto dualismo quedaría eliminado.
H. A. Lorentz lo sabía de sobra. Sin embargo, las ecuaciones de Maxwell no permitían derivar el equilibrio de la electricidad que constituye una partícula. Quizá lo lograrían otras ecuaciones del campo que fuesen _no lineales_ , pero no había ningún método para descubrir semejantes ecuaciones del campo sin caer en arriesgadas arbitrariedades. En cualquier caso, estaba justificado creer que por el camino iniciado con tanto éxito por Faraday y Maxwell se iría encontrando poco a poco una base firme para toda la física.
La revolución iniciada por la introducción del campo no había terminado en absoluto. En el cambio de siglo, e independientemente de las observaciones anteriores, estalló una segunda crisis fundamental cuya seriedad pusieron repentinamente de manifiesto las investigaciones de Max Planck sobre la radiación térmica (1900). La historia de este episodio es tanto más valiosa en cuanto, al menos en su primera fase, ningún descubrimiento experimental influyó en ella.
Kirchhoff había concluido, mediante razonamientos termodinámicos, que la densidad de energía y la composición espectral de la radiación en una cavidad cerrada por paredes aislantes de temperatura _T_ son independientes de la naturaleza de éstas, es decir, la densidad de radiación monocromática _ρ_ es una función universal de la frecuencia _ν_ y de la temperatura absoluta _T_. Se planteó así el interesante problema de determinar esta función _ρ_ ( _ν_ , _T_ ). ¿Qué podía averiguarse, teóricamente, acerca de esta función? Según la teoría de Maxwell, la radiación debía ejercer sobre las paredes una presión determinada por la densidad de energía total. Desde la termodinámica, Boltzmann extrajo de aquí la conclusión de que la totalidad de la densidad de energía de radiación ( _∫_ _ρ_ _d_ _ν_ ) era proporcional a _T_ ⁴, y así justificó en la teoría una ley descubierta empíricamente antes por Stefan, es decir, conectó esta ley con el fundamento de la teoría de Maxwell. W. Wien halló después —mediante una ingeniosa consideración de orden termodinámico que también hacía uso de la teoría de Maxwell— que la función universal _ρ_ de las dos variables _ν_ y _T_ tenía que ser de la forma
_ρ_ = _ν_ ³ · _f_ ( _ν_ / _T_ ),

donde _f_ ( _ν_ / _T_ ) representa una función universal de la única variable _ν_ / _T_. Estaba claro que la determinación teórica de esta función universal _f_ era de importancia fundamental; y ésa era precisamente la tarea con que se enfrentó Planck. Mediciones cuidadosas habían conducido a una determinación bastante exacta de la función _f_ y, apoyándose en estos valores empíricos, logró en primer lugar encontrar una expresión que reflejaba bastante bien las mediciones:
_ρ_ = (8π _h_ _ν_ ³/ _c_ ³) · 1/(e^( _hν_ / _kT_ ) − 1),

donde _h_ y _k_ son dos constantes universales, la primera de las cuales condujo a la teoría cuántica. La fórmula tiene un aspecto un poco extraño debido a su denominador. ¿Podía uno justificarla teóricamente? Planck encontró efectivamente una deducción y que sus imperfecciones permanecieran al principio ocultas fue una circunstancia verdaderamente afortunada para la evolución de la física. Si la fórmula era correcta, permitía calcular, con ayuda de la teoría de Maxwell, la energía media _E_ de un oscilador cuasi monocromático dentro del campo de radiación:
_E_ = ( _c_ ³/8π _ν_ ²) _ρ_ = _hν_ /(e^( _hν_ / _kT_ ) − 1).

Planck prefirió intentar calcular teóricamente esta última magnitud. En este empeño no servía ya de nada, de momento, la termodinámica, ni tampoco la teoría de Maxwell. Lo increíblemente alentador de la fórmula era que para valores altos de la temperatura (con _ν_ fijo) daba la expresión
_E = kT_.
Esta expresión es la misma que proporciona la teoría cinética de los gases para la energía media de un punto másico capaz de oscilar elásticamente en una dimensión. En esta teoría se obtiene
_E = (R/N)T_ ,
donde _R_ es la constante de la ecuación de los gases y _N_ , el número de moléculas por mol, constante que expresa el tamaño absoluto del átomo. Igualando las dos expresiones se obtiene
_N = R/k_.
Así pues, de la única constante de la fórmula de Planck resulta el verdadero tamaño del átomo. El valor numérico casaba satisfactoriamente con las determinaciones de _N_ hechas por medio de la teoría cinética de los gases, valores que sin embargo no eran demasiado exactos.
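Con valores actuales de las constantes (no los disponibles para Planck en 1900), la relación anterior puede comprobarse en una línea; el siguiente esbozo en Python es puramente ilustrativo.

```python
# Esbozo ilustrativo con los valores actuales de las constantes:
# de la k de la fórmula de Planck se obtiene el número de moléculas
# por mol, N = R/k, y con él el tamaño absoluto del átomo.
R = 8.314462618        # J/(mol·K), constante de la ecuación de los gases
k = 1.380649e-23       # J/K, constante de Boltzmann
N = R / k
print(N)               # ≈ 6.022e23 (número de Avogadro)
```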
Planck se dio cuenta de su éxito. El asunto tiene, sin embargo, un reverso muy dudoso que Planck, afortunadamente, al principio pasó por alto. En efecto, el razonamiento exige que la relación _E_ = _kT_ sea también válida a bajas temperaturas, lo cual daría al traste con la fórmula de Planck y con la constante _h_. Así pues, la consecuencia correcta de la teoría habría sido que o bien la energía cinética media del oscilador viene mal dada por la teoría de los gases —representaría una refutación de la mecánica [estadística]— o bien la energía media del oscilador se deriva incorrectamente de la teoría de Maxwell —representaría una refutación de esta última—. En estas circunstancias, lo más probable es que ambas teorías sólo fuesen correctas en el límite, pero falsas en lo demás; así ocurre también en los hechos, como ahora veremos. Si Planck hubiese razonado así, quizá no habría hecho su gran hallazgo, porque su razonamiento habría perdido todo fundamento.
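El punto crítico anterior puede ilustrarse numéricamente. Este esbozo en Python (con valores de ν y T hipotéticos) compara la energía media del oscilador según la fórmula de Planck con el valor clásico kT: coinciden a temperaturas altas y divergen radicalmente a bajas.

```python
import math

# Esbozo ilustrativo (valores de ν y T hipotéticos): energía media de
# un oscilador según Planck, E = hν / (exp(hν/kT) − 1), frente a kT.
H, K = 6.626e-34, 1.381e-23

def E_planck(nu, T):
    return H * nu / math.expm1(H * nu / (K * T))

nu = 1e13                                  # frecuencia fija
alta = E_planck(nu, 3e5) / (K * 3e5)       # hν/kT ≈ 0.0016
baja = E_planck(nu, 30.0) / (K * 30.0)     # hν/kT ≈ 16
print(alta, baja)   # alta ≈ 1 (límite clásico E = kT); baja ≈ 0
```

A temperatura baja la equipartición falla por completo, que es justamente la contradicción que el texto señala.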
Analicemos el argumento de Planck. Sobre la base de la teoría cinética de los gases, Boltzmann había descubierto que, prescindiendo de un factor constante, la entropía era igual al logaritmo de la «probabilidad» del estado en cuestión. Así captó la esencia de los procesos «irreversibles» en sentido termodinámico. Desde el punto de vista mecánico-molecular, por el contrario, todos los procesos son reversibles. Si a un estado definido en el marco de la teoría molecular lo llamamos estado descrito microscópicamente o microestado, en abreviatura, y macroestado a un estado descrito en sentido termodinámico, entonces a cada estado macroscópico le corresponde un número inmenso ( _Z_ ) de microestados. _Z_ es la medida de la probabilidad del macroestado considerado. La idea parece además sobresaliente porque su aplicación no se reduce a la descripción microscópica sobre la base de la mecánica. Planck cayó en la cuenta y aplicó el principio de Boltzmann a un sistema compuesto de múltiples resonadores de la misma frecuencia _ν_. El estado macroscópico viene dado por la energía total de la oscilación de todos los resonadores; un microestado, por la especificación de la energía (instantánea) de cada resonador. Para poder expresar ahora mediante un número finito el número de microestados pertenecientes a un macroestado, [Planck] dividió la energía total en un número elevado pero finito de elementos iguales de energía ξ y se preguntó de cuántas maneras pueden distribuirse estos elementos de energía entre los resonadores. El logaritmo de este número da la entropía y, por tanto, termodinámicamente, la temperatura del sistema. Planck obtuvo su fórmula de la radiación al elegir elementos de energía ξ de tamaño ξ = _hν_. Lo decisivo es que el resultado depende de tomar para ξ un valor finito determinado, es decir, de no pasar al límite ξ = 0.
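El razonamiento combinatorio admite una comprobación numérica sencilla. El siguiente esbozo en Python (notación y valores hipotéticos) cuenta los microestados, obtiene la temperatura a partir de 1/T = dS/dE y verifica que la energía media por resonador reproduce la fórmula de Planck.

```python
import math

# Esbozo numérico del conteo de Planck (notación hipotética):
# N resonadores, P elementos de energía ξ; número de microestados
#   W = (N+P−1)! / (P! (N−1)!),  S = k·ln W,  1/T = dS/dE,  E = P·ξ.
k = 1.380649e-23

def ln_W(N, P):
    # ln de (N+P−1)!/(P!(N−1)!) mediante la función log-gamma
    return math.lgamma(N + P) - math.lgamma(P + 1) - math.lgamma(N)

N  = 10**6              # resonadores
xi = 1e-20              # tamaño del elemento de energía (ilustrativo)
P  = 5 * 10**5          # de modo que la energía total es E = P·ξ

# 1/T = dS/dE ≈ (k/ξ)·[ln W(P+1) − ln W(P)]
T = xi / (k * (ln_W(N, P + 1) - ln_W(N, P)))

E_medio  = P * xi / N                       # energía media por resonador
E_planck = xi / math.expm1(xi / (k * T))    # fórmula de Planck con ξ = hν
print(E_medio, E_planck)                    # deben coincidir para N, P grandes
```

Lo decisivo se aprecia aquí de nuevo: el resultado depende de que ξ sea finito; con ξ → 0 se recuperaría la equipartición clásica.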
Esta forma de razonamiento no deja traslucir su contradicción con la base mecánica y electrodinámica sobre la cual descansa el resto de la derivación, pero en realidad ésta presupone implícitamente que cada resonador sólo puede absorber y emitir energía en «cuantos» de tamaño _hv_ y que, por consiguiente, tanto la energía de una estructura mecánica capaz de oscilar como la energía de la radiación sólo puede transferirse en semejantes cuantos —contradiciendo las leyes de la mecánica y de la electrodinámica—. La contradicción con la dinámica resulta fundamental, mientras que la contradicción con la electrodinámica lo es menos, pues la expresión de la densidad de energía de radiación es ciertamente _compatible_ con las ecuaciones de Maxwell, pero no su consecuencia necesaria. Esta expresión proporciona valores medios importantes, como lo demuestra que las ecuaciones de Stefan-Boltzmann y de Wien, basadas en ella, coinciden con la experiencia.
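Como comprobación de que la expresión de la densidad de radiación reproduce los valores medios correctos, este esbozo en Python (parámetros ilustrativos) integra numéricamente la fórmula de Planck y verifica la ley de Stefan-Boltzmann: la energía total crece como la cuarta potencia de la temperatura.

```python
import math

# Esbozo ilustrativo: integrando la fórmula de Planck se comprueba que
# ∫ρ dν es proporcional a T⁴ (ley de Stefan-Boltzmann).
H, K, C = 6.626e-34, 1.381e-23, 2.998e8

def rho(nu, T):
    x = H * nu / (K * T)
    if x > 700:
        return 0.0      # cola exponencial despreciable (evita desbordamiento)
    return (8 * math.pi * H / C**3) * nu**3 / math.expm1(x)

def energia_total(T, n=20000, nu_max=2e15):
    # integración por punto medio de ∫ρ dν en [0, nu_max]
    dnu = nu_max / n
    return sum(rho((i + 0.5) * dnu, T) for i in range(n)) * dnu

razon = energia_total(600.0) / energia_total(300.0)
print(razon)    # ≈ (600/300)^4 = 16
```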
Comprendí claramente estas conclusiones en cuanto Planck publicó su trabajo fundamental, de manera que sin contar con nada que sustituyera la mecánica clásica, advertí a qué clase de consecuencias lleva esta ley de la radiación de temperatura para el efecto fotoeléctrico y otros fenómenos afines de la transformación de energía de radiación, así como para el calor específico de cuerpos sólidos en especial. Sin embargo, mis intentos de adaptar el fundamento teórico de la física a estos conocimientos fracasaron absolutamente; sentía que nos habían quitado el suelo de debajo de los pies y nadie oteaba tierra firme donde construir. Que a Bohr le bastara este fundamento inseguro y plagado de contradicciones para descubrir, con su increíble instinto y sensibilidad, las principales leyes de las rayas espectrales y de las envolturas electrónicas de los átomos, además de su importancia para la química, fue y es milagroso para mí, armonía suprema en el mundo del pensamiento.
At that time, however important they might be, I did not concentrate on the particular consequences of Planck's result, but on drawing general conclusions from the radiation formula concerning the structure and the electromagnetic foundation of physics. Before going into this question, I must briefly mention a number of investigations relating to Brownian motion and related topics (fluctuation phenomena), based on classical molecular mechanics. Not knowing the investigations of Boltzmann and Gibbs, which had appeared earlier and which in fact exhausted the subject, I developed statistical mechanics and the molecular-kinetic theory of thermodynamics. My principal aim was to find facts that would guarantee as far as possible the existence of atoms of definite finite size. I discovered that, according to atomistic theory, there would have to be an observable motion of suspended microscopic particles, without knowing that the conclusions about "Brownian motion" had long been familiar. The simplest derivation rested on the following reasoning: if the molecular-kinetic theory is correct, a suspension of visible particles must possess an osmotic pressure satisfying the gas laws, just as a solution of molecules does. This osmotic pressure depends on the effective size of the molecules, that is, on the number of molecules in a gram-equivalent. If the suspension is of non-uniform density, the resulting spatial variability of this osmotic pressure gives rise to a compensating diffusion motion, which can be calculated from the known mobility of the particles. This diffusion process can, however, also be conceived as the result of the random displacements, of initially unknown magnitude, of the suspended particles under the action of thermal agitation.
By equating the quantities obtained for the diffusion current from these two lines of reasoning, one arrives quantitatively at the statistical law for those displacements, that is, at the law of Brownian motion. The agreement of these considerations with experience, together with Planck's determination of the true molecular size from the law of radiation (for high temperatures), convinced the many skeptics of that time (Ostwald, Mach) of the reality of atoms. Their rejection of atomic theory can undoubtedly be traced back to their positivistic philosophical attitude, an interesting example of how even researchers of audacious spirit and keen instinct can be hampered by philosophical prejudices in the interpretation of facts. The prejudice, which has by no means died out since, consists in the belief that facts by themselves, without free conceptual construction, can and should supply scientific knowledge. Such a delusion is possible only because one does not easily become aware that those concepts which, through verification and long use, appear to be directly connected with the empirical material were in fact freely chosen.
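The quantitative law referred to here is the Einstein relation between the diffusion coefficient and the mean square displacement. A rough numerical sketch (the temperature, viscosity and particle radius below are illustrative values I have assumed, not figures from the text):

```python
import math

R = 8.314      # gas constant, J/(mol K)
N = 6.022e23   # molecules per mole (Avogadro's number)
T = 290.0      # absolute temperature, K (assumed)
eta = 1.0e-3   # viscosity of water, Pa s (assumed)
a = 0.5e-6     # radius of the suspended particle, m (assumed)

# Diffusion coefficient from the particle mobility (Stokes' law):
#   D = (R T) / (N * 6 pi eta a)
D = R * T / (N * 6 * math.pi * eta * a)

# Mean square displacement along one axis after time t: <x^2> = 2 D t
t = 60.0  # one minute
rms = math.sqrt(2 * D * t)
print(f"D = {D:.2e} m^2/s, rms displacement in 1 min = {rms * 1e6:.1f} um")
```

With these assumed values the root-mean-square displacement comes out to a few micrometers per minute, the order of magnitude of the observable jitter of particles suspended in water.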
The success of the theory of Brownian motion showed again conclusively that classical mechanics gives reliable results whenever it is applied to motions in which the higher time-derivatives of the velocity are negligibly small. Upon this recognition a fairly direct method can be based for learning something from Planck's formula about the constitution of radiation. One may conclude, namely, that in a space filled with radiation a quasi-monochromatically reflecting mirror, free to move perpendicularly to its plane, must execute a kind of Brownian motion whose mean kinetic energy equals 1/2 ( _R_ / _N_ ) _T_ ( _R_ = gas constant for one gram-molecule, _N_ = number of molecules per mole, _T_ = absolute temperature). If the radiation were not subject to local fluctuations, the mirror would gradually come to rest, because, as a consequence of its motion, it reflects more radiation on its front side than on its back. The mirror must, however, experience certain irregular fluctuations of the pressure acting upon it (fluctuations calculable from Maxwell's theory), because the wave packets constituting the radiation interfere with one another. Now this calculation shows that these pressure fluctuations, especially at low radiation densities, are not sufficient to impart to the mirror the mean kinetic energy 1/2 ( _R_ / _N_ ) _T_. To obtain this result one has rather to assume that there exists a second type of pressure fluctuation, not derivable from Maxwell's theory, which corresponds to the assumption that the radiation energy consists of indivisible, point-like localized quanta of energy _hν_ [and of momentum _hν_ / _c_ ( _c_ = velocity of light)] which are reflected undivided.
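Written out, the fluctuation calculation alluded to gives, for the mean square energy fluctuation in a sub-volume _V_ and frequency interval dν of spectral density ρ (this explicit formula, which Einstein published in 1909, is not in the text above and is added only for orientation):

```latex
\left\langle \varepsilon^{2}\right\rangle
  = \left( h\nu\,\rho \;+\; \frac{c^{3}}{8\pi\nu^{2}}\,\rho^{2} \right) V\,d\nu
```

The first term is the one demanded by the quantum structure (energy arriving in units _hν_); the second is the interference term that follows from Maxwell's wave theory alone, and at low radiation densities the first term dominates.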
This approach showed in a drastic and direct way that a kind of immediate reality must be ascribed to Planck's quanta, and that radiation must therefore possess, as far as its energy is concerned, a kind of molecular structure, which of course contradicts Maxwell's theory. Certain considerations on radiation based directly on Boltzmann's entropy-probability relation (probability taken as equal to statistical temporal frequency) also led to the same result. This double nature of radiation, and of material corpuscles, is a major property of reality which quantum mechanics has interpreted in an ingenious and amazingly successful fashion. I believe that this interpretation, which almost all contemporary physicists regard as definitive, is merely a temporary way out (I shall comment on this later).
Reflections of this kind made it clear to me, shortly after 1900 (right after Planck's trailblazing work had appeared), that neither mechanics nor electrodynamics could (except in limiting cases) claim exact validity. Gradually I despaired of the possibility of discovering the true laws by means of constructive efforts based on known facts. The longer and the more desperately I tried, the more I came to the conviction that only the discovery of a general formal principle could lead us to assured results. The example I saw before me was thermodynamics: its general principle derived from the theorem that the laws of nature are so constituted that it is impossible to construct a _perpetuum mobile_ (of the first and second kind). But how to find a general principle of this kind? After ten years of reflection such a principle resulted from a paradox upon which I had already hit at the age of sixteen: if I pursue a beam of light with the velocity _c_ (velocity of light in a vacuum), I should observe such a beam as a spatially oscillating electromagnetic field at rest; there seems to be no such thing, however, either on the basis of experience or according to Maxwell's equations. From the very beginning it seemed intuitively clear to me that, judged from the standpoint of such an observer, everything would have to happen according to the same laws as for an observer at rest relative to the earth. For how should the first observer know, or be able to determine, that he is in a state of fast uniform motion?
As one can see, this paradox already contains the germ of the special theory of relativity. Today, of course, no one is unaware that all attempts to clarify this paradox satisfactorily were condemned to failure as long as the axiom of the absolute character of time, or of simultaneity, remained anchored in the unconscious. Clearly recognizing this axiom and its arbitrariness already solves the problem. For me, reading the philosophical writings of David Hume and Ernst Mach was decisive.
It was necessary to understand what the spatial coordinates and the time value of an event mean in physics. The physical interpretation of the spatial coordinates presupposed a rigid body of reference, which, moreover, had to be in a more or less definite state of motion (inertial system). In a given inertial system, the coordinates represented the results of certain measurements with rigid rods (at rest). (One should keep in mind that the presupposition of the theoretical existence of rigid rods is suggested by approximate experience, but is nonetheless essentially arbitrary.) With that interpretation of the spatial coordinates, the question of the validity of Euclidean geometry becomes a problem of physics.
If one now tries to interpret the time of an event analogously, one needs some means for measuring time differences (an intrinsically determined periodic process realized by a system of sufficiently small spatial extension). A clock at rest relative to the inertial system defines a local time. The local times of all spatial points, taken together, constitute the "time" belonging to the chosen inertial system, provided these clocks have been "synchronized" with one another. One sees that a priori it is not even necessary that the "times" thus defined for different inertial systems agree with one another; this would have been noticed long ago, were it not that, for the practical experience of everyday life, light (on account of the high value of _c_ ) appears as a means for establishing absolute simultaneity.
The presupposition of the existence (in principle) of (ideal, or perfect) measuring rods is not independent of the presupposition of the existence of (likewise ideal) clocks, because a light signal reflected back and forth between the ends of a rigid rod constitutes an ideal clock, provided the postulate of the constancy of the velocity of light in vacuum does not lead to contradictions.
The above paradox may now be formulated as follows: according to the rules used in classical physics for connecting the spatial coordinates and the time of events when passing from one inertial system to another, the two assumptions
(1) the constancy of the velocity of light, and
(2) the independence of the laws (and thus, in particular, also of the law of the constancy of the velocity of light) of the choice of inertial system (principle of special relativity) are mutually incompatible (despite the fact that each, taken separately, is supported by experience).
The insight on which the special theory of relativity rests is that assumptions (1) and (2) are mutually compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events. Given the earlier physical interpretation of coordinates and time, this is not merely a conventional step, but involves definite hypotheses about the actual behavior of measuring rods and clocks which can be experimentally confirmed or refuted.
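What "compatible" means here can be checked with a minimal numerical sketch (units with _c_ = 1; the function name is mine): under the Lorentz transformation the quantity t² − x² has the same value in both inertial systems, and a signal moving with the velocity of light in one system does so in the other as well.

```python
import math

def lorentz_boost(x: float, t: float, v: float):
    """Coordinates (x', t') of the event (x, t) relative to an inertial
    system moving with velocity v along the x axis (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (x - v * t), gamma * (t - v * x)

# Example: the event x = 3, t = 5 seen from a system with v = 0.6.
x_p, t_p = lorentz_boost(3.0, 5.0, 0.6)
print(x_p, t_p)                           # -> 0.0 4.0
print(5.0**2 - 3.0**2, t_p**2 - x_p**2)   # the interval t^2 - x^2 is unchanged
```

A light signal is an event chain with x = t; applying the boost to such an event again yields x' = t', which is assumption (1) holding in the new system.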
The general principle of the special theory of relativity is contained in the postulate that the laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other inertial system). This is a restricting principle for natural laws, comparable to the restricting principle underlying thermodynamics: the non-existence of the _perpetuum mobile_.
Before going on, a remark about the relation of the theory to "four-dimensional space". It is a widespread error to believe that the special theory of relativity discovered, or in some sense newly introduced, the four-dimensionality of the physical continuum. This, of course, is not the case. Classical mechanics, too, rests on the four-dimensional continuum of space and time; only, in the four-dimensional continuum of classical physics the "sections" of constant time value possess an absolute reality, that is, one independent of the choice of reference system. The four-dimensional continuum thereby decomposes naturally into a three-dimensional one and a one-dimensional one (time), so that the four-dimensional point of view does not impose itself as _necessary_. The special theory of relativity, on the other hand, creates a formal dependence between the way in which the spatial coordinates, on the one hand, and the time coordinate, on the other, have to enter into the laws of nature.
Before his investigation, in order to test a law's invariance under Lorentz transformations one actually had to apply such a transformation to it; Minkowski, however, succeeded in introducing a formalism in which the very mathematical form of the law guarantees its invariance under Lorentz transformations. By creating a four-dimensional tensor calculus he achieved for four-dimensional space what the ordinary vector calculus achieves for the three spatial dimensions. He also showed that the Lorentz transformation (apart from a different algebraic sign due to the special character of time) is nothing but a rotation of the coordinate system in four-dimensional space.
A first criticism of Minkowski's theory concerns the fact that, besides four-dimensional space, it introduces two kinds of physical things: (1) measuring rods and clocks, (2) all other things, for example the electromagnetic field, the material point, and so on. This distinction is, in a certain sense, inconsistent, since measuring rods and clocks should really be represented as solutions of the fundamental equations (objects consisting of moving atomic configurations), not as entities that are, to some extent, theoretically self-sufficient. The procedure is justified, however, because it was clear from the beginning that the postulates of the theory are not strong enough for the equations of physical phenomena derived from them to be sufficiently complete and free from arbitrariness to found on that basis a theory of measuring rods and clocks. If one did not wish to forgo a physical interpretation of the coordinates altogether (which, in itself, would be possible), it was better to permit such an inconsistency, with the obligation of eliminating it at a later stage of the theory. One must not, however, legitimize this sin so far as to imagine, say, that distances are physical entities of a special kind, essentially different from other physical quantities ("reducing physics to geometry", and so on). Let us now ask which insights of a definitive nature physics owes to the special theory of relativity:
(1) There is no simultaneity of distant events; consequently there is also no immediate action at a distance in the sense of Newtonian mechanics. Although the introduction of actions at a distance that propagate with the velocity of light remains conceivable in this theory, it appears unnatural, for in such a theory there could be no reasonable expression for the principle of [conservation of] energy. It therefore appears unavoidable that physical reality must be described by continuous functions in space. The material point, accordingly, can no longer be considered as a basic concept of the theory.
(2) The principles of the conservation of momentum and of the conservation of energy are fused into a single principle: the inertial mass of an isolated system is identical with its energy, so that mass as an independent concept is eliminated.
Remark. The velocity of light _c_ is one of the quantities that occur in physical equations as a "universal constant". If, however, one introduces as the unit of time, instead of the second, the time in which light travels 1 centimeter, then _c_ no longer appears in the equations. In this sense the constant _c_ is only an _apparent_ universal constant.
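The remark can be made concrete with a small sketch (the variable names are mine): measure lengths in centimeters and times in "light-centimeters", the time light needs to cover 1 cm; the numerical value of the velocity of light then becomes exactly 1 and drops out of the equations.

```python
C_CM_PER_S = 2.99792458e10  # velocity of light in cm per second

def seconds_to_light_cm(t_seconds: float) -> float:
    # Express a duration in the new time unit:
    # the time light needs to travel 1 cm.
    return t_seconds * C_CM_PER_S

# A light signal covers C_CM_PER_S cm in one second; in the new units
# its velocity (distance / time) is exactly 1:
distance_cm = C_CM_PER_S * 1.0
time_new_units = seconds_to_light_cm(1.0)
print(distance_cm / time_new_units)  # -> 1.0
```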
It is obvious, and generally accepted, that two further universal constants could likewise be eliminated from physics by introducing, in place of the gram and the centimeter, suitably chosen "natural" units (for example, the mass and radius of the electron).
If one considers this done, then only "dimensionless" constants could occur in the fundamental equations of physics. Concerning these I would like to state an opinion which, for the moment, rests on nothing more than faith in the simplicity, that is, the intelligibility, of nature: such _arbitrary_ constants do not exist; that is, nature is so constituted that it is logically possible to lay down laws so strongly determined that only completely determined constants occur in them (not, therefore, constants whose numerical values could be changed without destroying the theory).
The special theory of relativity owes its creation to Maxwell's equations of the electromagnetic field. Conversely, the latter can be grasped formally in a satisfactory fashion only through the special theory of relativity: they are the simplest Lorentz-invariant field equations that can be postulated for an antisymmetric tensor derived from a vector field. This would in itself be satisfactory if we did not know from quantum phenomena that Maxwell's theory does not do justice to the energetic properties of radiation. Moreover, as to how Maxwell's theory should be modified, not even the special theory of relativity offers an adequate foothold; nor does it have any answer to Mach's question: "How does it come about that inertial systems are physically distinguished from other coordinate systems?"
That the special theory of relativity is only the first step of a necessary development became completely clear to me only when I attempted to represent gravitation within the framework of this theory. In classical mechanics, interpreted in terms of the field, the potential of gravitation appears as a _scalar_ field (the simplest theoretical possibility of a field with a single component). Such a scalar theory of the gravitational field is not easily made invariant with respect to the group of Lorentz transformations. The following program therefore appears natural: the total physical field consists of a scalar field (gravitation) and a vector field (electromagnetic field); later insights might eventually make necessary the introduction of more complicated kinds of fields, but to begin with one did not need to worry about that.
The possibility of realizing this program was, however, doubtful from the start, because the theory had to combine the following elements:
(1) From general considerations of the special theory of relativity it was clear that the _inertial_ mass of a physical system increases with the total energy (therefore, for example, with the kinetic energy).
(2) From very precise experiments (especially from Roland Eötvös's torsion-balance experiments) it was empirically known with great accuracy that the _gravitational_ mass of a body is exactly equal to its _inertial_ mass.
It followed from (1) and (2) that the _weight_ of a system depends in a precisely known way on its total energy. If the theory did not accomplish this, or could not do it naturally, it was to be rejected. The condition is most simply expressed as follows: the acceleration of fall of a system in a given gravitational field is independent of the nature of the falling system and, in particular, therefore, of its energy content as well.
The problem was that, within the framework of the program outlined, this elementary state of affairs could not be represented at all, or at least not in any natural way. This impossibility convinced me that no satisfactory theory of gravitation was possible within the framework of the special theory of relativity.
Then it occurred to me that the equality of inertial and gravitational mass or, if one prefers, the fact that the acceleration of free fall is independent of the nature of the falling substance, can be expressed as follows: in a gravitational field (of small spatial extension) things behave as they do in a space free of gravitation, provided one introduces in the latter, in place of an "inertial system", a reference system accelerated relative to it.
Thus, if one interprets the behavior of bodies relative to this latter reference system as caused by a "real" (and not merely apparent) gravitational field, one can regard this reference system as an "inertial system" with as much justification as the original reference system.
This means that, if one regards as possible gravitational fields of arbitrary extension, not restricted from the outset by spatial boundary conditions, the concept of "inertial system" becomes completely empty. The concept "acceleration relative to space" then loses all meaning and, with it, so does the principle of inertia, together with Mach's paradox.
The equality of inertial and gravitational mass thus leads quite naturally to the recognition that the basic postulate of the special theory of relativity (invariance of the laws under Lorentz transformations) is too narrow, and that an invariance of the laws must be postulated also with respect to _non-linear_ transformations of the coordinates in the four-dimensional continuum.
All of this happened in 1908. Why were another seven years required for formulating the general theory of relativity? It is not so easy to free oneself from the idea that coordinates must have an immediate metrical meaning. The transformation took place roughly as follows.
We start from an empty, field-free space as it presents itself, referred to an inertial system, in the sense of the special theory of relativity, as the simplest of all imaginable physical situations. If we now suppose that a non-inertial system is introduced, the new system being uniformly accelerated relative to the inertial system (in a three-dimensional description) in one (suitably defined) direction, then, relative to this system, there exists a static parallel gravitational field. The chosen reference system may be rigid, Euclidean in its three-dimensional metrical properties, but the time in which the field appears static is _not_ measured by stationary clocks _of identical constitution_. From this special example one can already see that the immediate metrical significance of the coordinates is lost as soon as one admits non-linear transformations of the coordinates. The latter, however, _is obligatory_ if one wants to do justice to the equality of gravitational and inertial mass through the foundation of the theory, and if one wants to overcome Mach's paradox concerning inertial systems.
If, then, one must give up attaching an immediate metrical meaning to the coordinates (coordinate differences = measurable lengths or times), one cannot avoid treating as equivalent all coordinate systems that can be generated by continuous transformations of the coordinates.
Accordingly, the general theory of relativity proceeds from the following principle: the laws of nature are to be expressed by equations that are covariant with respect to the group of continuous coordinate transformations. This group here replaces the group of the Lorentz transformations of the special theory of relativity, which forms a subgroup of the former.
This postulate by itself is not sufficient as a starting point for a derivation of the fundamental equations of physics. At first one might even contest the idea that by itself it implies a real restriction on physical laws; for, given a law formulated in principle only for certain coordinate systems, it is always possible to reformulate it so that the new formulation is covariant in form. Moreover, an enormously large number of field laws can be formulated that possess this property of covariance. The eminent heuristic significance of the general principle of relativity lies, however, in the fact that it leads us to the search for those systems of equations which, in their _generally covariant_ formulation, are the _simplest possible_ ; among these we must look for the field laws of physical space. Fields that can be brought into agreement with one another by such transformations describe the same real situation.
The fundamental question for the researcher is: of what mathematical type are the variables (functions of the coordinates) that permit the expression of the physical properties of space ("structure")? Only after that: which equations are satisfied by those variables?
At present we are still unable to answer either question with certainty. The path chosen in the first formulation of the general theory of relativity can be characterized as follows: even though we do not know by what kind of field variables (structure) physical space is to be characterized, we do know with certainty one special case, the "field-free" space of the special theory of relativity; such a space is characterized by the fact that, for a suitably chosen coordinate system, the expression
ds² = dx_1² + dx_2² + dx_3² − dx_4²
(1)
belonging to two neighboring points, represents a measurable quantity (the square of the distance), that is, has a real physical meaning. Referred to an arbitrary system, this quantity is expressed as follows:
ds² = g_ik dx_i dx_k
(2)
where the indices run from 1 to 4. The g_ik form a symmetric tensor. If, after carrying out a transformation on field (1), the first derivatives of the g_ik with respect to the coordinates do not vanish, there exists, relative to this coordinate system, a gravitational field in the sense considered above, and indeed a gravitational field of a very special kind. Thanks to Riemann's investigation of _n_ -dimensional metric spaces, this special field can be characterized invariantly:
(1) Riemann's curvature tensor _R iklm_, formed from the coefficients of the metric (2), vanishes.
(2) The trajectory of a mass point relative to the inertial system [relative to which (1) is valid] is a straight line, hence an extremal (geodesic). This last statement, however, is already a characterization of the law of motion based on (2).
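In standard notation (not spelled out in the text above), these two statements read: with the Christoffel symbols formed from the g_ik, the curvature tensor vanishes, and the law of motion is the geodesic equation:

```latex
\Gamma^{i}_{kl} = \tfrac{1}{2}\,g^{is}\!\left(\partial_{k} g_{sl} + \partial_{l} g_{sk} - \partial_{s} g_{kl}\right),
\qquad
R^{i}{}_{klm} = \partial_{l}\Gamma^{i}_{km} - \partial_{m}\Gamma^{i}_{kl}
  + \Gamma^{i}_{sl}\Gamma^{s}_{km} - \Gamma^{i}_{sm}\Gamma^{s}_{kl} = 0,
\qquad
\frac{d^{2}x^{i}}{ds^{2}} + \Gamma^{i}_{kl}\,\frac{dx^{k}}{ds}\,\frac{dx^{l}}{ds} = 0 .
```

In the special coordinate system in which (1) holds, all the Γ vanish and the geodesic equation reduces to d²xⁱ/ds² = 0, a straight line.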
The _general_ law of physical space must be a generalization of the law just described. I assumed then that there are two stages of generalization:
(a) the pure gravitational field;
(b) the general field (in which quantities also occur that correspond in some way to the electromagnetic field).
Case (a) was characterized by the fact that the field can still be represented by a Riemannian metric (2), that is, by a symmetric tensor, but without a representation of the form (1) existing (except in the infinitesimal). This means that in case (a) the Riemann tensor does _not_ vanish. It is clear, however, that in this case a field law must hold that is a generalization (a weakening) of this law. If this [generalized] law is also to be of second differential order and linear in the second derivatives, then only the equation obtained by a single contraction came into consideration as the field equation in case (a):
0 = R_kl = g^im R_iklm
Moreover, it seems natural to assume that in case (a), too, the geodesic line still represents the law of motion of the material point.
At that time it seemed hopeless to me to attempt to represent the total field (b) and to determine its field laws; I preferred, therefore, to set up a provisional formal framework for the representation of the whole of physical reality, something necessary in order to be able to investigate, at least for the time being, the usefulness of the basic idea of general relativity. It went as follows.
In Newton's theory one can write, as the field equation of gravitation,
∆φ = 0
(φ = gravitational potential) at those places where the density ρ of matter vanishes. In general one has to write (Poisson's equation)
∆φ = 4πkρ (ρ = mass density).
In the relativistic theory of the gravitational field, R_ik takes the place of ∆φ. On the right-hand side we then have to replace ρ also by a tensor. Since we know from the special theory of relativity that (inertial) mass equals energy, the tensor of energy density will have to be placed on the right-hand side or, more precisely, the tensor of the total energy density insofar as it does not belong to the pure gravitational field. In this way one arrives at the field equations
R_ik − 1/2 g_ik R = − k T_ik.
The second term on the left-hand side is added for formal reasons; it is written in such a way that its divergence, in the sense of the absolute differential calculus, vanishes identically. The right-hand side is a formal condensation of all those things whose comprehension in the sense of a field theory is still problematic. Of course I did not doubt for a moment that this formulation was merely a provisional expedient for expressing the general principle of relativity, for it was really nothing _more_ than a theory of the gravitational field, artificially isolated from a total field of as yet unknown structure.
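That these field equations really contain Newton's theory as a limiting case can be indicated in outline (a hedged sketch in modern notation; signs and numerical factors depend on conventions not fixed in the text): for a weak static field and slowly moving matter one sets

```latex
g_{ik} = \eta_{ik} + h_{ik},\quad |h_{ik}| \ll 1,\quad h_{44} \approx \frac{2\varphi}{c^{2}},\qquad
T_{44} \approx \rho c^{2}
\;\Longrightarrow\;
\Delta\varphi = 4\pi k \rho \quad\text{for}\quad \kappa = \frac{8\pi k}{c^{4}} .
```

That is, the 44-component of the field equations, linearized about the flat metric, reproduces Poisson's equation, which fixes the constant on the right-hand side in terms of Newton's gravitational constant.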
If anything in the theory outlined, apart from the requirement of the invariance of the equations under the group of continuous coordinate transformations, can claim to be definitive, it is the theory of the limiting case of the pure gravitational field and its relation to the metric structure of space; for this reason, in what immediately follows I shall speak only of the equations of the pure gravitational field.
The peculiarity of these equations lies, on the one hand, in their complicated construction, especially their non-linear character with respect to the field variables and their derivatives, and, on the other, in the almost compelling necessity with which the transformation group determines this complicated field law. If one had stopped at the special theory of relativity, that is, at invariance under the Lorentz group, the field law R_ik = 0 would be invariant within the framework of this narrower group as well; but, from the standpoint of this group, there would at first be no reason for representing gravitation by a structure as complicated as the symmetric tensor g_ik. If, nonetheless, sufficient reasons for doing so were found, an immense number of field laws built from the quantities g_ik would then arise, all of them covariant under Lorentz transformations (though not under the general group). But even if, of all the conceivable Lorentz-invariant laws, one had by chance hit precisely upon the one belonging to the wider group, one would still not have reached the level of understanding achieved through the general principle of relativity; for, from the standpoint of the Lorentz group, one would wrongly have to regard two solutions as physically different if they can be transformed into each other by a non-linear coordinate transformation, that is, if, from the standpoint of the wider group, they are merely different representations of the same field.
One more general remark about the concepts of structure and group. A theory will be judged the more perfect, the simpler its underlying "structure" and the wider the group with respect to which the field equations are invariant. Now one sees that these two demands get in each other's way. According to the special theory of relativity (Lorentz group), for example, a covariant law can be set up for the simplest structure imaginable (a scalar field), whereas in the general theory of relativity (the wider group of continuous coordinate transformations) an invariant field law exists only for the more complicated structure of the symmetric tensor. We have already given _physical_ reasons above for demanding, in physics, invariance under the wider group; from a purely mathematical standpoint I can see no necessity for sacrificing the simplicity of the structure to the generality of the group.
The group of general relativity was the first to demand that the simplest invariant law not be linear and homogeneous in the field variables and their derivatives. This is of fundamental importance, for if the field law is linear (and homogeneous), the sum of two solutions is again a solution; so it is, for example, with Maxwell's field equations for the vacuum. In such a theory one cannot deduce, from the field law alone, an interaction between structures that can separately be represented by solutions of the system. For this reason all earlier theories required, in addition to the field laws, special laws for the motion of material bodies under the influence of the fields. In the relativistic theory of gravitation, it is true, the law of motion (the geodesic line) was originally postulated alongside the field law and independently of it; but it was later realized that the law of motion need not, and must not, be assumed independently, for it is implicitly contained in the law of the gravitational field.
The essence of this in itself complicated situation can be illustrated as follows: a single material point at rest is represented by a gravitational field that is finite and regular everywhere except at the place where the material point sits; there the field has a singularity. If, however, one computes by integration of the field equations the field belonging to two material points at rest, that field possesses, besides the singularities at the positions of the material points, a further line of singular points connecting the two. One can, however, prescribe a motion of the material points such that the gravitational field they determine nowhere becomes singular outside the material points themselves. These motions are precisely the ones described, to a first approximation, by Newton's laws. One may therefore say: the masses move in such a way that the field equation, in the space outside the masses, nowhere determines singularities of the field. This property of the gravitational equations is intimately connected with their non-linearity, which in turn is conditioned by the wider transformation group.
One might object, however: if singularities are permitted at the locations of the material points, what justifies forbidding the appearance of singularities in the rest of space? The objection would be valid if the equations of gravitation had to be regarded as equations of the total field; but the field of a material particle may the less be regarded as a _pure gravitational field_ the closer one approaches the actual location of the particle. If one had the field equation of the total field, one would have to demand that the particles themselves be representable as solutions of the complete field equations, free of singularities _everywhere_. Only then would the general theory of relativity be a _complete_ theory.
Before I turn to the question of completing the general theory of relativity, I must state my position on the most successful physical theory of our time, statistical quantum theory, which about twenty-five years ago took on a consistent logical form (Schrödinger, Heisenberg, Dirac, Born). It is the only present-day theory that permits a unified understanding of the experiences concerning the quantum character of micro-mechanical processes. This theory on the one hand and the theory of relativity on the other are both considered correct in a certain sense, although their fusion has so far resisted all efforts. This is probably why among today's theoretical physicists there exist entirely divergent opinions concerning the theoretical foundation of the physics of the future: will it be a field theory, or will it be an essentially statistical theory? I shall briefly indicate my answers.
Physics is an effort to grasp reality conceptually as something that is thought of as independent of its being perceived. In this sense one speaks of the "physically real". In pre-quantum physics there was no doubt about how this was to be understood: in Newton's theory, the real was represented by material points in space and time; in Maxwell's theory, by a field in space and time. In quantum mechanics the matter is less transparent. If one asks whether a ψ-function of quantum theory represents a real situation in the same sense as a system of material points or an electromagnetic field does, one hesitates between a simple yes and a simple no. Why? What the ψ-function (at a given time) expresses is this: what is the probability of finding a given physical magnitude _q_ (or _p_) within a given interval if I measure it at time _t_. The probability is to be regarded as an empirically determinable, hence certainly "real", quantity, which I can determine by producing the same ψ-function again and again and each time performing a _q_-measurement. But what of the measured value of _q_? Did the individual system concerned already have this value of _q_ before the measurement? To this question there is no definite answer within the framework of the theory, since the measurement is a process involving a finite intervention in the system from outside; one could therefore suppose that the system obtains a definite numerical value for _q_ (or _p_), namely the measured value, only through the measurement itself. For the discussion that follows, let us imagine two physicists _A_ and _B_ who hold different conceptions of the real state described by the ψ-function.
_A_. The individual system has (before the measurement) a definite value of _q_ (or _p_) for all variables of the system, namely _that_ value which is determined in a measurement of the variable. Starting from this conception, _A_ will declare that the ψ-function is not an exhaustive representation of the real state of the system but an incomplete one; it expresses only what we know about the system on the basis of earlier measurements.
_B_. The individual system has (before the measurement) no definite value of _q_ (or _p_). The measured value comes into being only through the act of measurement, in conjunction with the probability assigned to it by the ψ-function. Starting from this conception, _B_ will declare (or at least may declare) that the ψ-function is an exhaustive representation of the real state of the system.
Now let us present these two physicists with the following case. Consider a system that at the time _t_ of our observation consists of two partial systems _S_ 1 and _S_ 2, which at that instant are spatially separated and, in the sense of classical physics, without significant mutual interaction. Suppose the total system is completely described, in the quantum-mechanical sense, by a known ψ-function ψ12. All quantum theorists agree on the following: if I perform a complete measurement on _S_ 1, I obtain, from the results of the measurement and from ψ12, a completely definite ψ-function of the system _S_ 2 (call it ψ2). The character of ψ2 then depends on _what kind_ of measurement I perform on _S_ 1. Now it seems to me that one may speak of the real situation of the partial system _S_ 2. Before the measurement on _S_ 1 we admittedly know even less about this real situation than about a system described by a ψ-function. But on one assumption we should, in my opinion, hold fast unconditionally: the real situation (state) of the system _S_ 2 is independent of what is done with the system _S_ 1, which is spatially separated from it. Yet, depending on the kind of measurement I perform on _S_ 1, I obtain a different ψ2 for the second partial system (ψ2, ψ2′, ψ2″, ...). Now the real state of _S_ 2 must be independent of what happens to _S_ 1. Hence, for the same real state of _S_ 2, different ψ-functions can be found (depending on the choice of measurement on _S_ 1). (This conclusion can be evaded only by either assuming that the measurement on _S_ 1 (telepathically) changes the real state of _S_ 2, or by denying outright that spatially separated things possess independent real states. Both alternatives seem to me entirely unacceptable.)
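The dependence described above can be made concrete with the standard quantum-mechanical expansion of the joint state (a textbook formulation, not Einstein's own notation): expanding ψ12 in the eigenfunctions of whichever observable is measured on _S_ 1 yields, for each choice of observable, a different family of states left over for _S_ 2:

```latex
% Expansion of the joint state in two different measurement bases on S_1:
\psi_{12}(x_1, x_2) \;=\; \sum_n u_n(x_1)\,\psi_2^{(n)}(x_2)
                    \;=\; \sum_m v_m(x_1)\,\tilde{\psi}_2^{(m)}(x_2)
```

Here u_n and v_m are eigenfunctions of two different observables of _S_ 1. Measuring the first observable leaves _S_ 2 in some ψ2^(n); measuring the second leaves it in some ψ̃2^(m). Which family applies is fixed solely by the choice made at _S_ 1, which is exactly the dependence the argument above objects to.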
If physicists _A_ and _B_ accept this reasoning as valid, then _B_ will have to abandon his position that the ψ-function is a complete description of a real situation; for in that case it would be impossible to assign two different ψ-functions to the same situation (of _S_ 2).
The statistical character of the theory would then be a necessary consequence of the incompleteness of the quantum-mechanical description of systems, and there would no longer be any reason to suppose that the future foundation of physics must be based on statistics.
My opinion is that the present quantum theory, under certain fixed basic concepts essentially taken over from classical mechanics, constitutes an optimal formulation of the state of affairs. I believe, however, that this theory offers no useful point of departure for future development. Here my expectations differ from those of most contemporary physicists. They are convinced that the essential features of quantum phenomena (apparently discontinuous and temporally undetermined changes of the state of a system, simultaneously corpuscular and wavelike qualities of the elementary carriers of energy) cannot be explained by means of functions of space for which differential equations hold. Contemporary physicists also think that along this road the atomic structure of matter and radiation cannot be understood, and they expect that the systems of differential equations that would come into consideration for such a theory would not even have solutions that are regular (free of singularities) everywhere in four-dimensional space. Above all, however, they believe that the apparently discontinuous character of elementary processes can be represented only by a statistical theory, in which the discontinuous changes of the systems are reflected in _continuous_ changes of the probabilities of the possible states.
All these remarks seem to me quite impressive; but the question that really matters, I think, is what can be attempted with some prospect of success in the present situation of theory. Here it is my experience with the theory of gravitation that determines my expectations: in my view, its equations have a better chance of asserting something _precise_ than all the other equations of physics. Take, for comparison, Maxwell's equations for empty space. They are formulations corresponding to our experience with infinitely weak electromagnetic fields. Their empirical origin already determines their linear form; but we emphasized above that the true laws cannot be linear. Such linear laws satisfy the superposition principle for their solutions, and thus contain no assertions about the interactions of elementary bodies. The true laws cannot be linear, nor can they be derived from linear ones. From the theory of gravitation I have learned something else as well: no collection of empirical facts, however comprehensive, can ever lead to such complicated equations. A theory can be tested against experience, but there is no way from experience to the construction of a theory. Equations of such complexity as those of the gravitational field can be found only by discovering a logically simple mathematical condition that determines the equations completely, or almost completely. Once one has those sufficiently strong formal conditions, very little factual knowledge is needed to set up the theory; in the case of the equations of gravitation it is the four-dimensionality and the symmetric tensor as the expression of the structure of space which, together with invariance under the group of continuous transformations, determine the equations almost entirely.
Our task, then, is to find the field equations for the total field. The structure to be sought must be a generalization of the symmetric tensor. The group must not be narrower than that of the continuous coordinate transformations. If a richer structure is introduced, the group will no longer determine the equations as strongly as it does when the symmetric tensor is the structure. The finest outcome, therefore, would be to succeed in widening the group once more, in analogy with the step that led from special to general relativity. In particular, I have tried to use the group of complex coordinate transformations, but all attempts of this kind have failed. I have also given up the attempt to increase, openly or covertly, the number of dimensions of space, an enterprise begun by Kaluza which, in its projective variant, still has adherents today. We shall restrict ourselves to four-dimensional space and to the group of real continuous coordinate transformations. After many years of fruitless searching, I consider the solution sketched below the most satisfactory from a logical standpoint.
In place of the symmetric tensor g_ik (g_ik = g_ki), a non-symmetric tensor g_ik is introduced. This quantity is composed of a symmetric part s_ik and a real or purely imaginary antisymmetric part a_ik, as follows:
g_ik = s_ik + a_ik.
From the standpoint of the group, this combination of s and a is arbitrary, since the tensors s and a each have tensor character separately. It turns out, however, that these g_ik (taken as a whole) play, in the construction of the new theory, a role analogous to that of the symmetric g_ik in the theory of the pure gravitational field.
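The decomposition just stated is the standard splitting of a rank-2 tensor into its symmetric and antisymmetric parts; as a sketch:

```latex
% Splitting a non-symmetric tensor g_{ik} into symmetric and antisymmetric parts:
s_{ik} = \tfrac{1}{2}\left(g_{ik} + g_{ki}\right), \qquad
a_{ik} = \tfrac{1}{2}\left(g_{ik} - g_{ki}\right),
\qquad
g_{ik} = s_{ik} + a_{ik}, \qquad s_{ik} = s_{ki}, \qquad a_{ik} = -a_{ki}.
```

Each part transforms as a tensor in its own right, which is why, from the standpoint of the group alone, combining them into a single g_ik is an arbitrary step.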
This generalization of the structure of space also seems natural from the standpoint of our physical knowledge, since we know that the electromagnetic field has to do with an antisymmetric tensor.
For the theory of gravitation it is further essential that from the symmetric g_ik one can form the scalar density √(−g) as well as the contravariant tensor g^ik according to the definition
g_ik g^il = δ_k^l (δ_k^l = Kronecker tensor).
These structures can be defined, in exact correspondence, for the non-symmetric g_ik as well, including tensor densities.
In the theory of gravitation it is further essential that, for a given symmetric field g_ik, one can define a Γ^l_ik that is symmetric in the lower indices and that, geometrically considered, governs the parallel displacement of a vector. Analogously, for the non-symmetric g_ik a non-symmetric Γ^l_ik can be defined according to the formula:
g_ik,l − g_sk Γ^s_il − g_is Γ^s_lk = 0
(A)
which agrees with the corresponding relation for the symmetric g, except that here, of course, one must pay attention to the position of the indices in g and Γ.
Just as in the real theory (with symmetric g_ik), a curvature R^i_klm can be formed from the Γ, and from it a contracted curvature R_kl. Finally, using a variational principle together with (A), compatible field equations can be found:
(B1)
(B2)
_R ik_ = 0
(C1)
(C2)
Each of the two equations (B1), (B2) is a consequence of the other if (A) is satisfied. R_(kl) denotes the symmetric part of R_kl, R_[kl] the antisymmetric part.
In the case where the antisymmetric part of g_ik vanishes, these formulas reduce to (A) and (C1), the case of the pure gravitational field.
I believe these equations constitute the most natural generalization of the equations of gravitation. Testing their physical usefulness is an extraordinarily difficult task, since approximations are of no help. The question is: what solutions of these equations are free of singularities in all of space?
I hope this account has achieved its purpose: to show the reader not only how the efforts of a lifetime hang together, but also why they have encouraged me to hold expectations of a definite kind.
# VII
## _MY LATER YEARS_ (ANTHOLOGY)
This collection of essays was written during the last twenty years of Einstein's life, after his greatest contributions to science and after he had achieved international celebrity as an eminent thinker of his time. Unlike in his earlier writings, Einstein no longer sought to explain the basic workings of his greatest achievement (the theory of relativity), but to present a broader historical perspective on the development of physics. In 1936, when Einstein wrote the longest and most detailed of these essays, "Physics and Reality", the scientific world was undergoing a series of revolutions based on the new understanding of both Einstein's theory of relativity and quantum mechanics.
Although Einstein was a key figure in the development of quantum physics through his 1905 paper on the photoelectric effect, very few of his best-known writings focus on it. Unlike relativity, which provided a deterministic explanation of physical phenomena, quantum mechanics is probabilistic at its foundations, which Einstein found difficult to accept. Consider what quantum physics says: a particle can exist in two states simultaneously and is forced into one particular (and, moreover, random) outcome only when the system is observed. Such systems are so incompatible with the macroscopic world that Einstein proposed that if we could probe microscopic phenomena at the smallest scales, we would find deterministic relations.
Nor did he look kindly on the fact that quantum mechanics requires absolute space and time, concepts rejected by his own theory of relativity. A year earlier, Einstein, Podolsky, and Rosen had argued that the two theories created a paradox.
Two subatomic particles created in a high-energy physics experiment would remain linked to each other, so that observing one would "force" the other, even at remote distances, into a particular quantum state. This seemed to suggest that, since the effect had to occur instantaneously, a signal between the two must travel faster than light; and relativity rules out faster-than-light travel. The modern interpretation holds that the Einstein-Podolsky-Rosen paradox is resolved by the fact that no information flows from one particle to the other.
It is clear from his writings that Einstein was perfectly aware of standing in the midst of a revolution (a revolution he himself, in large part, had helped to bring about). His concerns about the philosophical problems of relativity and quantum mechanics were eventually addressed by the development of relativistic quantum mechanics and quantum field theory, which in turn form the basis of string theory; and string theory might yet fulfill Einstein's dream of unifying the forces of physics.
### 1
### THE THEORY OF RELATIVITY*
Mathematics deals exclusively with the relations of concepts to each other, without regard to their relation to experience. Physics, too, deals with mathematical concepts; however, these concepts attain physical content only through the clear determination of their relation to the objects of experience. This is in particular the case for the concepts of motion, space, and time.
The theory of relativity is that physical theory which is based on a consistent physical interpretation of these three concepts. The name "theory of relativity" is connected with the fact that motion, from the standpoint of possible experience, always appears as the _relative_ motion of one object with respect to another (e.g., of a car with respect to the ground, or of the earth with respect to the sun and the fixed stars). Motion is never observable as "motion with respect to space" or, as it has been expressed, as "absolute motion". The "principle of relativity" in its widest sense is contained in the statement: the totality of physical phenomena is of such a character that it gives no basis for the introduction of the concept of "absolute motion"; or, shorter but less precise: there is no absolute motion.
It might seem that our insight would gain little from such a negative statement. In reality, however, it is a strong restriction on the (conceivable) laws of nature. In this sense there exists an analogy between the theory of relativity and thermodynamics. The latter, too, is based on a negative statement: "There is no perpetuum mobile."
The development of the theory of relativity proceeded in two steps, the "special theory of relativity" and the "general theory of relativity". The latter presupposes the validity of the former as a limiting case and is its consistent continuation.
#### THE SPECIAL THEORY OF RELATIVITY
THE PHYSICAL INTERPRETATION OF SPACE AND TIME IN CLASSICAL MECHANICS
Geometry, from a physical standpoint, consists of the totality of the laws according to which rigid bodies mutually at rest can be placed with respect to one another (e.g., a triangle consists of three rods whose ends touch permanently). It is assumed that with such an interpretation the Euclidean laws hold. "Space" in this interpretation is in principle an infinite rigid body (or skeleton) to which the position of all other bodies is referred (body of reference). Analytic geometry (Descartes) uses as the body of reference, which represents space, three mutually perpendicular rigid rods on which the coordinates (x, y, z) of points in space are measured in the well-known manner as perpendicular projections (with the aid of a rigid unit-measure).
Physics deals with "events" in space and time. To each event belongs, besides its place coordinates x, y, z, a time value t. The latter was considered measurable by a clock (an ideal periodic process) of negligible spatial extent. This clock C is to be regarded as at rest at one point of the coordinate system, e.g., at the coordinate origin (x = y = z = 0). The time of an event taking place at a point P (x, y, z) is then defined as the time shown on the clock C simultaneously with the event. Here the concept "simultaneous" was assumed to be physically meaningful without a special definition. This lack of exactness appears harmless only because, with the help of light (whose speed is practically infinite from the standpoint of everyday experience), the simultaneity of spatially distant events can apparently be determined immediately. The special theory of relativity removes this lack of precision by defining simultaneity physically with the use of light signals. The time t of the event in P is the reading of the clock C at the moment of arrival of a light signal emitted from the event, corrected with regard to the time needed for the light signal to travel the distance. This correction presupposes (postulates) that the velocity of light is constant.
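The correction just described amounts to the following assignment, assuming a constant light speed c and writing r for the distance from P to the clock C:

```latex
% Time assigned to an event at P, at distance r from the clock C;
% t_C is the reading of C when the light signal from the event arrives.
t \;=\; t_C \;-\; \frac{r}{c}
```

A light signal emitted at the event reaches C after travelling for a time r/c; subtracting that travel time from the arrival reading recovers the time of the event itself.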
This definition reduces the concept of simultaneity of spatially distant events to that of the simultaneity of events happening at the same place (coincidence), namely the arrival of the light signal at C and the reading of C.
Classical mechanics is based on Galileo's principle: a body is in rectilinear and uniform motion as long as other bodies do not act on it. This statement cannot be valid for arbitrarily moving coordinate systems. It can claim validity only for so-called "inertial systems". Inertial systems are in rectilinear and uniform motion with respect to one another. In classical physics, laws claim validity only with respect to all inertial systems (special principle of relativity).
It is now easy to understand the dilemma that led to the special theory of relativity. Experience and theory gradually led to the conviction that light in empty space always travels with the same velocity c, independent of its color and of the state of motion of the light source (principle of the constancy of the velocity of light—in what follows called the "L-principle"). Now elementary intuitive considerations seem to show that the same light ray cannot move with the same velocity c with respect to all inertial systems: the L-principle seems to contradict the special principle of relativity.
It turns out, however, that this is only an apparent contradiction, resting essentially on the prejudice of the absolute character of time or, rather, of the simultaneity of distant events. We have just seen that the x, y, z and t of an event can, for the moment, be defined only with respect to a certain chosen coordinate system (inertial system). The transformation of the x, y, z, t of events that must be carried out in passing from one inertial system to another (coordinate transformation) is a problem that cannot be solved without special physical assumptions. However, the following postulate is exactly sufficient for a solution: _the L-principle holds for all inertial systems_ (application of the special principle of relativity to the L-principle). The transformations thus defined, which are linear in x, y, z, t, are called Lorentz transformations. Lorentz transformations are formally characterized by the demand that the expression
dx² + dy² + dz² − c²dt²,
formed from the coordinate differences dx, dy, dz, dt of two infinitesimally close events, be invariant (i.e., that through the transformation it goes over into the _same_ expression formed from the coordinate differences in the new system).
With the help of the Lorentz transformations the special principle of relativity can be expressed thus: the laws of nature are invariant with respect to Lorentz transformations (i.e., a law of nature does not change its form if one introduces into it a new inertial system with the help of a Lorentz transformation of x, y, z, t).
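For a boost with velocity v along the x-axis, the Lorentz transformation takes the familiar explicit form (a standard result, not spelled out in the text above):

```latex
% Lorentz boost along x with velocity v; \gamma is the Lorentz factor.
x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
```

Substituting the corresponding differentials into dx′² + dy′² + dz′² − c²dt′² and expanding reproduces dx² + dy² + dz² − c²dt², which is precisely the invariance requirement stated above.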
The special theory of relativity has led to a clear understanding of the physical concepts of space and time and, in connection with this, to a recognition of the behavior of moving measuring rods and clocks. It has, in principle, eliminated the concept of absolute simultaneity and thereby also that of instantaneous action at a distance in Newton's sense. It has shown how the law of motion must be modified in dealing with motions that are not negligibly small compared with the velocity of light. It has led to a formal clarification of Maxwell's equations of the electromagnetic field; in particular it has led to an understanding of the essential oneness of the electric and the magnetic field. It has unified the laws of conservation of momentum and energy into a single law and has demonstrated the equivalence of mass and energy. From a formal point of view the achievement of the special theory of relativity may be characterized thus: it has _shown_ generally the role that the universal constant c (velocity of light) plays in the laws of nature, and has demonstrated that there exists a close connection between the form in which time on the one hand and the spatial coordinates on the other enter into the laws of nature.
#### THE GENERAL THEORY OF RELATIVITY
The special theory of relativity retained the basis of classical mechanics in one fundamental point, namely the statement: the laws of nature are valid only with respect to inertial systems. The "permissible" transformations for the coordinates (i.e., those that leave the form of the laws unchanged) are _exclusively_ the (linear) Lorentz transformations. Is this restriction really grounded in physical facts? The following argument convincingly denies it.
Principle of equivalence. A body has an inertial mass (resistance to acceleration) and a gravitational mass (which determines the weight of the body in a given gravitational field, e.g., the field at the surface of the earth). Experience says that these two quantities, so different according to their definition, are measured by one and the same number. There must be a deeper reason for this. The fact can also be described thus: in a gravitational field, different masses receive the same acceleration. Finally, it can also be expressed thus: bodies in a gravitational field behave as they would in the absence of a gravitational field if, in the latter case, the system of reference used is a uniformly accelerated coordinate system (instead of an inertial system).
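The equality of the two masses can be put as a one-line calculation; here m_i and m_g denote inertial and gravitational mass and g the field strength:

```latex
% Newton's second law with a gravitational force:
m_i\, a = m_g\, g
\quad\Longrightarrow\quad
a = \frac{m_g}{m_i}\, g = g \qquad \text{(since experience gives } m_g = m_i\text{)}.
```

Because the ratio m_g/m_i is the same for all bodies, the acceleration a is independent of the body, which is the statement that in a gravitational field different masses receive the same acceleration.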
There seems, therefore, to be no reason to forbid the following interpretation of the latter case. One regards the system as being "at rest" and the "apparent" gravitational field existing with respect to it as a "real" one. Of course, this gravitational field "generated" by the acceleration of the coordinate system would be of unlimited extent, so that it could not be caused by gravitational masses in a finite region; but if we are looking for a field-type theory, this fact need not deter us. With this interpretation the inertial system loses its meaning, and one has an "explanation" for the equality of gravitational and inertial mass (the same property of matter appears as weight or as inertia depending on the mode of description).
Considered formally, the admission of a coordinate system that is accelerated with respect to the original "inertial" coordinates means the admission of non-linear coordinate transformations, and with it a powerful widening of the idea of invariance, i.e., of the principle of relativity.
First, a detailed discussion, using the results of the special theory of relativity, shows that with such a generalization the coordinates can no longer be interpreted directly as the results of measurements. Only the coordinate differences, together with the field quantities describing the gravitational field, determine measurable distances between events. Once one has been forced to admit non-linear coordinate transformations as transformations between equivalent coordinate systems, the simplest demand appears to be to admit all continuous coordinate transformations (which form a group), i.e., to admit arbitrary curvilinear coordinate systems in which the fields are described by regular functions (principle of general relativity).
It is now not difficult to understand why the principle of general relativity (_on the basis of the principle of equivalence_) has led to a theory of gravitation. There is a special kind of space whose physical structure (field) we may presuppose as precisely known on the basis of the special theory of relativity. This is empty space without an electromagnetic field and without matter. It is completely determined by its "metric" property: let dx₀, dy₀, dz₀, dt₀ be the coordinate differences of two infinitesimally close points (events); then
(1)

_ds_² = _dx_₀² + _dy_₀² + _dz_₀² − _c_² _dt_₀²
es una cantidad medible que es independiente de la elección concreta del sistema inercial. Si uno introduce en este espacio las nuevas coordenadas _x_₁, _x_₂, _x_₃, _x_₄ a través de una transformación de coordenadas general, entonces la cantidad _ds_² para el mismo par de puntos tiene una expresión de la forma

_ds_² = Σ _g_ᵢₖ _dx_ᵢ _dx_ₖ
donde _g_ᵢₖ = _g_ₖᵢ. Las _g_ᵢₖ, que forman un «tensor simétrico» y son funciones continuas de _x_₁, _x_₂, _x_₃, _x_₄, describen entonces, según el principio de equivalencia, un campo gravitatorio de un tipo especial (a saber, uno que puede re-transformarse en la forma (1)). De las investigaciones de Riemann sobre espacios métricos pueden darse exactamente las propiedades matemáticas de este campo _g_ᵢₖ («condición de Riemann»). Sin embargo, lo que estamos buscando son las ecuaciones satisfechas por campos gravitatorios «generales». Es natural suponer que también pueden describirse como campos tensoriales del tipo _g_ᵢₖ, que en general _no_ admiten una transformación en la forma (1), i.e., que no satisfacen la «condición de Riemann», sino condiciones más débiles, que, como la condición de Riemann, son independientes de la elección de coordenadas (i.e., son generalmente invariantes). Una simple consideración formal lleva a condiciones más débiles que están íntimamente relacionadas con la condición de Riemann. Estas condiciones son las ecuaciones mismas del campo gravitatorio puro (en el exterior de la materia y en ausencia de un campo electromagnético).
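El paso de la forma (1) a la forma general puede resumirse en notación moderna; el siguiente esbozo en LaTeX (añadido aquí como aclaración, no parte del texto original) muestra la expresión del intervalo y el caso especial que recupera (1):

```latex
ds^2 = \sum_{i,k=1}^{4} g_{ik}\, dx_i\, dx_k, \qquad g_{ik} = g_{ki}.
% Caso especial "sin campo": con (x_1, x_2, x_3, x_4) = (x, y, z, t) y
%   g_{ik} = \operatorname{diag}(1,\,1,\,1,\,-c^2)
% se recupera la forma (1):
%   ds^2 = dx^2 + dy^2 + dz^2 - c^2\, dt^2.
```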
Estas ecuaciones dan las ecuaciones de Newton de la mecánica gravitatoria como una ley aproximada, y además dan ciertos pequeños efectos que han sido confirmados por la observación (desviación de la luz por el campo gravitatorio de una estrella, influencia del campo gravitatorio sobre la frecuencia de la luz emitida, lenta rotación de las trayectorias elípticas de los planetas, movimiento del perihelio del planeta Mercurio). Dan además una explicación para el movimiento en expansión de los sistemas galácticos, que se manifiesta en el desplazamiento hacia el rojo de la luz emitida desde estos sistemas.
La teoría de la relatividad general está todavía incompleta en tanto que sólo ha sido capaz de aplicar satisfactoriamente el principio de relatividad general a campos gravitatorios, pero no al campo total. Aun no sabemos con certeza mediante qué mecanismo matemático debe describirse el campo total en el espacio y cuáles son las leyes generales invariantes a las que está sometido este campo total. Una cosa, no obstante, parece cierta: que el principio de relatividad general se mostrará como una herramienta necesaria y efectiva para la solución de los problemas del campo total.
### 2
### _E = mc_ ²*
Para entender la ley de la equivalencia de masa y energía debemos retroceder hasta dos principios de conservación o «balance» que, independientes uno de otro, mantenían un alto lugar en la física prerelativista. Estos eran el principio de conservación de la energía y el principio de conservación de la masa. El primero de estos, avanzado por Leibniz ya en el siglo XVII, fue desarrollado en el siglo XIX esencialmente como un corolario de un principio de mecánica.
**Dibujo del manuscrito del Dr. Einstein**
Consideremos, por ejemplo, un péndulo cuya masa oscila de un lado a otro entre los puntos _A_ y _B_. En estos puntos la masa _m_ se encuentra a una altura que supera en una cantidad _h_ a la altura a la que se encuentra en el punto _C_ , el punto más bajo de la trayectoria (ver el dibujo). Por otra parte, en _C_ ha desaparecido esa elevación y en su lugar la masa tiene una velocidad _v_. Es como si la elevación en altura pudiera convertirse totalmente en velocidad, y viceversa. La relación exacta se expresaría como _mgh_ = ½ _mv_ ², donde _g_ representa la aceleración de la gravedad. Lo interesante aquí es que esta relación es independiente tanto de la longitud del péndulo como de la forma de la trayectoria en la que se mueve la masa.
Lo importante es que algo permanece constante a lo largo del proceso, y ese algo es la energía. En _A_ y en _B_ es una energía de posición, o energía «potencial»; en _C_ es una energía de movimiento, o energía «cinética». Si este concepto es correcto, entonces la suma _gh_ + ½ _v_ ² debe tener el mismo valor para cualquier posición del péndulo, si se entiende que ahora _h_ representa la altura por encima de _C_ y _v_ representa la velocidad en ese punto de la trayectoria del péndulo. La generalización de este principio nos da la ley de conservación de la energía mecánica. Pero ¿qué sucede cuando la fricción frena el péndulo?
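El argumento del péndulo puede condensarse en una línea; el siguiente esbozo en LaTeX (notación moderna añadida aquí, no del texto original) resume el balance:

```latex
% Conservación de la energía mecánica del péndulo
% (h: altura sobre C; v: velocidad en ese punto de la trayectoria):
g h + \tfrac{1}{2} v^2 = g h_{\max} = \tfrac{1}{2} v_{\max}^2 = \text{const.},
% de donde, en el punto más bajo C (h = 0):
v_{\max} = \sqrt{2\, g\, h_{\max}}.
```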
La respuesta a esto se encontró en el estudio de los fenómenos térmicos. Dicho estudio, basado en la hipótesis de que el calor es una sustancia indestructible que fluye de un objeto más caliente a otro más frío, parecía darnos un principio de «conservación del calor». Por otra parte, desde tiempos inmemoriales se ha sabido que el calor podía producirse por fricción, como en los taladros para hacer fuego que utilizan los indios. Durante mucho tiempo los físicos fueron incapaces de explicar este tipo de «producción» de calor. Sus dificultades sólo fueron superadas cuando se estableció de forma satisfactoria que, para producir una cantidad dada de calor por fricción, había que gastar una cantidad de energía exactamente proporcional. Así llegamos a un principio de «equivalencia de trabajo y calor». En nuestro péndulo, por ejemplo, la energía mecánica es convertida poco a poco en calor por la fricción.
De esa manera, los principios de conservación de las energías mecánica y térmica se fundieron en uno. Inmediatamente los físicos se convencieron de que el principio de conservación podría ampliarse todavía más para incluir procesos químicos y electromagnéticos —en resumen, podría aplicarse a todos los campos. Parecía que en nuestro sistema físico había una suma total de energías que permanecía constante a través de todos los cambios que pudieran ocurrir.
Llegamos ahora al principio de conservación de la masa. La masa se define por la resistencia que opone un cuerpo a su aceleración (masa inerte). También se mide por el peso del cuerpo (masa pesante). Que estas dos definiciones radicalmente diferentes lleven al mismo valor para la masa de un cuerpo es, en sí mismo, un hecho sorprendente. De acuerdo con el principio —a saber, que las masas permanecen invariables bajo cualquier cambio físico o químico— la masa parecía ser la cualidad esencial (pues no varía) de la materia. El calentamiento, la fusión, la vaporización o la combinación en compuestos químicos no cambiarían la masa total.
Los físicos aceptaban este principio hasta hace unas pocas décadas. Pero el mismo se mostró inadecuado frente a la teoría de la relatividad especial. Por consiguiente se fusionó con el principio de la energía, igual que, unos 60 años antes, el principio de conservación de la energía mecánica se había combinado con el principio de conservación del calor. Podríamos decir que el principio de la conservación de la energía, que había absorbido previamente al de conservación del calor, procedía ahora a absorber al de conservación de la masa, quedando como único dueño del campo.
Es costumbre expresar la equivalencia de masa y energía (aunque de forma algo inexacta) por la fórmula _E_ = _mc_ ², en la que _c_ representa la velocidad de la luz, unos 300.000 kilómetros por segundo. _E_ es la energía contenida en un cuerpo en reposo; _m_ es su masa. La energía que pertenece a la masa _m_ es igual a dicha masa multiplicada por el cuadrado de la enorme velocidad de la luz, lo que supone una enorme cantidad de energía por cada unidad de masa.
Pero si cada gramo de material contiene esta tremenda energía, ¿por qué pasó tanto tiempo inadvertida? La respuesta es bastante simple: mientras nada de la energía se cede al exterior, no puede ser observada. Es como si un hombre que fuera fabulosamente rico nunca gastara o diera un céntimo; nadie podría decir cuán rico era.
Ahora bien, podemos invertir la relación y ver que un incremento de _E_ en la cantidad de energía debe ir acompañado por un incremento de _E_/_c_ ² en la masa. Puedo fácilmente suministrar energía a la masa; por ejemplo, calentándola hasta que su temperatura suba 10 grados. Entonces, ¿por qué no medir el incremento de masa, o el incremento de peso, relacionado con este cambio? El problema aquí es que en el incremento de masa el enorme factor _c_ ² aparece en el denominador de la fracción. En tal caso el incremento es demasiado pequeño para ser medido directamente; ni siquiera con la balanza más sensible.
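Para ilustrar el orden de magnitud puede hacerse la cuenta explícita. Los valores numéricos del siguiente esbozo (1 kg de agua, calor específico ≈ 4186 J/(kg·K)) son supuestos ilustrativos añadidos aquí, no del texto:

```latex
\Delta m = \frac{\Delta E}{c^2}, \qquad
\Delta E = m\, c_{\text{agua}}\, \Delta T
         \approx 1 \times 4186 \times 10
         \approx 4{,}2 \times 10^{4}\ \text{J};
% con c^2 \approx 9 \times 10^{16}\ \text{m}^2/\text{s}^2:
\Delta m \approx \frac{4{,}2 \times 10^{4}}{9 \times 10^{16}}
         \approx 4{,}7 \times 10^{-13}\ \text{kg},
% muy por debajo de la sensibilidad de cualquier balanza.
```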
Para que un incremento de masa sea medible, el cambio en energía por unidad de masa debe ser extraordinariamente grande. Sólo sabemos de un proceso en el que se liberan tales cantidades de energía por unidad de masa: la desintegración radiactiva. Esquemáticamente, el proceso va así: Un átomo de masa _M_ se divide en dos átomos de masas _M'_ y _M''_ , que se separan con enorme energía cinética. Si imaginamos que estas masas llegan al reposo —es decir, si les quitamos su energía de movimiento— entonces, consideradas juntas, son esencialmente más pobres en energía que lo era el átomo original. Según el principio de equivalencia, la suma de las masas _M' + M''_ de los productos de la desintegración debe ser también algo menor que la masa original _M_ del átomo que se ha desintegrado, en contradicción con el viejo principio de conservación de la masa. La diferencia relativa de las dos es del orden de 1/10 de un 1 por 100.
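El balance de la desintegración descrito arriba se expresa así (esbozo en notación moderna, añadido como aclaración):

```latex
M c^2 = (M' + M'')\, c^2 + E_{\text{cin}}
\;\Longrightarrow\;
M - (M' + M'') = \frac{E_{\text{cin}}}{c^2} > 0,
% con una diferencia relativa [M - (M' + M'')] / M
% del orden de 10^{-3}, como indica el texto.
```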
En realidad no podemos pesar los átomos individualmente. Sin embargo, hay métodos indirectos para medir sus pesos exactamente. Análogamente, podemos determinar las energías cinéticas que son transferidas a los productos de la desintegración _M'_ y _M''_. Así se ha hecho posible poner a prueba y confirmar la fórmula de la equivalencia. Asimismo, las leyes nos permiten calcular por adelantado, a partir de pesos atómicos determinados con precisión, cuánta energía será liberada en cualquier desintegración atómica en la que pensemos. La ley no dice nada, por supuesto, acerca de si —o cómo— puede producirse la reacción de desintegración.
Lo que sucede puede ilustrarse con la ayuda de nuestro hombre rico. El átomo _M_ es un rico avaro que durante su vida no dona dinero ( _energía_ ). Pero en su testamento deja su fortuna a sus hijos _M'_ y _M''_ con la condición de que donen a la comunidad una pequeña cantidad, menos de una milésima parte del legado total ( _energía o masa_ ). Los hijos juntos tienen algo menos de lo que tenía el padre ( _la suma de las masas M' + M'' es algo menor que la masa M del átomo radiactivo_ ). Pero la parte cedida a la comunidad, aunque relativamente pequeña, es aún tan enormemente grande ( _considerada como energía cinética_ ) que conlleva una gran amenaza de catástrofe. Evitar esa amenaza se ha convertido en el problema más urgente de nuestro tiempo.
### 3
### ¿QUÉ ES LA TEORÍA DE LA RELATIVIDAD?*
Con mucho gusto accedo a la petición que me hace su colega de escribir algunas palabras sobre relatividad para _The Times_. Tras la lamentable ruptura del diálogo entre los hombres de ciencia, aprovecho esta oportunidad para expresar mis sentimientos de alegría y mi gratitud hacia los astrónomos y físicos de Inglaterra. Es propio de la gran y orgullosa tradición del trabajo científico en su país que científicos eminentes dediquen tiempo y esfuerzo, y sus instituciones científicas no ahorren gastos, para comprobar las implicaciones de una teoría que fue completada y publicada durante la Guerra en la tierra de sus enemigos. Aunque la investigación de los efectos del campo gravitatorio del Sol sobre los rayos luminosos es una cuestión objetiva, yo no puedo dejar de expresar mi agradecimiento personal a mis colegas ingleses por su trabajo, pues sin éste difícilmente hubiera podido ver comprobada en el curso de mi vida la consecuencia más importante de mi teoría.
Podemos distinguir varios tipos de teorías en física. La mayoría de ellas son constructivas. Intentan construir una imagen de los fenómenos más complejos a partir de materiales de un esquema formal relativamente simple. Así, la teoría cinética de los gases trata de reducir los procesos mecánicos, térmicos y difusivos a movimientos de moléculas; es decir, trata de construirlos a partir de la hipótesis del movimiento molecular. Cuando decimos que hemos conseguido entender un conjunto de procesos naturales, invariablemente queremos decir que se ha encontrado una teoría constructiva que abarca los procesos en cuestión.
Además de esta clase muy importante de teorías, existe una segunda clase a la que llamaré «teorías de principio». Éstas utilizan el método analítico, no el sintético. Los elementos que constituyen su base y punto de partida son características generales de procesos naturales que no se construyen hipotéticamente sino que se descubren empíricamente; estos principios dan lugar a criterios formulados matemáticamente que deben ser satisfechos por los procesos individuales o por sus representaciones teóricas. Por ejemplo, la ciencia de la termodinámica parte de un hecho de experiencia universal, como es la imposibilidad del movimiento perpetuo, y a partir de ello trata de deducir, por medios analíticos, conexiones necesarias que deben satisfacer los sucesos individuales.
Las ventajas de la teoría constructiva son la completitud, la adaptabilidad y la claridad; las de la teoría de principios son la perfección lógica y la seguridad de los fundamentos.
La teoría de la relatividad pertenece a la segunda clase. Para captar su naturaleza debemos ante todo familiarizarnos con los principios en los que está basada. Sin embargo, antes de entrar en ello debo recordar que la teoría de la relatividad se parece a un edificio de dos plantas, la teoría especial y la teoría general. La teoría especial, sobre la que descansa la teoría general, se aplica a todos los fenómenos físicos con excepción de la gravitación; la teoría general proporciona la ley de gravitación y sus relaciones con las otras fuerzas de la naturaleza.
Es bien sabido desde los tiempos de la Grecia antigua que para describir el movimiento de un cuerpo se necesita un segundo cuerpo al que referir el movimiento del primero. El movimiento de un vehículo se considera en referencia a la superficie de la Tierra; el de un planeta se refiere a la totalidad de las estrellas fijas visibles. En física se denomina sistema de coordenadas al cuerpo al que están referidos espacialmente los sucesos. Las leyes de la mecánica de Galileo y Newton, por ejemplo, sólo pueden formularse con la ayuda de un sistema de coordenadas.
Sin embargo, para que las leyes de la mecánica sean válidas el estado de movimiento del sistema de coordenadas no puede escogerse arbitrariamente: debe estar libre de rotación y aceleración. Un sistema de coordenadas admisible en mecánica se denomina «sistema inercial». Según la mecánica, el estado de movimiento de un sistema inercial no está unívocamente determinado por la naturaleza. Por el contrario, es válida la siguiente definición: un sistema de coordenadas que se mueve uniformemente y en línea recta con respecto a un sistema inercial es también un sistema inercial. Se entiende por «principio de relatividad especial» la generalización de esta definición para incluir cualquier suceso natural: así, toda ley universal de la naturaleza que es válida con relación a un sistema de coordenadas _C_ , también debe ser válida, en la misma forma, con relación a un sistema de coordenadas _C_ ' que está en movimiento de traslación uniforme con respecto a _C_.
El segundo principio sobre el que descansa la relatividad especial es el «principio de constancia de la velocidad de la luz en el vacío». Este principio afirma que la luz en el vacío siempre tiene una velocidad de propagación determinada (independiente del estado de movimiento del observador o de la fuente de la luz). La confianza que tienen los físicos en este principio deriva de los éxitos conseguidos por la electrodinámica de Clerk Maxwell y Lorentz.
Los dos principios mencionados están firmemente apoyados por la experiencia, aunque parecen difíciles de reconciliar lógicamente. La teoría de la relatividad especial consiguió reconciliarlos mediante una modificación de la cinemática —i.e., de la doctrina de las leyes relativas al espacio y el tiempo (desde el punto de vista de la física)—. Quedó claro que hablar de la simultaneidad de dos sucesos no tenía significado salvo en el caso en que se refiriesen a un mismo sistema de coordenadas, y que la forma de los aparatos de medida y la velocidad a la que se mueven los relojes depende de su estado de movimiento con respecto al sistema de coordenadas.
Pero la vieja física, incluidas las leyes de movimiento de Galileo y Newton, no encajaba en la dinámica relativista sugerida. De esta última se seguían condiciones matemáticas generales a las que debían conformarse las leyes naturales si los dos principios antes mencionados son realmente válidos. Había que adaptar la física a éstos. En particular, los científicos llegaron a una nueva ley de movimiento para masas puntuales (en movimiento rápido) que fue admirablemente confirmada en el caso de partículas cargadas eléctricamente. El resultado más importante de la teoría de la relatividad especial concernía a la masa inerte de sistemas corpóreos. Resultó que la inercia de un sistema depende necesariamente de su contenido de energía, y esto llevó directamente a la idea de que la masa inerte es simplemente energía latente. El principio de conservación de la masa perdió su independencia y se fusionó con el de conservación de la energía.
No obstante, la teoría de la relatividad especial, que era simplemente un desarrollo sistemático de la electrodinámica de Clerk Maxwell y Lorentz, tenía mayor alcance. La independencia de las leyes físicas del estado de movimiento del sistema de coordenadas, ¿debería estar restringida al movimiento de traslación uniforme de unos sistemas de coordenadas con respecto a otros? ¿Qué tiene que ver la naturaleza con nuestros sistemas de coordenadas y su estado de movimiento? Si para describir la naturaleza es necesario hacer uso de un sistema de coordenadas arbitrariamente introducido por nosotros, entonces la elección de su estado de movimiento no debería estar sometida a ninguna restricción: las leyes deberían ser totalmente independientes de esta elección (principio de relatividad general).
El establecimiento de este principio de relatividad general se hace más fácil por un hecho de experiencia conocido desde hace tiempo, a saber, que el peso y la inercia de un cuerpo están controlados por la misma constante. (Igualdad de masas inerte y gravitatoria.) Imaginemos un sistema de coordenadas que está rotando uniformemente con respecto a un sistema inercial en el sentido newtoniano. Las fuerzas centrífugas que se manifiestan en dicho sistema deben ser consideradas, según las enseñanzas de Newton, como efectos de la inercia. Pero dichas fuerzas centrífugas son, exactamente igual que las fuerzas gravitatorias, proporcionales a las masas de los cuerpos. ¿No debería ser posible en este caso considerar el sistema de coordenadas estacionario y las fuerzas centrífugas como fuerzas gravitatorias? Ésta parece la visión obvia, pero la mecánica clásica la prohíbe.
Esta breve consideración sugiere que una teoría de la relatividad general debe proporcionar las leyes de la gravitación, y las consecuencias de la idea han justificado nuestras esperanzas.
Pero el camino era más espinoso de lo que se podía suponer, porque exigía el abandono de la geometría euclidiana. Es decir, las leyes de acuerdo con las cuales pueden disponerse los cuerpos en el espacio no están en completo acuerdo con las leyes espaciales que la geometría euclidiana atribuye a los cuerpos. Eso es lo que queremos decir cuando hablamos de la «curvatura del espacio». Los conceptos fundamentales de «línea recta», «plano», etc., pierden con ello su significado preciso en física.
En la teoría de la relatividad general la doctrina del espacio y el tiempo, o cinemática, ya no figura como un fundamento independiente del resto de la física. El comportamiento geométrico de los cuerpos y el movimiento de los relojes depende de los campos gravitatorios, que a su vez son producidos por la materia.
La nueva teoría de la gravitación difiere considerablemente, en lo que concierne a los principios, de la teoría de Newton. Pero sus resultados prácticos concuerdan tan bien con los de la teoría de Newton que es difícil encontrar criterios para distinguirlas que sean accesibles a la experiencia. Hasta ahora se han encontrado los siguientes:
En la precesión de las elipses de las órbitas planetarias en torno al Sol (confirmado en el caso de Mercurio).
En la curvatura de los rayos luminosos por la acción de campos gravitatorios (confirmada por las fotografías inglesas de los eclipses).
En un desplazamiento de las líneas espectrales hacia el extremo rojo del espectro en el caso de la luz que nos llega procedente de estrellas de magnitud considerable (no confirmado hasta ahora).
El principal atractivo de la teoría está en su compleción lógica. Basta que se pruebe la falsedad de una sola de las conclusiones que se siguen de ella para que la teoría deba ser abandonada; modificarla sin destruir la estructura entera parece imposible.
Que nadie suponga, no obstante, que el majestuoso trabajo de Newton puede ser superado por ésta o cualquier otra teoría. Sus grandes y lúcidas ideas conservarán siempre su importancia singular como fundamento de toda nuestra moderna estructura conceptual en la esfera de la filosofía natural.
NOTA: Algunas de las afirmaciones en su artículo concernientes a mi vida y mi persona deben su origen a la viva imaginación del autor. He aquí otra aplicación del principio de relatividad para delectación del lector: Hoy se me describe en Alemania como un «sabio alemán», y en Inglaterra como un «judío suizo». Pero si el destino me llevara a ser representado como una _bête noire_ , yo me convertiría, por el contrario, en un «judío suizo» para los alemanes y un «sabio alemán» para los ingleses.
### 4
### FÍSICA Y REALIDAD*
#### I. CONSIDERACIONES GENERALES SOBRE EL MÉTODO DE LA CIENCIA
A menudo se ha dicho, y no sin justificación por cierto, que el hombre de ciencia es un filósofo de mala calidad. ¿Por qué el físico no deja pues que el filósofo se entregue a la tarea de filosofar? Esto bien puede ser lo correcto en momentos en que el físico cree tener a su disposición un sistema rígido de conceptos y leyes fundamentales, tan bien establecidos que ninguna duda puede tocarlos. Pero puede no serlo en un momento en que las bases mismas de la física se han vuelto tan problemáticas como lo son hoy. En tiempos como el presente, cuando la experiencia nos compele a buscar una nueva y más sólida fundamentación, el físico no puede simplemente entregar al filósofo la contemplación crítica de los fundamentos teóricos, porque nadie mejor que él puede explicar dónde le aprieta el zapato. En su búsqueda de un nuevo fundamento, el físico se verá obligado a poner bien en claro hasta qué punto están justificados y constituyen verdaderas necesidades los conceptos que utiliza.
El conjunto de la ciencia es, tan sólo, un refinamiento del pensamiento de cada día. Por este motivo el pensamiento crítico del físico no ha de ser restringido, en lo posible, al mero examen de los conceptos que pertenecen a su propio campo de acción. Resultará imposible para el científico avanzar sin la previa consideración crítica de un problema verdaderamente arduo: el problema de analizar la naturaleza del pensamiento de cada día.
Nuestra experiencia psicológica nos ofrece experiencias sensoriales, imágenes de ellas, recuerdos y sentimientos. A diferencia de la psicología, la física se ocupa directamente sólo de las experiencias sensoriales y de la «comprensión» de sus conexiones. Pero con todo, el concepto de «mundo real externo» que existe en el pensamiento de cada día reposa en forma exclusiva sobre impresiones sensoriales.
En primer término debemos subrayar que la diferenciación entre impresiones sensoriales e imágenes no es posible o, al menos, no es posible establecerla con absoluta seguridad. Con la discusión de este problema, que también afecta a nuestra noción de la realidad, no adelantaríamos mucho, de modo que consideraremos como un hecho dado la existencia de experiencias sensoriales, o sea unas experiencias psíquicas de tipo especial.
Creo que el primer paso para el establecimiento de un «mundo exterior real» es la formación del concepto de objetos materiales y de objetos materiales de distintos tipos. De entre la multitud de nuestras experiencias sensoriales, mental y arbitrariamente, escogemos ciertos conjuntos de impresiones sensoriales que se repiten (en parte en conjunción con impresiones sensoriales que son interpretadas como signos de experiencias sensoriales de otros) y relacionamos con ellos un concepto: el concepto de objeto material. Si lo consideramos desde el punto de vista lógico, veremos que este concepto no es idéntico a la totalidad de las impresiones sensoriales que a él se refieren; se trata de una libre creación de la mente humana (o animal). Por otra parte, este concepto debe su significado y su justificación, en forma exclusiva, a la totalidad de las impresiones sensoriales que asociamos con él.
El segundo paso nos lleva a considerar que, en nuestro pensamiento (que es el que determina nuestras expectativas), atribuimos a ese concepto de objeto material una significación que en muy alto grado es independiente de las impresiones sensoriales que originalmente lo han conformado. A esto hacemos referencia cuando atribuimos al objeto material «una existencia real». El proceso hasta aquí descrito se justifica exclusivamente por el hecho de que, mediante esos conceptos y las relaciones mentales existentes entre ellos, nos hallamos en condiciones adecuadas para orientarnos en el laberinto de las impresiones sensoriales. Aun cuando son creaciones mentales libres, estas nociones y relaciones nos parecen más sólidas y más inalterables que la experiencia sensorial individual en sí misma, a la que jamás se le puede garantizar por completo que no sea una ilusión o fruto de una alucinación. Además, estos conceptos y relaciones, y también la postulación de objetos reales y, hablando de manera general, de la existencia del «mundo real», están justificados exclusivamente en la medida en que se conecten con impresiones sensoriales entre las cuales configuran una conexión mental.
La totalidad de nuestras experiencias sensoriales (uso de conceptos, creación y empleo de relaciones funcionales definidas entre ellos y la coordinación de las experiencias sensoriales con esos conceptos) pueden ser puestas en orden mediante un proceso mental: este hecho en sí tiene una naturaleza que nos llena de reverente temor, porque jamás seremos capaces de comprenderlo por completo. Bien se podría decir que «el eterno misterio del mundo es su comprensibilidad». Uno de los más importantes logros de Immanuel Kant ha sido postular que el mundo externo real carecería de sentido si careciera de comprensibilidad.
Aquí, al hablar de comprensibilidad, la expresión está utilizada en su sentido más modesto. En este caso, la palabra implica la creación de cierto orden en las impresiones sensoriales; un orden que se produce por la creación de conceptos generales, de relaciones entre dichos conceptos y de relaciones definidas de cierta clase entre los conceptos y la experiencia sensorial. En este sentido es comprensible el mundo de nuestras experiencias sensoriales. El hecho de que sea comprensible es un milagro.
En mi opinión no se puede decir nada a priori con respecto al modo en que deben formarse y conectarse los conceptos ni a la manera en que debemos coordinarlos con las experiencias sensoriales. La única guía posible en la creación de ese orden, el único factor determinante, es el éxito. Todo lo que se necesita es fijar un conjunto de normas, porque sin esas normas sería imposible adquirir el conocimiento orientado en el sentido en que nos interesa. Se puede establecer una comparación entre esas reglas y las reglas de un juego en el que, si bien las normas en sí mismas son arbitrarias, su rigidez es lo único que hace posible el juego. Sin embargo, el establecimiento de las normas nunca podrá ser definitivo. Tendrá que tener validez tan sólo para un campo especial de aplicación (es decir, que no existen categorías últimas en el sentido que Kant adjudicara a este término).
La conexión de los conceptos elementales del pensamiento cotidiano con los conjuntos de experiencias sensoriales sólo puede ser comprendida por vía intuitiva y no puede fijarse científicamente. La totalidad de estas conexiones —ninguna de las cuales es expresable en términos conceptuales— es lo único que diferencia el gran edificio de la ciencia de un esquema de conceptos lógico pero vacío. Gracias a esas conexiones, las proposiciones puramente conceptuales de la ciencia se convierten en enunciados generales acerca de conjuntos de experiencias sensoriales.
Denominaremos «conceptos primarios» a aquellos conceptos que están directa e intuitivamente conectados con conjuntos típicos de experiencias sensoriales. Desde el punto de vista de la física, todas las demás nociones adquieren significado sólo en la medida en que estén conectadas con las nociones primarias a través de proposiciones. Hasta cierto punto, estas proposiciones son definiciones de los conceptos (y de los enunciados derivados de ellos por vía lógica) y hasta cierto punto proposiciones que no derivan de las definiciones, hecho que expresa al menos relaciones indirectas entre los «conceptos primarios» y, en este sentido, entre las experiencias sensoriales. Las proposiciones de esta segunda clase son «enunciados acerca de la realidad» o leyes de la naturaleza, es decir, proposiciones que deben demostrar su validez cuando son aplicadas a las experiencias sensoriales a las que se puede aludir a través de conceptos primarios. Determinar cuáles de esas proposiciones habrán de ser consideradas definiciones y cuáles leyes naturales dependerá concretamente de la representación elegida. Establecer esta diferenciación se convierte en una necesidad absoluta cuando se examina el grado hasta el que no está vacío, desde el punto de vista físico, todo el sistema de conceptos considerados.
LA ESTRATIFICACIÓN DEL SISTEMA CIENTÍFICO
The aim of science is, on the one hand, a comprehension as _complete_ as possible of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim _by the use of a minimum of primary concepts and relations_. (Seeking, as far as possible, logical unity in the world picture, i.e. paucity in logical elements.)

Science uses the totality of the primary concepts, i.e. concepts directly connected with sense experiences, and of the propositions connecting them. In its first stage of development, science contains nothing else. Our everyday thinking is, on the whole, satisfied with this level. Such a state of affairs cannot, however, satisfy a truly scientific mind, because the totality of concepts and relations obtained in this manner is utterly lacking in logical unity. In order to remedy this deficiency, one invents a system poorer in concepts and relations, a system which regards the concepts and relations of the "first layer" as logically derived concepts and relations. This new "secondary system" pays for its higher logical unity by operating with elementary concepts (concepts of the second layer) which are no longer directly connected with the sense experiences. A further striving for logical unity leads us to a tertiary system, still poorer in concepts and relations, by deducing from it the concepts and relations of the secondary layer (and so, indirectly, those of the primary one). The process continues in this way until we have arrived at a system possessing the greatest conceivable unity, and the greatest poverty of concepts in its logical foundations, which is still compatible with the observations made by our senses. We do not know whether this ambition will ever succeed in forging a definitive system. If one were asked for an opinion, the most likely answer would be no. While wrestling with the problems, however, one never gives up hope of drawing nearer to that goal.

An adherent of the theory of abstraction or induction would call our layers "degrees of abstraction," but I do not consider it justifiable to veil the logical independence of the concept from the sense experiences. The relation is not that of soup to chicken but, rather, that of the cloakroom ticket number to the coat.

The layers, moreover, are not so clearly separated. It is not even absolutely clear which concepts belong to the primary layer. As a matter of fact, we are dealing with freely formed concepts which, with a certainty sufficient for practical use, are intuitively connected with complexes of sense experiences in such a manner that, in any given case of experience, no uncertainty arises as to the validity of an assertion. The essential thing is the attempt to represent the multitude of concepts and propositions close to experience as propositions deduced by a logical process from a basis, as narrow as possible, of fundamental concepts and relations which can themselves be chosen freely (axioms). The liberty of choice, however, is of a very special kind; it is not at all similar to the liberty of a writer of fiction. Rather, it resembles that of a man engaged in solving a well-designed crossword puzzle: he might propose any word as a possible solution, but only one word will actually solve the puzzle correctly. It is a matter of faith that nature, as we perceive it through our five senses, takes on the character of such a well-designed puzzle. The successes reaped so far by science certainly give a certain basis for maintaining this faith.

The multitude of layers just discussed corresponds to the several stages that have been traversed in the struggle for unity. As regards the final aim, the intermediary layers are only of a provisional nature. In due course they must disappear as irrelevant. We have to work, however, with the science of today, in which these layers represent partial and problematic achievements, which support one another but also threaten one another, because the present system of concepts contains deep-seated incongruities which we shall meet later on.

It will be the aim of the following lines to show the paths by which the human mind has advanced in order to arrive at a basis of physics which is logically as uniform as possible.
#### II. MECHANICS AND THE ATTEMPTS TO CONSIDER IT AS THE BASIS FOR ALL OF PHYSICS
An important property of our sense experiences and, more generally, of all our experiences, is their temporal order. This kind of order leads to the conception of a subjective time, an ordering scheme for our experience. Subjective time then leads, via the concept of the material object and of space, to the concept of objective time, as we shall see later on.

Ahead of the notion of objective time, however, stands the concept of space; and ahead of this we find the concept of the material object, which is directly connected with complexes of sense experiences. It has already been pointed out that one characteristic property of the notion of "material object" is that we ascribe to it an existence independent of (subjective) time and independent of the fact that it is perceived by our senses. And this holds in spite of the fact that we perceive temporal alterations in the object. Poincaré has rightly and emphatically pointed out that we distinguish two kinds of alterations of the material object: "changes of state" and "changes of position." The latter, he remarks, are alterations which we can counteract by voluntary motions.

There exist material objects to which, within a certain sphere of perception, we ascribe no changes of state but only changes of position; this fact is of fundamental importance for the formation of the concept of space (in a certain sense, even for the justification of the very notion of the material object). To objects of this kind we shall apply the designation "practically rigid."

If, as the object of our perception, we consider two practically rigid bodies simultaneously (i.e. as a single unit), there exist for this ensemble alterations which can _not_ possibly be considered as changes of position of the whole, even though this is the case for each of the two constituents. This leads to the notion of the "change of relative position" of the two objects, and thereby also to the notion of the "relative position" of the two objects. It is found, moreover, that among the relative positions there is one of a special kind, which we call "contact." Permanent contact of two bodies at three or more "points" means that they are united into a quasi-rigid compound body. It is permissible to say that the second body then forms a (quasi-rigid) continuation of the first body and could, in its turn, receive a further quasi-rigid continuation. The possibility of the quasi-rigid continuation of a body is unlimited. The totality of all conceivable quasi-rigid continuations of a body _B_₀ is the infinite "space" determined by it.

Every arbitrarily placed material object can be brought into contact with the quasi-rigid continuation of a given body _B_₀ (reference body). In my opinion, this fact is the empirical basis of our conception of space. In pre-scientific thinking, the solid crust of the earth plays the role of _B_₀ and its continuation. The very word geometry indicates that the concept of space is psychologically connected with the earth as an ever-present reference body.

The bold notion of "space," which preceded all scientific geometry, transformed our concept of the relations between the positions of material objects into the notion of the position of these objects in "space." This, of itself, represents a great formal simplification. Through this concept of space we arrive, moreover, at an attitude in which any description of position is implicitly a description of contact; the statement that a point of a material object is located at a point _P_ of space means that the object touches the point _P_ of the reference body _B_₀ (supposed appropriately continued) at the point considered.
In the geometry of the Greeks, space plays only a qualitative role, for although the position of bodies in relation to space is considered as given, it is not described by means of numbers. Descartes was the first to introduce this method. In his language, the whole content of Euclidean geometry can be axiomatically founded upon the following postulates: (1) two specified points of a rigid body determine a segment; (2) we may assign number triplets _x_₁, _x_₂, _x_₃ to the points of space in such a manner that for every segment _P′_–_P″_, the coordinates of whose end points are _x′_₁, _x′_₂, _x′_₃ and _x″_₁, _x″_₂, _x″_₃, the expression

_s_² = (_x″_₁ − _x′_₁)² + (_x″_₂ − _x′_₂)² + (_x″_₃ − _x′_₃)²

is independent of the position of the body, and of the positions of any and all other bodies.

The (positive) number _s_ represents the length of the segment, or the distance between the two points _P′_ and _P″_ of space (which coincide with the points _P′_ and _P″_ of the segment).
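The claim in postulate (2), that _s_² is independent of the position of the body, can be verified directly: any change of position of a rigid body corresponds to a rotation _R_ plus a translation _a_ of the coordinate triplets. The following one-line check is a standard elaboration added here for illustration, not part of the original text:

```latex
\text{Under } x \mapsto R\,x + a \text{ with } R^{\mathsf{T}}R = I:
\qquad x'' - x' \;\mapsto\; R\,(x'' - x'),
\]
\[
s^2 \;=\; \bigl(R(x''-x')\bigr)^{\mathsf{T}}\,R(x''-x')
   \;=\; (x''-x')^{\mathsf{T}}\,R^{\mathsf{T}}R\,(x''-x')
   \;=\; (x''-x')^{\mathsf{T}}(x''-x').
```

The translation _a_ cancels in the difference _x″_ − _x′_, and the orthogonality of _R_ does the rest; this is precisely the sense in which Euclidean distance is an invariant of rigid motions.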
The formulation has been deliberately chosen so as to express clearly not only the logical-axiomatic content of Euclidean geometry but also its empirical content. The purely logical (axiomatic) representation of Euclidean geometry has, it is true, the advantage of great simplicity and clarity. The price paid, however, is the renunciation of representing the connection between the conceptual construction and the sense experiences, upon which connection, alone, the significance of geometry for physics rests. In the course of time the error was committed of regarding logical necessity, prior to all experience, as the basis of Euclidean geometry and of the concept of space belonging to it; this fatal error arose from the fact that the empirical basis, on which the axiomatic construction of Euclidean geometry rests, had fallen into oblivion.

In so far as one may speak of the existence of rigid bodies in nature, the geometry of Euclid is a physical science, which must be confirmed by sense experiences. It comprises the totality of laws which must account for the relative positions of rigid bodies, independently of time. As one can see, the physical notion of space, as originally used in physics, is likewise tied to the existence of rigid bodies.

From the physicist's point of view, the central importance of Euclidean geometry rests in the fact that its laws are independent of the specific nature of the bodies whose relative positions it treats. Its formal simplicity is characterized by the properties of homogeneity and isotropy (and the existence of similar entities).

The concept of space is useful, to be sure, but not indispensable for geometry, i.e. for the formulation of the rules concerning the relative positions of rigid bodies. In contrast to this, the concept of objective time, without which the formulation of the foundations of classical mechanics is impossible, is linked with the concept of the spatial continuum.

The introduction of objective time involves two postulates which are independent of each other.

1. The introduction of objective local time by connecting the temporal sequence of experiences with the readings of a "clock," i.e. of a closed system with periodic recurrence.

2. The introduction of the notion of objective time for the happenings in the whole of space, by which notion alone the idea of local time is extended to the idea of time in physics.

Note concerning postulate 1: In my view, it is not a _petitio principii_ to put the concept of periodic recurrence ahead of the concept of time, in so far as the principal concern is the clarification of the origin and the empirical content of the concept of time. This conception corresponds exactly to the precedence of the concept of the rigid (or quasi-rigid) body in the interpretation of the concept of space.

Further discussion of postulate 2: Before the advent of the theory of relativity, the illusion prevailed that, from the point of view of experience, the meaning of simultaneity in relation to spatially distant events, and consequently the meaning of physical time, was clear a priori; this illusion had its origin in the fact that in our everyday experience we can neglect the time that light takes to propagate. On this account we are accustomed to confuse what is "simultaneously seen" with what "simultaneously happens," and as a result the difference between time and local time is blurred.

The lack of precision which, from the empirical point of view, attaches to the notion of time in classical mechanics was concealed by the axiomatic representation of space and time as independent of our sense experiences. The use of concepts independently of the empirical basis to which they owe their existence does not necessarily damage science. One may, however, easily fall into the error of believing that these notions, whose origin has been forgotten, are logically necessary and therefore unalterable; this error can become a serious danger to the progress of science.

It was fortunate for the development of mechanics, and hence also for the development of physics in general, that the lack of precision in the concept of objective time remained hidden from the earlier philosophers, as regards its empirical interpretation. Full of confidence in the real meaning of space-time, they developed the foundations of mechanics, which can be characterized schematically as follows:
_a_) Concept of a material point: a material object which, as regards its position and motion, can be described with sufficient exactness as a point with coordinates _x_₁, _x_₂, _x_₃. Its motion is described (in relation to the "space" _B_₀) by giving _x_₁, _x_₂, _x_₃ as functions of time.

_b_) Law of inertia: the vanishing of the components of acceleration for a material point which is sufficiently far away from all other points.

_c_) Law of motion (for the material point): force = mass × acceleration.

_d_) Laws of force (interactions between material points).

Here (_b_) is nothing but an important special case of (_c_). A real theory exists only when the laws of force are given. The forces must, in the first place, obey the law of equality of action and reaction, in order that a system of points, permanently connected to one another by forces, may behave like a material point.
These fundamental laws, together with Newton's law of gravitational force, form the basis of the mechanics of celestial bodies. In this mechanics of Newton, and in contrast to the conceptions of space derived from rigid bodies described above, the space _B_₀ enters in a new form. Validity is not ascribed to every _B_₀ (for a given law of force) through (_b_) and (_c_), but only to a _B_₀ in an appropriate state of motion (inertial system). On account of this fact, coordinate space acquired an independent physical property which is not contained in the purely geometrical notion of space, a circumstance which gave Newton considerable food for thought (the bucket experiment).

Classical mechanics is only a general scheme; it becomes a theory only through the explicit indication of the force laws (_d_), as Newton did so successfully in the field of celestial mechanics. From the point of view of the aim of achieving the greatest possible simplicity of the foundations, this theoretical method is deficient in so far as the laws of force cannot be obtained by logical and formal considerations, so that their a priori choice is arbitrary to a certain degree. It must also be said that Newton's law of gravitation is distinguished from other conceivable laws of force exclusively by its _success_.

In spite of the fact that, today, we know positively that classical mechanics fails as a foundation for the whole of physics, it still occupies the center of physics. The reason is that, beyond the important progress made since Newton's time, we have not yet arrived at a new foundation of physics concerning which we may be certain that the whole multiplicity of investigated phenomena, and of successful partial theoretical systems, could be logically deduced from it. In the following I shall briefly describe the present situation of the subject.

First, let us try to form a very clear idea of the extent to which the system of classical mechanics has shown itself adequate to serve as a basis for the whole of physics. Since we are concerned here only with the foundations of physics and their development, we need not occupy ourselves with the purely _formal_ progress of mechanics (Lagrange's equations, canonical equations, and so on). _One_ remark, however, seems indispensable. The notion of "material point" is fundamental in mechanics. If we now seek to develop the mechanics of a corporeal object which itself can _not_ be treated as a material point (and, strictly speaking, every object "perceptible to our senses" belongs to this category), a question arises: how are we to imagine the object as built up out of material points, and what forces must we assume to act between them? The formulation of this question is indispensable if mechanics is to claim to describe the object _completely_.

It is natural to the tendency of mechanics to assume these material points, and the laws of force acting between them, to be invariable, since temporal changes would lie outside the scope of a mechanical explanation. From all this we can see that classical mechanics must lead us to an atomistic interpretation of matter. We now realize, with special clarity, how mistaken are those theorists who believe that theory arises from experience by induction. Even the great Newton could not free himself from this error ("_Hypotheses non fingo_").

In order to save itself from becoming hopelessly lost in this line of thought (atomism), science proceeded as follows. The mechanics of a system is determined if its potential energy is given as a function of its configuration. If the acting forces are such as to guarantee the maintenance of certain structural properties of the system's configuration, then the configuration can be described with sufficient accuracy by a relatively small number of configuration variables _q_ᵣ; the potential energy is considered only in so far as it depends upon _these_ variables (for instance, the description of the configuration of a practically rigid body by six variables).

A second method of applying mechanics, which avoids the subdivision of matter into "real" material points, is the mechanics of so-called continuous media. This mechanics is characterized by the fiction that the density and velocity of matter depend continuously on coordinates and time, and that the part of the interactions not explicitly given can be considered as surface forces (pressure forces) which are again continuous functions of position. Here we find the hydrodynamic theory and the theory of elasticity of solid bodies. These theories avoid the explicit introduction of material points by means of fictions which, in the light of the foundations of classical mechanics, can only have an approximate significance.

In addition to their great _practical_ significance, these branches of science have, by developing new mathematical concepts, created the formal tools (partial differential equations) which have been necessary for the subsequent attempts at establishing a new basis for the whole of physics.

These modes of applying mechanics belong to so-called "phenomenological" physics. It is characteristic of this kind of physics that it makes as much use as possible of concepts which are close to experience, although for this reason it has had to renounce, to a large degree, unity in its foundations. Heat, electricity and light are described by state variables and material constants distinct from the mechanical quantities; and to determine all these variables in their mutual and temporal dependence was a task which, in the main, could only be solved empirically. Many of Maxwell's contemporaries saw in this manner of presentation the ultimate aim of physics, which they believed could be reached in a purely inductive way from experience, given the relative closeness of the concepts employed to experience. From the standpoint of theories of knowledge, St. Mill and E. Mach took their stand more or less on this ground.

In my view, the greatest achievement of Newton's mechanics lies in the fact that its consistent application has led beyond this phenomenological standpoint, particularly in the field of thermal phenomena. This occurred in the kinetic theory of gases and, in a general way, in statistical mechanics. The former connected the equation of state of the ideal gases, viscosity, diffusion and heat conductivity of gases, and radiometric phenomena of gases, and exhibited the logical connection of phenomena which, from the point of view of direct experience, had nothing whatever to do with one another. The latter gave an interpretation of the thermodynamic ideas and laws, which led to the discovery of the limit of applicability of the notions and laws of the classical theory of heat. This kinetic theory, which far surpassed phenomenological physics as regards the logical unity of its foundations, moreover produced definite values for the true magnitudes of atoms and molecules by independent methods, which thus placed them beyond the realm of reasonable doubt. These decisive advances were paid for by the coordination of atomistic entities with the material points, the speculative character of these entities being obvious. Nobody could ever hope to "perceive directly" an atom. Laws concerning the variables connected more directly with experimental facts (for example, temperature, pressure, speed) were deduced from the fundamental ideas by means of complicated processes of calculation. In this manner physics (or at least part of it), originally constructed in a more phenomenological way, was reduced, by being based upon Newton's mechanics for atoms and molecules, to a basis further removed from direct experiment, but more uniform in character.
#### III. THE FIELD CONCEPT
In explaining electrical and optical phenomena, Newton's mechanics has been far less successful than in the fields discussed above. It is true that Newton, in his corpuscular theory of light, tried to reduce light to the motion of material points. Later on, however, when the phenomena of polarization, diffraction and interference of light forced increasingly incompatible modifications upon this theory, Huygens' undulatory theory of light prevailed. This theory probably owes its origin, in essence, to the phenomena of crystal optics and to the theory of sound, which by then had already reached a certain level of elaboration. It must also be admitted that Huygens' theory was, in the first instance, based on classical mechanics. The all-penetrating ether had to be considered as the carrier of the waves, but no known phenomenon could explain the ether in terms of material points. A clear picture of the internal forces governing the ether, or of the forces acting between the ether and "ponderable" matter, could never be obtained. The foundations of this theory therefore remained eternally in the dark. The true basis was a partial differential equation, whose reduction to mechanical elements was always problematic.

For the theoretical conception of electric and magnetic phenomena one introduced, once again, masses of a special kind, and between these masses one assumed the existence of forces acting at a distance, similar to Newton's gravitational forces. This special kind of matter, however, lacked the fundamental property of inertia. And the forces acting between these masses and ponderable matter remained obscure. To these difficulties had to be added the polar character of this kind of matter, which did not fit into the scheme of classical mechanics. The basis of the theory became still more unsatisfactory when electrodynamic phenomena became known, even though these phenomena enabled physicists to explain magnetic phenomena, thereby making the assumption of magnetic masses unnecessary. This advance had, indeed, to be paid for by increasing the complexity of the interaction forces which had to be assumed to exist between moving electric masses.

The escape from this unsatisfactory situation by means of Faraday and Maxwell's electric field theory represents probably the most profound transformation of the foundations of physics since Newton's time. Again, it was a step in the direction of constructive speculation which increases the distance between the foundations of the theory and the sense experiences. The existence of the field manifests itself, indeed, only when electrically charged bodies are introduced into it. Maxwell's differential equations connect the spatial and temporal differential coefficients of the electric and magnetic fields. The electric masses are nothing but places at which the divergence of the electric field does not vanish. Light waves appear as undulatory electromagnetic field processes in space.

To be sure, Maxwell attempted a mechanical interpretation of his field theory by means of mechanical models of the ether. But these attempts were gradually pushed aside by the representation (purged of all unnecessary additions) of Heinrich Hertz. In this way the field came to occupy, within this theory, the fundamental position which the material points had occupied in Newton's mechanics. At first, however, this applied only to electromagnetic fields in empty space.

In its initial stage of development, the theory was still quite unsatisfactory for the interior of matter, because in this case two electric vectors had to be introduced, connected by relations dependent on the nature of the medium, relations inaccessible to any theoretical analysis. An analogous situation arose with respect to the magnetic field, and between the electric current density and the field.

It was H. A. Lorentz who found the escape, which at the same time pointed the way toward an electrodynamic theory of bodies in motion, a theory more or less free of arbitrary assumptions. His theory was built upon the following fundamental hypotheses:

Everywhere (including the interior of ponderable bodies) the seat of the field is empty space. The participation of matter in electromagnetic phenomena has its origin only in the fact that the elementary particles of matter carry unalterable electric charges and, on this account, are subject on the one hand to the action of ponderomotive forces and, on the other, possess the property of generating a field. The elementary particles obey Newton's law of motion for material points.

On this basis H. A. Lorentz obtained his synthesis of Newton's mechanics and Maxwell's field theory. The weakness of this theory lies in the fact that it tried to determine the phenomena by a combination of partial differential equations (Maxwell's field equations for empty space) and total differential equations (equations of motion of points), a procedure which was obviously artificial. The inadequacy of this point of view manifested itself in the necessity of assuming finite dimensions for the particles, in order to prevent the electromagnetic field existing at their surfaces from becoming infinitely great. Moreover, the theory was unable to explain the tremendous forces which hold the electric charges on the individual particles. H. A. Lorentz accepted these weaknesses of his theory, which were well known to him, in exchange for being able to explain the phenomena correctly, at least in their general lines.

Furthermore, there was one consideration which reached beyond the frame of Lorentz's theory. In the vicinity of an electrically charged body there is a magnetic field which apparently contributes to its inertia. Should it not be possible to explain the _total_ inertia of the particles electromagnetically? It is clear that this problem can be treated satisfactorily only if the particles can be considered as regular solutions of the electromagnetic partial differential equations. In their original form, however, Maxwell's equations do not admit such a description of particles, because the corresponding solutions contain a singularity. Theoretical physicists have therefore tried for a long time to reach the goal by modifying Maxwell's equations. These attempts have not, however, been crowned with success. Thus we find that the goal of erecting a pure electromagnetic field theory of matter remains, for the present, unattained, although in principle no objection can be raised against the possibility of reaching it. The lack of any systematic method leading to a solution has deterred further attempts. Nevertheless, it seems undeniable to me that, in the foundations of any consistent field theory, the concept of the particle must not appear alongside the concept of the field. The whole theory must be based exclusively on partial differential equations and their singularity-free solutions.
#### IV. THE THEORY OF RELATIVITY
There is no inductive method which could lead us to the fundamental concepts of physics. Failure to understand this fact constituted the basic philosophical error of many investigators of the last century. It was perhaps also the reason why the molecular theory and Maxwell's theory could establish themselves only at a relatively late date. Logical thinking is necessarily deductive; it is based upon hypothetical concepts and axioms. How are we to select these, with the hope that the consequences derived from them will be confirmed?

The most satisfactory situation, evidently, is found in those cases where the new fundamental hypotheses are suggested by the world of experience itself. The hypothesis of the non-existence of perpetual motion, as a basis for thermodynamics, provides an example of a fundamental hypothesis suggested by experience; the same holds for Galileo's principle of inertia. In the same category we also find the fundamental hypothesis of the theory of relativity, a theory which has led us to an unexpected extension of field theory and to the supersession of the foundations of classical mechanics.

The success of the Maxwell-Lorentz theory has given great confidence in the validity of the electromagnetic equations for empty space and hence, in particular, in the statement that light travels "through space" with a certain constant speed _c_. Is this statement of the constancy of the speed of light valid for every inertial system? If it were not, then one specific inertial system or, more precisely, one specific state of motion (of a reference body) would have to be distinguished from all the others. This, however, seemed to contradict all the mechanical and electromagnetic experimental facts.

For these reasons it was necessary to raise to the rank of a principle the validity of the law of the constancy of the speed of light for all inertial systems. From this it follows that the spatial coordinates _x_₁, _x_₂, _x_₃ and the time _x_₄ must be transformed according to the "Lorentz transformation," which is characterized by the invariance of the expression

_ds_² = _dx_₁² + _dx_₂² + _dx_₃² − _dx_₄²

(provided the unit of time is chosen in such a way that the speed of light _c_ = 1).
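The invariance claimed here can be checked directly for a boost with velocity _v_ along the _x_₁ axis (with _c_ = 1); the following worked lines are a standard verification added for illustration, not part of the original text:

```latex
x_1' = \gamma\,(x_1 - v\,x_4), \qquad
x_4' = \gamma\,(x_4 - v\,x_1), \qquad
x_2' = x_2, \quad x_3' = x_3, \qquad
\gamma = (1 - v^2)^{-1/2};
\]
\[
dx_1'^{\,2} - dx_4'^{\,2}
  = \gamma^2\bigl[(dx_1 - v\,dx_4)^2 - (dx_4 - v\,dx_1)^2\bigr]
  = \gamma^2 (1 - v^2)\,(dx_1^2 - dx_4^2)
  = dx_1^2 - dx_4^2.
```

Since _dx_₂ and _dx_₃ are untouched, _ds_² as a whole is unchanged, which is exactly the invariance that characterizes the Lorentz transformation.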
By this procedure time loses its absolute character and is joined to the "spatial" coordinates as of (almost) similar algebraic character. The absolute character of time, and in particular the concept of simultaneity, were destroyed, and the four-dimensional description remained as the only adequate one.

In order to account also for the equivalence of all inertial systems with respect to all the phenomena of nature, it is necessary to postulate the invariance of all systems of physical equations expressing general laws with respect to the Lorentz transformation. The elaboration of this requirement forms the content of the special theory of relativity.

This theory is compatible with Maxwell's equations, but it is incompatible with the basis of classical mechanics. It is true that the equations of motion of the material point can be modified (and with them the expressions for the momentum and kinetic energy of the material point) so as to satisfy the theory. But the concept of the force of interaction, and with it the concept of the potential energy of a system, loses its basis, because these concepts rest on the idea of absolute simultaneity. The field, determined by differential equations, takes the place of force.

Since the foregoing theory allows interaction only by fields, a field theory of gravitation is required. And indeed it is not difficult to formulate such a theory in which, as in Newton's, the gravitational fields can be reduced to a scalar which is the solution of a partial differential equation. However, the experimental facts expressed in Newton's theory of gravitation point in another direction: that of the general theory of relativity.

An unsatisfactory feature of classical mechanics is that in its fundamental laws the same mass constant appears in two different roles: as "inertial mass" in the law of motion and as "gravitational mass" in the law of gravitation. As a result, the acceleration of a body in a pure gravitational field is independent of its material; so that in a uniformly accelerated coordinate system (accelerated relative to an "inertial system") motions take place as they would in a homogeneous gravitational field (relative to a "motionless" coordinate system). If we assume that the equivalence of these two cases is complete, we achieve an adaptation of our theoretical thinking to the fact that inertial and gravitational mass are equal.

From this it follows that there are no longer any grounds for favoring, as a matter of principle, the "inertial systems"; and we must admit non-linear transformations of the coordinates (x₁, x₂, x₃, x₄) on an equal footing. If we carry out such a transformation of a coordinate system of the special theory of relativity, the metric

ds² = dx₁² + dx₂² + dx₃² − dx₄²

goes over into a general (Riemannian) metric of the form

ds² = Σ gµν dxµ dxν (summed over µ and ν)

where the gµν, symmetric in µ and ν, are functions of x₁ … x₄ which describe both the metric properties and the gravitational field with respect to the new coordinate system.
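In standard tensor notation (a sketch added here, not part of the original text), the gµν arise simply from the chain rule applied to the special-relativistic line element, where the x̄α denote the original quasi-Euclidean coordinates and the xµ the new, arbitrarily chosen ones:

```latex
ds^{2} = \sum_{\alpha}\epsilon_{\alpha}\,d\bar{x}_{\alpha}^{2}
       = \sum_{\mu,\nu}\Bigl(\sum_{\alpha}\epsilon_{\alpha}\,
         \frac{\partial\bar{x}_{\alpha}}{\partial x_{\mu}}\,
         \frac{\partial\bar{x}_{\alpha}}{\partial x_{\nu}}\Bigr)\,
         dx_{\mu}\,dx_{\nu}
       = \sum_{\mu,\nu} g_{\mu\nu}\,dx_{\mu}\,dx_{\nu},
\qquad \epsilon_{1}=\epsilon_{2}=\epsilon_{3}=1,\quad \epsilon_{4}=-1.
```

The bracketed sum is manifestly symmetric in µ and ν, which is why the gµν carry that symmetry.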
This improvement in the interpretation of the mechanical basis had, however, to be paid for, as closer scrutiny shows: the new coordinates could no longer be interpreted as results of measurements with rigid bodies and clocks, as they could in the original system (an inertial system with vanishing gravitational field).

The passage to the general theory of relativity is accomplished by the assumption that the representation of the field properties of space already mentioned, by the functions gµν (that is, by a Riemannian metric), is also justified in the general case in which there exists no coordinate system with respect to which the metric takes the simple quasi-Euclidean form of the special theory of relativity.

The coordinates now by themselves no longer express metric relations, but only the "neighborliness" of things whose coordinates differ but little from one another. All transformations of the coordinates must be admitted so long as these transformations are free from singularities. Only such equations as are covariant with respect to arbitrary transformations in this sense have meaning as expressions of general laws of nature (the postulate of general covariance).

The first aim of the general theory of relativity was a preliminary version which, while not meeting the requirements of a closed system, could be connected in the simplest possible way with "directly observable facts". If the theory were restricted to pure gravitational mechanics, Newton's theory of gravitation could serve as a model. This preliminary version may be characterized as follows:

1. The concept of the material point and of its mass is retained. A law of motion is formulated for it, this law being the translation of the law of inertia into the language of the general theory of relativity. This law is a system of total differential equations, the system characteristic of the geodesic line.

2. Newton's law of gravitational interaction is replaced by the system of the simplest generally covariant differential equations which can be set up for the tensor gµν. It is formed by equating to zero the contracted Riemannian curvature tensor (Rµν = 0).
Analogy with classical mechanics shows that the theory may be completed in the following way; one sets up as field equations

Rᵢₖ − ½ gᵢₖ R = − Tᵢₖ

where R denotes the Riemannian curvature scalar and Tᵢₖ the energy tensor of matter in a phenomenological representation. The left-hand side of the equation is chosen in such a manner that its divergence vanishes identically. The corresponding vanishing of the divergence of the right-hand side yields the "equations of motion" of matter, in the form of partial differential equations, for the case where Tᵢₖ introduces, for the description of matter, only four further independent functions (for instance, density, pressure, and velocity components, where among the latter there holds an identity, and between pressure and density an equation of condition).
By this formulation the whole of gravitational mechanics is reduced to the solution of a single system of covariant partial differential equations. The theory avoids all the defects which we have charged against the basis of classical mechanics. It is sufficient, so far as we know, for the representation of the observed facts of celestial mechanics. But it resembles a building, one wing of which is made of fine marble (the left-hand side of the equation), while the other wing is made of cheap wood (the right-hand side of the equation). The phenomenological representation of matter is, in fact, only a crude substitute for a representation which would do justice to all the known properties of matter.

There is no difficulty in connecting Maxwell's theory of the electromagnetic field with the theory of the gravitational field so long as one restricts oneself to space free of ponderable matter and free of electric density. All that is needed is to put on the right-hand side of the above equation, in place of Tᵢₖ, the energy tensor of the electromagnetic field in empty space, and to adjoin to the system of equations thus modified Maxwell's equations for empty space, written in generally covariant form. Under these conditions there will exist, among all these equations, a sufficient number of differential identities to guarantee their consistency. We may add that this necessary formal property of the total system of equations leaves the choice of the sign of the member Tᵢₖ arbitrary, a fact which later proved to be important.

The desire to achieve the greatest possible unity in the foundations of the theory has led to various attempts to include the gravitational field and the electromagnetic field in one unified formal whole. Here we must mention particularly the five-dimensional theory of Kaluza and Klein. Having considered this possibility very carefully, I feel it preferable to accept the lack of internal uniformity of the original theory, because I do not consider that the totality of hypotheses of the five-dimensional theory contains fewer arbitrary elements than does the original theory. The same statement may be made for the projective version of the theory, which has been elaborated with great care, in particular, by Von Dantzig and by Pauli.

The foregoing considerations concern exclusively the theory of the field free of matter. How are we to proceed from there to obtain a complete theory of atomically constituted matter? In such a theory, singularities must certainly be excluded, since without such exclusion the differential equations do not completely determine the total field. Here, in the field theory of general relativity, we meet the same problem of a field-theoretical representation of matter as was met originally in connection with the pure Maxwell theory.

Here again the attempt to construct particles out of the field theory leads apparently to singularities. Here again an effort has been made to overcome this defect by the introduction of new field variables and by elaborating and extending the system of field equations. Recently, however, in collaboration with Dr. Rosen, I discovered that the simplest combination of the field equations of gravitation and electricity mentioned above produces centrally symmetric solutions free of any singularity (the well-known centrally symmetric solutions of Schwarzschild for the pure gravitational field, and those of Reissner for the electric field with consideration of its gravitational action). We shall refer to this briefly in a later paragraph. In this way it seems possible to arrive, for matter and its interactions, at a pure field theory free of additional hypotheses, one whose confrontation with empirical test raises no difficulties other than purely mathematical ones (which, to be sure, are very serious).
#### V. QUANTUM THEORY AND THE FOUNDATIONS OF PHYSICS

The theoretical physicists of our generation are expecting the erection of a new theoretical basis for physics which would make use of concepts differing completely from those of the field theory considered up to now. The reason is that it has been found necessary to use entirely new methods for the mathematical representation of the so-called quantum phenomena.

While the failure of classical mechanics, as revealed by the theory of relativity, is connected with the finite speed of light, it was discovered at the beginning of this century that there are other kinds of inconsistencies between the deductions of mechanics and the experimental facts, inconsistencies connected with the finite (non-vanishing) magnitude of Planck's constant h. In particular, while molecular mechanics requires that both the heat content and the (monochromatic) radiation density of solid bodies should decrease in proportion to the decreasing absolute temperature, experience has shown that they decrease much more rapidly than the absolute temperature. For a theoretical explanation of this behavior it was necessary to assume that the energy of a mechanical system cannot take arbitrary values, but only certain discrete values whose mathematical expressions always depend upon Planck's constant h. Moreover, this conception proved essential for the theory of the atom (Bohr's theory). For the transitions of these states into one another, with or without emission or absorption of radiation, no causal laws could be set up, but only statistical ones; and a similar conclusion holds for the radioactive decay of atoms, which was being carefully investigated at about the same time. For more than two decades physicists tried in vain to find a uniform interpretation of this "quantum character" of systems and phenomena. Such an attempt succeeded about ten years ago, through two entirely different theoretical methods. One of these we owe to Heisenberg and Dirac, the other arose from the work of De Broglie and Schrödinger. The mathematical equivalence of the two methods was soon recognized by Schrödinger.

Here I shall try to sketch the line of thought of De Broglie and Schrödinger, which lies closer to the physicist's method of thinking, and shall accompany the description with certain general considerations.

The question is first: How can one assign a discrete succession of energy values Hσ to a system specified in the sense of classical mechanics (the energy function being a given function of the coordinates q_r and the corresponding momenta p_r)? Planck's constant h relates the frequencies Hσ/h to the energy values Hσ. It is therefore sufficient to assign to the system a succession of discrete frequencies. This reminds us of the fact that in acoustics a series of discrete frequencies is associated with a linear partial differential equation (for given boundary conditions), namely with its sinusoidal periodic solutions. In a corresponding manner, Schrödinger set himself the task of associating a partial differential equation for a scalar function ψ with the given energy function ε(q_r, p_r), where the q_r and the time t are independent variables. In this he succeeded (for a complex function ψ) in such a manner that the theoretical energy values Hσ, as required by the statistical theory, could indeed be derived satisfactorily from the periodic solutions of the equation.
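The acoustic analogy can be illustrated numerically (an added sketch, not from the original): on the interval [0, 1] with the boundary conditions ψ(0) = ψ(1) = 0, the functions ψ_n(x) = sin(nπx) solve −ψ″ = Eψ only for the discrete values E_n = (nπ)², which is exactly the kind of discrete spectrum the argument requires:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# For each mode n, the ratio -psi''/psi reproduces the discrete
# eigenvalue E_n = (n*pi)**2 at any interior point where psi != 0.
for n in (1, 2, 3):
    psi = lambda x, n=n: math.sin(n * math.pi * x)
    x0 = 0.3  # arbitrary interior point with psi(x0) != 0
    ratio = -second_derivative(psi, x0) / psi(x0)
    print(n, round(ratio, 3), round((n * math.pi) ** 2, 3))
```

Intermediate energies would require intermediate values of E, but the corresponding solutions cannot satisfy both boundary conditions, so the spectrum is discrete.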
To be sure, it did not prove possible to associate a definite motion, in the sense of the mechanics of material points, with a definite solution ψ(q_r, t) of the Schrödinger equation. This means that the ψ function does not determine, at any rate not exactly, the history of the q_r as functions of the time t. According to Born, however, an interpretation of the physical meaning of the ψ functions was shown to be possible in the following way: ψψ̄ (the square of the absolute value of the complex function ψ) is the probability density at the point under consideration in the configuration space of the q_r, at the time t. It is therefore possible to characterize the content of the Schrödinger equation in a manner easily understandable, though not quite accurate, as follows: it determines how the probability density of a statistical ensemble of systems varies in configuration space with the time. Briefly: the Schrödinger equation determines the change of the ψ function of the q_r in time.

It must be mentioned that the results of this theory contain, as limiting values, the results of particle mechanics if the wavelengths occurring in the solution of the Schrödinger problem are everywhere so small that the potential energy varies by a practically infinitesimal amount over a distance of one wavelength in configuration space. Under these conditions the following can be shown: we choose a region G₀ in configuration space which, although large (in every dimension) in relation to the wavelength, is small in relation to the relevant dimensions of configuration space. Under these conditions it is possible to choose a function ψ for an initial time t₀ in such a way that it vanishes outside the region G₀ and behaves, according to the Schrödinger equation, in such a way that it retains this property, at least approximately, also at a later time t, the region G₀ having meanwhile passed over into another region G. In this way one can speak, with a certain degree of approximation, of the motion of the region G as a whole, and one can approximate this motion by the motion of a point in configuration space. This motion then coincides with the motion required by the equations of classical mechanics.

Experiments on interference carried out with particle rays have given a brilliant verification of the wave character of the phenomena of motion as assumed by the theory. In addition, the theory succeeded easily in demonstrating the statistical laws of the transition of a system from one quantum state to another under the action of external forces, which, from the standpoint of classical mechanics, looks like a miracle. The external forces were here represented by small additions to the potential energy acting during brief intervals of time. Now, while in classical mechanics such additions can produce only correspondingly small alterations of the system, in quantum mechanics they produce alterations of any magnitude, however large, but with correspondingly small probability, a result in perfect harmony with experience. Even an understanding of the laws of radioactivity, at least in their broad lines, was provided by this theory.

Probably never before has a theory been developed which has given a key to the interpretation and calculation of such a heterogeneous group of phenomena of experience as has quantum theory. In spite of this, however, I believe that the theory is apt to beguile us into error in our search for a uniform basis for physics because, in my opinion, it is an incomplete representation of real things, although it is the only one which can be built out of the fundamental concepts of force and material point (quantum corrections to classical mechanics). Its incompleteness necessarily leads to the statistical nature (incompleteness) of its laws. I will now give the reasons for this opinion.

I ask first: How far does the ψ function describe a real state of a mechanical system? Let ψ_r be the periodic solutions (ordered in increasing energy values) of the Schrödinger equation. I shall leave open, for the time being, the question of how far the individual ψ_r are complete descriptions of physical states. A system is first in the state ψ₁ of lowest energy ε₁. Then during a finite time a small disturbing force acts upon the system. At a later instant one obtains, from the Schrödinger equation, a ψ function of the form
ψ = Σ_r c_r ψ_r

where the c_r are (complex) constants. If the ψ_r are "normalized", then |c₁| is nearly equal to 1, while |c₂|, etc., are small compared with 1. One may now ask: does ψ describe a real state of the system? If the answer is yes, then we can hardly do otherwise than ascribe to this state a definite energy ε, and, in particular, an energy which exceeds ε₁ by a small amount (in any case ε₁ < ε < ε₂). Such an assumption, however, is at variance with the experiments on electron impact carried out by J. Franck and G. Hertz, if one also takes into account Millikan's demonstration of the discrete nature of electricity. As a matter of fact, these experiments lead to the conclusion that energy values lying between the quantum values do not exist. From this we must conclude that our ψ function does not in any way describe a homogeneous state of the system, but rather a statistical description in which the c_r represent the probabilities of the individual energy values. It therefore seems clear that Born's statistical interpretation of quantum theory is the only possible one. The ψ function does not in any way describe a state which could be that of a single system; it relates rather to many systems, to an "ensemble of systems" in the sense of statistical mechanics. If, except in certain special cases, the ψ function furnishes only statistical data concerning measurable magnitudes, the reason lies not only in the fact that the operation of measuring introduces unknown elements, but also in the fact that the ψ function does not, in any sense, describe the state of one single system. The Schrödinger equation determines the time variations experienced by the ensemble of systems, which may exist independently of any external action on the single system.
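A small numerical sketch (the energy values and coefficients are illustrative, not from the original) shows why reading the |c_r|² as probabilities rescues the argument: the ensemble's mean energy does exceed ε₁ by a little, while each individual system still exhibits only one of the quantum values ε_r:

```python
# Illustrative quantum energy levels eps_1 < eps_2 < eps_3 and
# expansion coefficients with |c_1| close to 1 (weak disturbance).
energies = [1.0, 3.0, 5.0]
coeffs = [0.99, 0.10, 0.05]

probs = [c * c for c in coeffs]       # Born rule: |c_r|^2
total = sum(probs)
probs = [p / total for p in probs]    # normalize the ensemble

mean_energy = sum(p * e for p, e in zip(probs, energies))
print(mean_energy)  # slightly above eps_1 = 1.0, well below eps_2 = 3.0
```

The mean lies strictly between ε₁ and ε₂, yet no single measurement ever yields it; only the statistical ensemble carries that number.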
Such an interpretation also eliminates a paradox recently demonstrated by myself and two collaborators, which relates to the following problem:

Consider a mechanical system consisting of two partial systems A and B which interact with each other only during a limited time. Let the ψ function before their interaction be given. Then the Schrödinger equation will furnish the ψ function after the interaction has taken place. Let us now determine the physical state of the partial system A as completely as possible by measurements. Then quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of A have been measured (for instance, coordinates or momenta). Since there can be only one physical state of B after the interaction, which can reasonably be considered independent of the particular measurement we perform on the system A separated from B, it may be concluded that the ψ function is not unambiguously coordinated to the physical state. That several ψ functions can represent the same physical state of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical state of a single system. Here also the coordination of the ψ function to an ensemble of systems eliminates every difficulty.

The fact that quantum mechanics affords, in such a simple manner, theorems regarding (apparently) discontinuous transitions from one state to another, without actually giving a description of the specific process, is connected with another fact: that the theory, in reality, does not operate with the single system but with a totality of systems.

The coefficients c_r of our first example are hardly altered at all under the action of the external force. With this interpretation of quantum mechanics one can understand why this theory easily explains the fact that weak disturbing forces are able to produce alterations of any magnitude in the physical state of a system. Such disturbing forces produce, indeed, only correspondingly small alterations of the statistical density in the ensemble of systems, and hence only infinitesimally small alterations of the ψ functions, the mathematical description of which offers far fewer difficulties than would the mathematical description of finite alterations experienced by single systems. What happens to the single system remains, it is true, entirely unclarified by this mode of consideration; the statistical approach eliminates this enigmatic happening entirely from the description.

But now I ask: Is there really any physicist who believes that we shall never get any insight into these important alterations in the single systems, into their structure and their causal connections, regardless of the fact that these single events have been brought so close to us thanks to the marvelous inventions of the Wilson chamber and the Geiger counter? To believe this is logically possible without contradiction, but it is so contrary to my scientific instinct that I cannot forego the search for a more complete conception.

To these considerations we must add those of another kind, which also seem to indicate that the methods introduced by quantum mechanics are not likely to give a useful basis for the whole of physics. In the Schrödinger equation, absolute time, and also the potential energy, play a decisive role, while these two concepts have been recognized by the theory of relativity as inadmissible in principle. If one wishes to escape from this difficulty, one must found the theory upon field and field laws instead of upon forces of interaction. This leads us to apply the statistical methods of quantum mechanics to fields, that is, to systems of infinitely many degrees of freedom. Although the attempts so far made have been restricted to linear equations which, as we know from the general theory of relativity, are insufficient, the complications met up to now are already terrifying. They would certainly multiply if one wished to obey the requirements of the general theory of relativity, of whose justification in principle nobody doubts.

To be sure, it has been pointed out that the introduction of a space-time continuum may be considered as contrary to nature in view of the molecular structure of everything which happens on a small scale. It is maintained that perhaps the success of the Heisenberg method points to a purely algebraic method of description of nature, that is, to the elimination of continuous functions from physics; in that case, however, the space-time continuum would also have to be given up. It is not unimaginable that human ingenuity will some day find methods which will make it possible to proceed along such a path. At the present time, however, such a program looks like an attempt to breathe in empty space.

There is no doubt that quantum mechanics has seized hold of a good deal of truth, and that it will be a touchstone for any future theoretical basis, in that it must be deducible from that basis as a limiting case, just as electrostatics is deducible from Maxwell's equations of the electromagnetic field, or as thermodynamics is deducible from classical mechanics. However, I do not believe that quantum mechanics can serve as a starting point in the search for this basis, just as, vice versa, one could not arrive at the foundations of mechanics from thermodynamics (or statistical mechanics, respectively).

In view of this situation, it seems entirely justifiable to consider seriously the question whether the basis of field physics can be brought into harmony with the quantum phenomena in some way or other. Is this not the only basis which, with the mathematical instruments available at present, can be adapted to the requirements of the general theory of relativity? Most physicists today believe such an attempt to be hopeless. This belief may have its root in the unjustified assumption that such a theory would, as a first approximation, have to lead to the equations of classical mechanics for the motion of corpuscles, or at least to total differential equations. As a matter of fact, up to now we have never succeeded in obtaining a field-theoretical representation of corpuscles free of singularities, and we can say nothing a priori about the behavior of such entities. One thing, however, is certain: if a field theory results in a representation of corpuscles free of singularities, then the behavior of these corpuscles in time is determined solely by the differential equations of the field.
#### VI. RELATIVITY THEORY AND CORPUSCLES

I shall now show that, according to the general theory of relativity, there exist singularity-free solutions of the field equations which can be interpreted as representing corpuscles. I restrict myself here to neutral particles because, in another recent publication written in collaboration with Dr. Rosen, I have treated this question in detail, and because the essentials of the problem can be fully described in the case of such particles.

The gravitational field is entirely described by the tensor gµν. In the three-index symbols Γσµν there appear also the contravariant gµν, which are defined as the minors of the gµν divided by the determinant g (= |gαβ|). In order that the Rᵢₖ shall be defined and finite, it is not sufficient that there shall be, in the neighborhood of every point of the continuum, a coordinate system in which the gµν and their first differential quotients are continuous and differentiable; it is also necessary that the determinant g shall nowhere vanish. This last restriction disappears, however, if one replaces the differential equations Rᵢₖ = 0 by g²Rᵢₖ = 0, whose left-hand sides are whole rational functions of the gᵢₖ and of their derivatives.
These equations have the centrally symmetric solution given by Schwarzschild:

ds² = − (1 − 2m/r)⁻¹ dr² − r²(dθ² + sin²θ dφ²) + (1 − 2m/r) dt²

This solution has a singularity at r = 2m, since the coefficient of dr² (that is, g₁₁) becomes infinite on this hypersurface. If, however, we replace the variable r by ρ, defined by the equation

ρ² = r − 2m

we obtain

ds² = − 4(ρ² + 2m) dρ² − (ρ² + 2m)²(dθ² + sin²θ dφ²) + ρ²/(ρ² + 2m) dt²
This solution behaves regularly for all values of ρ. The vanishing of the coefficient of dt² (that is, g₄₄) for ρ = 0 does, it is true, entail the consequence that the determinant g vanishes for this value; but, with the method of writing the field equations actually adopted, this does not constitute a singularity.

If ρ varies from −∞ to +∞, then r varies from +∞ to r = 2m and then back to +∞, while for values of r smaller than 2m there are no corresponding real values of ρ. Hence the Schwarzschild solution becomes a regular solution by representation of the physical space as consisting of two identical "sheets" in contact along the hypersurface ρ = 0 (that is, r = 2m), on which the determinant g vanishes. Let us call such a connection between the two (identical) sheets a "bridge". Hence the existence of such a bridge between the two sheets in the finite realm corresponds to the existence of a material neutral particle which is described in a manner free from singularities.
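The effect of the substitution can be checked numerically (an added sketch; the value m = 1 is an arbitrary choice): the Schwarzschild radial coefficient g₁₁ = (1 − 2m/r)⁻¹ blows up as r approaches 2m, whereas in the new variable, with r = ρ² + 2m and dr = 2ρ dρ, the radial coefficient becomes 4(ρ² + 2m), which stays finite at the bridge ρ = 0:

```python
m = 1.0  # arbitrary mass constant for illustration

def g11_r(r):
    """Radial metric coefficient in the r coordinate: diverges at r = 2m."""
    return 1.0 / (1.0 - 2.0 * m / r)

def g11_rho(rho):
    """Radial coefficient after r = rho**2 + 2m: regular for all rho."""
    return 4.0 * (rho * rho + 2.0 * m)

# Chain-rule consistency: g11_r(r) * (dr/drho)**2 == g11_rho(rho).
rho = 0.5
r = rho * rho + 2.0 * m
print(g11_r(r) * (2.0 * rho) ** 2, g11_rho(rho))  # both equal 9.0 for m = 1

print(g11_r(2.0 + 1e-6))  # enormous just outside r = 2m
print(g11_rho(0.0))       # finite value 8m at the bridge itself
```

The same ρ-coefficient is reached from ρ > 0 and ρ < 0 alike, mirroring the two identical sheets joined at ρ = 0.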
The solution of the problem of the motion of neutral particles evidently amounts to the discovery of such solutions of the gravitational equations (written free of denominators) as contain several bridges.

The conception sketched above corresponds, a priori, to the atomistic structure of matter insofar as the "bridge" is by its nature a discrete element. Moreover, we see that the mass constant m of the neutral particles must necessarily be positive, since no singularity-free solution can correspond to the Schwarzschild solution for a negative value of m. Only the examination of the several-bridge problems can show whether or not this theoretical method furnishes an explanation of the empirically demonstrated equality of the masses of the particles found in nature, and whether it takes into account the facts which quantum mechanics has so wonderfully comprehended.

In an analogous manner, it is possible to demonstrate that the combined equations of gravitation and electricity (with appropriate choice of the sign of the electrical member in the gravitational equations) produce a singularity-free bridge representation of the electric corpuscle. The simplest solution of this kind is that for an electric particle without ponderable mass.

So long as the considerable mathematical difficulties involved in solving the several-bridge problems are not overcome, nothing can be said concerning the usefulness of the theory from the physicist's point of view. However, it constitutes, as a matter of fact, the first attempt at the consistent elaboration of a field theory which presents a possibility of explaining the properties of matter. In favor of this attempt one may also add that it is based on the simplest relativistic field equations known today.
SUMMARY
Physics constitutes a logical system of thought which is in a state of evolution, whose basis cannot be distilled, as it were, from experience by an inductive method, but can only be arrived at by free invention. The justification (truth content) of the system rests in the verification of its conclusions by sense experiences, whereby the relations of the latter to the former can only be comprehended intuitively. The evolution of science proceeds in the direction of increasing simplicity of the logical basis. In order to approach this goal further, we must accept that the logical basis departs more and more from the facts of experience, and that the path of thought from the foundations of physics to those conclusions which correlate with sense experiences becomes continually longer and more difficult.
Our aim has been to sketch, as briefly as possible, the development of the fundamental concepts in their dependence upon the facts of experience and upon the striving toward the inner perfection of the system. These considerations are intended to illuminate the present state of the problem as I see it. (It is unavoidable that a schematic historical exposition should be tinged with subjectivism.)
I try to demonstrate how the concepts of bodily object, space, and subjective and objective time are connected with one another and with the nature of our experience. In classical mechanics the concepts of space and time become independent. The concept of the bodily object is replaced in the foundations of physics by the concept of the material point, by which means mechanics becomes fundamentally atomistic. Light and electricity produce insurmountable difficulties when one attempts to make mechanics the basis of all physics. We are thus led to the field theory of electricity and, later on, to the attempt to base the whole of physics upon the concept of the field (after an attempted compromise with classical mechanics). This effort leads to the theory of relativity (evolution of the notions of space and time into that of a continuum with a metric structure).
I try to demonstrate, furthermore, why in my opinion the quantum theory does not seem capable of furnishing an adequate foundation for physics: contradictions arise as soon as one attempts to regard the quantum-theoretical description as a _complete_ description of the individual physical system or process.
On the other hand, the field theory is as yet unable to explain the molecular structure of matter and of quantum phenomena. I have shown, however, that the conviction that field theory is unable to solve these problems by its methods rests upon prejudice.
### 5
### THE FOUNDATIONS OF THEORETICAL PHYSICS*
Science is the attempt to make the chaotic diversity of our sense experience correspond to a logically uniform system of thought. In this system single experiences must be correlated with the theoretic structure in such a way that the resulting coordination is unique and convincing.
The sense experiences are the given subject matter. But the theory that shall interpret them is man-made. It is the result of an extremely laborious process of adaptation: hypothetical, never completely final, always subject to question and doubt.
The scientific way of forming concepts differs from that which we use in our daily life, not basically, but in the more precise definition of concepts and conclusions, the more painstaking and systematic choice of experimental material, and the greater logical economy. By this last we mean the effort to reduce all concepts and correlations to as few logically independent basic concepts and axioms as possible.
What we call physics comprises that group of natural sciences which base their concepts on measurements, and whose concepts and propositions lend themselves to mathematical formulation. Its realm is accordingly defined as that part of the sum total of our knowledge which is capable of being expressed in mathematical terms. With the progress of science, the realm of physics has so expanded that it seems to be limited only by the limitations of the method itself.
The larger part of physical research is devoted to the development of the various branches of physics, in each of which the object is the theoretical understanding of more or less restricted fields of experience, and in each of which the laws and concepts remain related to experience as closely as possible. It is this department of science, with its ever-growing specialization, which has revolutionized practical life in the last centuries and given rise to the possibility that man may at last be freed from the burden of hard physical toil.
On the other hand, from the very beginning there has always been present the attempt to find a unifying theoretical basis for all these single sciences, consisting of a minimum of concepts and fundamental relationships, from which all the concepts and relationships of the single disciplines might be derived by logical process. This is what we mean by the search for a foundation of the whole of physics. The confident belief that this ultimate goal may be reached is the chief source of the passionate devotion which has always animated the researcher. It is in this sense that the following observations are devoted to the foundations of physics.
From what has been said it is clear that the word "foundations" in this connection does not mean something analogous in all respects to the foundations of a building. Logically considered, of course, the various single laws of physics rest upon this foundation. But whereas a building may be seriously damaged by a heavy storm or flood, yet its foundations remain intact, in science the logical foundation is always in greater peril from new experiences or new knowledge than are the branch disciplines with their closer experimental contacts. In the connection of the foundation with all the single parts lies its great significance, but likewise its greatest danger in the face of any new factor. When we realize this, we are led to wonder why the so-called revolutionary epochs of the science of physics have not more often and more completely changed its foundation than has actually been the case.
The first attempt to lay a uniform theoretical foundation was the work of Newton. In his system everything is reduced to the following concepts: (1) mass points with invariable mass; (2) action at a distance between any pair of mass points; (3) a law of motion for the mass point. There was not, strictly speaking, any all-embracing foundation, because an explicit law was formulated only for the action at a distance of gravitation, while for other actions at a distance nothing was established a priori except the law of equality of _actio_ and _reactio_. Moreover, Newton himself fully understood that time and space were essential elements, as physically effective factors, of his system, if only by implication.
This Newtonian basis proved eminently fruitful and was regarded as final up to the end of the nineteenth century. It not only gave results for the movements of the heavenly bodies, down to the most minute details, but also furnished a theory of the mechanics of discrete and continuous masses, a simple explanation of the principle of the conservation of energy, and a complete and brilliant theory of heat. The explanation of the facts of electrodynamics within the Newtonian system was more forced; least convincing of all, from the very beginning, was the theory of light.
It is not surprising that Newton would not listen to a wave theory of light, for such a theory was most unsuited to the theoretical foundation he had laid. The assumption that space was filled with a medium consisting of material points that propagated light waves without exhibiting any other mechanical properties must have seemed to him quite artificial. The strongest empirical arguments for the wave nature of light — fixed speeds of propagation, interference, diffraction, polarization — were either unknown or else not known in any well-ordered synthesis. He had reason to stick to his corpuscular theory of light.
During the nineteenth century the dispute appeared to be settled in favor of the wave theory. Yet no serious doubt of the mechanical foundation of physics arose, in the first place because nobody knew where to find a foundation of another sort. Only slowly, under the irresistible pressure of facts, did a new foundation of physics develop: field physics.
From Newton's time on, the theory of action at a distance was constantly found artificial. Efforts were not lacking to explain gravitation by a kinetic theory, that is, on the basis of collision forces of hypothetical mass particles. But the attempts were superficial and bore no fruit. The strange part played by space (or the inertial system) within the mechanical foundation was also clearly recognized, and criticized with especial clarity by Ernst Mach.
The great change was brought about by Faraday, Maxwell, and Hertz — as a matter of fact half unconsciously and against their will. All three of them, throughout their lives, considered themselves adherents of the mechanical theory. Hertz had found the simplest form of the equations of the electromagnetic field, and declared that any theory leading to these equations was Maxwellian theory. Yet toward the end of his short life he wrote a paper in which he presented as the foundation of physics a mechanical theory freed from the concept of force.
For us, who took in Faraday's ideas, so to speak, with our mother's milk, it is hard to appreciate their greatness and audacity. Faraday must have grasped with unerring instinct the artificial nature of all attempts to refer electromagnetic phenomena to actions at a distance between electric particles reacting on each other. How was each single iron filing, among a heap scattered on a sheet of paper, to know of the electric particles running around in a nearby conductor? All these electric particles together seemed to create in the surrounding space a condition which in turn produced a certain order in the filings. Faraday was convinced that these spatial states, today called fields, would furnish the clue to the mysterious electromagnetic actions if their geometrical structure and interdependent action were once rightly grasped. He conceived these fields as states of mechanical stress in a space-filling medium, similar to the states of stress in an elastically deformed body; for at that time this was the only way one could conceive of states apparently distributed continuously in space. The peculiar type of mechanical interpretation of these fields remained in the background — a sort of placation of the scientific conscience in view of the mechanical tradition of Faraday's time. With the help of these new field concepts Faraday succeeded in forming a qualitative conception of the whole complex of electromagnetic effects discovered by him and his predecessors. The precise formulation of the space-time laws of those fields was the work of Maxwell. Imagine his feelings when the differential equations he had formulated showed him that electromagnetic fields spread in the form of polarized waves, and at the speed of light! Few men in the world have been granted such an experience.
It is hardly likely that at that thrilling moment he suspected that the riddling nature of light, apparently so completely solved, would continue to baffle succeeding generations. Meanwhile, it took physicists some decades to grasp the full significance of Maxwell's discovery, so bold was the leap his genius forced upon the conceptions of his fellow workers! Only after Hertz had demonstrated experimentally the existence of electromagnetic waves did resistance to the new theory break down.
But if the electromagnetic field could exist as a wave independent of the material source, then the electrostatic interaction could no longer be explained as action at a distance. And what was true for electrical action could not be denied for gravitation. Everywhere Newton's actions at a distance gave way to fields spreading with finite velocity.
Of Newton's foundation there now remained only the material mass points subject to the law of motion. But J. J. Thomson pointed out that, according to Maxwell's theory, an electrically charged body in motion must possess a magnetic field whose energy acts precisely as does an increase of the body's kinetic energy. If, then, a part of the kinetic energy consists of field energy, might that not be true of the whole of it? Might not the basic property of matter, its inertia, be explained within the field theory? The question led to the problem of an interpretation of matter in terms of field theory, the solution of which would furnish an explanation of the atomic structure of matter. It was soon realized that Maxwell's theory could not accomplish such a program. Since then many scientists have zealously sought to complete the field theory by some generalization that would comprise a theory of matter; but so far such efforts have not been crowned with success. In order to construct a theory, it is not enough to have a clear conception of the goal; one must also have a formal point of view which will sufficiently restrict the unlimited variety of possibilities. So far this has not been found; accordingly, the field theory has not succeeded in furnishing a foundation for the whole of physics.
For several decades most physicists clung to the conviction that a mechanical substructure would be found for Maxwell's theory. But the unsatisfactory results of their efforts led to gradual acceptance of the new field concepts as irreducible fundamentals; in other words, physicists resigned themselves to giving up the idea of a mechanical foundation.
Thus physicists held to a field-theory program. But it could not be called a foundation, since nobody could tell whether a consistent field theory could ever explain gravitation, on the one hand, and the elementary components of matter, on the other. In this state of affairs it was necessary to think of material particles as mass points subject to Newton's laws of motion. This was the procedure of Lorentz in creating his electron theory and the theory of the electromagnetic phenomena of moving bodies.
Such was the point at which fundamental conceptions had arrived at the turn of the century. Immense progress was made in the theoretical penetration and understanding of whole groups of new phenomena; but the establishment of a unified foundation for physics seemed remote indeed. And this state of things has even been aggravated by subsequent developments. The development during the present century is characterized by two theoretical systems essentially independent of each other: the theory of relativity and the quantum theory. The two systems do not directly contradict each other, but they seem little adapted to fusion into one unified theory. We must briefly discuss the basic ideas of these two systems.
The theory of relativity arose out of efforts to improve, with reference to logical economy, the foundation of physics as it existed at the turn of the century. The so-called special or restricted theory of relativity is based on the fact that Maxwell's equations (and with them the law of propagation of light in empty space) are converted into equations of the same form under Lorentz transformations. This formal property of the Maxwell equations is supplemented by our fairly secure empirical knowledge that the laws of physics are the same with respect to all inertial systems. This leads to the result that the Lorentz transformation — applied to space and time coordinates — must govern the transition from one inertial system to another. The content of the special theory of relativity can accordingly be summarized in one sentence: all natural laws must satisfy the condition of being covariant with respect to Lorentz transformations. From this it follows that the simultaneity of two distant events is not an invariant concept, and that the dimensions of rigid bodies and the rates of clocks depend upon their state of motion. A further consequence was a modification of Newton's law of motion in cases where the speed of a given body was not small compared with the speed of light. There followed also the principle of the equivalence of mass and energy, by which the laws of conservation of mass and of energy become one and the same. Once it was shown that simultaneity was relative and depended on the frame of reference, every possibility of retaining actions at a distance within the foundations of physics disappeared, since that concept presupposed the absolute character of simultaneity (it had to be possible to state the positions of two interacting mass points "at the same instant").
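For concreteness, the Lorentz transformation and the mass-energy equivalence mentioned above can be written in their standard textbook forms (the essay itself gives no formulas), here for a boost with velocity _v_ along the x-axis:

```latex
% Lorentz boost with velocity v along the x-axis:
x' = \gamma \, (x - v t), \qquad
t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} .
% Equivalence of mass and energy:
E = m c^2 .
```

For _v_ small compared with _c_, γ → 1 and the transformation reduces to the Galilean one of classical mechanics.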
The general theory of relativity owes its origin to the attempt to explain a fact known since Galileo's and Newton's time but hitherto eluding all theoretical interpretation: the inertia and the weight of a body, in themselves two entirely distinct things, are measured by one and the same constant, the mass. From this correspondence it follows that it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether its motion is straight and uniform and the observed effects are due to a gravitational field (this is the equivalence principle of the general theory of relativity). It shatters the concept of the inertial system as soon as gravitation enters in. It may be remarked here that the inertial system is a weak point of Galilean-Newtonian mechanics, for it presupposes a mysterious property of physical space conditioning the kind of coordinate systems for which the law of inertia and the Newtonian law of motion hold good.
These difficulties can be avoided by the following postulate: natural laws are to be formulated in such a way that their form is identical for coordinate systems in any kind of state of motion. To accomplish this is the task of the general theory of relativity. On the other hand, we deduce from the restricted theory the existence of a Riemannian metric within the space-time continuum which, according to the equivalence principle, describes both the gravitational field and the metrical properties of space. Assuming that the field equations of gravitation are differential equations of the second order, the field law is clearly determined.
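In now-standard notation (again an addition to the text, which states the result only in words), the Riemannian metric and the second-order field law referred to here read:

```latex
% Riemannian metric on the space-time continuum:
ds^2 = g_{\mu\nu} \, dx^{\mu} dx^{\nu} .
% Second-order field equations of gravitation
% (\kappa a constant, T_{\mu\nu} the matter tensor):
R_{\mu\nu} - \tfrac{1}{2} \, g_{\mu\nu} R = \kappa \, T_{\mu\nu} .
```

The ten components _g_ μν play the double role the text describes: they fix the metrical properties of space and at the same time constitute the gravitational field.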
Aside from this result, the theory frees field physics from the disability it suffered from, in common with Newtonian mechanics, of ascribing to space those independent physical properties which had heretofore been concealed by the use of an inertial system. But it cannot be claimed that those parts of the general theory of relativity which can today be regarded as final have furnished physics with a complete and satisfactory foundation. In the first place, the total field appears in it to be composed of two logically unconnected parts, the gravitational and the electromagnetic. And in the second place, this theory, like the earlier field theories, has not up to now supplied an explanation of the atomistic structure of matter. This failure probably has some connection with the fact that so far it has contributed nothing to the understanding of quantum phenomena. To take in these phenomena, physicists have been driven to adopt entirely new methods, the basic characteristics of which we shall now discuss.
In the year 1900, in the course of a purely theoretical investigation, Max Planck made a very remarkable discovery: the law of radiation of bodies as a function of temperature could not be derived solely from the laws of Maxwellian electrodynamics. To arrive at results consistent with the relevant experiments, radiation of a given frequency had to be treated as though it consisted of energy atoms of the individual energy _hν_, where _h_ is Planck's universal constant. During the years following, it was shown that light was everywhere produced and absorbed in such energy _quanta_. In particular, Niels Bohr was able largely to understand the structure of the atom on the assumption that atoms can have only discrete energy values, and that the transitions between them are connected with the emission or absorption of such an energy _quantum_. This threw new light on the fact that in their gaseous state elements and their compounds radiate and absorb only light of certain sharply defined frequencies. All this was quite inexplicable within the frame of the theories existing until then. It was clear that, at least in the field of atomistic phenomena, the character of everything that happens is determined by discrete states and by apparently discontinuous transitions between them, Planck's constant _h_ playing a decisive role.
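The two quantitative statements in this paragraph — Planck's energy quantum and Bohr's discrete levels — are usually written as follows (the notation _E_ m, _E_ n for the level energies is supplied here, not taken from the essay):

```latex
% Energy quantum of radiation of frequency \nu:
E = h\nu .
% Bohr's frequency condition: light of frequency \nu is emitted
% or absorbed in a transition between discrete levels E_m > E_n:
h\nu = E_m - E_n .
```

The sharply defined spectral frequencies of gases follow at once: only differences of the discrete level energies can appear.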
The next step was taken by de Broglie. He asked himself how the discrete states could be understood with the aid of the current concepts, and hit upon a parallel with stationary waves, as for instance the proper frequencies of organ pipes and strings in acoustics. True, wave actions of the kind here required were unknown; but they could be constructed, and their mathematical laws formulated, by employing Planck's constant _h_. De Broglie conceived an electron revolving about the atomic nucleus as being connected with such a hypothetical wave train, and made intelligible to some extent the discrete character of Bohr's "permitted" paths by the stationary character of the corresponding waves.
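De Broglie's parallel with standing waves can be made quantitative in the usual way (a standard reconstruction, not spelled out in the essay): a wave of wavelength λ = _h_/_p_ fits around a circular orbit only for a whole number of wavelengths, which reproduces Bohr's permitted paths.

```latex
% De Broglie wavelength of a particle of momentum p:
\lambda = \frac{h}{p} .
% Standing-wave condition on a circular orbit of radius r:
2\pi r = n \lambda , \qquad n = 1, 2, 3, \dots
% Combining the two yields Bohr's quantized angular momenta:
m v r = n \, \frac{h}{2\pi} = n \hbar .
```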
Now in mechanics the motion of material points is determined by the forces or fields of force acting upon them. Hence it was to be expected that those fields of force would also influence de Broglie's wave fields in an analogous way. Erwin Schrödinger showed how this influence was to be taken into account, reinterpreting by an ingenious method certain formulations of classical mechanics. He even succeeded in expanding the wave-mechanical theory to a point where, without the introduction of any additional hypotheses, it became applicable to any mechanical system consisting of an arbitrary number of mass points, that is to say, possessing an arbitrary number of degrees of freedom. This was possible because a system of _n_ mass points is, to a considerable degree, mathematically equivalent to one single mass point in a space of 3 _n_ dimensions.
On the basis of this theory there was obtained a surprisingly good representation of an immense variety of facts which otherwise appeared entirely incomprehensible. But on one point, curiously enough, there was failure: it proved impossible to associate with these Schrödinger waves definite motions of the mass points — and that, after all, had been the original purpose of the whole construction.
The difficulty appeared insurmountable until it was overcome by Born in a way as simple as it was unexpected. The de Broglie-Schrödinger wave fields were not to be interpreted as a mathematical description of how an event actually takes place in time and space, though, of course, they have reference to such an event. Rather, they are a mathematical description of what we can actually know about the system. They serve only to make statistical statements and predictions of the results of all measurements which we can carry out upon the system.
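The extension to _n_ mass points described above is, in modern notation (which the essay does not use), a single wave equation on the 3 _n_-dimensional configuration space:

```latex
% Time-dependent Schroedinger equation for n mass points,
% \psi a function on the 3n-dimensional configuration space:
i\hbar \, \frac{\partial \psi}{\partial t}
  = \left[ -\sum_{k=1}^{n} \frac{\hbar^{2}}{2 m_{k}} \nabla_{k}^{2}
           + V(q_{1}, \dots, q_{3n}) \right]
    \psi(q_{1}, \dots, q_{3n}, t) .
```

The potential _V_ is how the "fields of force" of classical mechanics enter the wave theory, exactly as the paragraph anticipates.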
Let me illustrate these general features of quantum mechanics by means of a simple example: we shall consider a mass point kept inside a restricted region _G_ by forces of finite strength. If the kinetic energy of the mass point is below a certain limit, then, according to classical mechanics, the mass point can never leave the region _G_. But according to quantum mechanics, the mass point, after a period not immediately predictable, is able to leave the region _G_, in an unpredictable direction, and escape into surrounding space. This case, according to Gamow, is a simplified model of radioactive disintegration.
The quantum-theoretical treatment of this case is as follows: at the time _t_ ₀ we have a Schrödinger wave system entirely inside _G_. But from the time _t_ ₀ onward, the waves leave _G_ in all directions, in such a way that the amplitude of the outgoing wave is small compared with the initial amplitude of the wave system inside _G_. The further these outside waves spread, the more the amplitude of the waves inside _G_ diminishes, and correspondingly the intensity of the later waves issuing from _G_. Only after an infinite time has passed is the wave supply inside _G_ exhausted, while the outside wave has spread over an ever-increasing space.
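The slowness of this leakage is governed by a barrier-penetration factor. In the standard semiclassical (WKB) approximation — a later textbook result, not derived in the essay — the transmission probability for a particle of energy _E_ through the confining barrier _V_(_x_) is:

```latex
% Semiclassical (WKB) barrier-penetration factor, with x_1, x_2
% the classical turning points where V(x) = E:
T \approx \exp\!\left( -\frac{2}{\hbar}
    \int_{x_1}^{x_2} \sqrt{\,2m \left[ V(x) - E \right]\,} \; dx \right) .
```

Because _T_ is exponentially small, the amplitude inside _G_ decays only very slowly, which is Gamow's explanation of the enormous range of radioactive lifetimes.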
But what has this wave process to do with the first object of our interest, the particle originally enclosed in _G_? To answer this question, we must imagine some arrangement which will permit us to carry out measurements on the particle. For instance, let us imagine somewhere in the surrounding space a screen so made that the particles which come into contact with it stick to it. Then from the intensity of the waves hitting the screen at some point we draw conclusions as to the probability of the particle hitting the screen there at that time. As soon as the particle has hit any particular point of the screen, the whole wave field loses all its physical meaning; its only purpose was to make probabilistic predictions as to the place and time of the particle hitting the screen (or, for instance, its momentum at the time when it hits the screen).
All other cases are analogous. The aim of the theory is to determine the probability of the results of a measurement carried out upon a system at a given time. It makes no attempt to give a mathematical representation of what actually happens, or is actually present, in space and time. On this point the quantum theory of today differs fundamentally from all previous theories of physics, mechanistic theories as well as field theories. Instead of a model description of actual space-time events, it gives the probability distributions for possible measurements as functions of time.
It must be admitted that the new theoretical conception owes its origin not to any flight of fancy but to the compelling force of the facts of experience. All attempts to represent the particle and wave features displayed in the phenomena of light and matter by direct recourse to a space-time model have so far failed. And Heisenberg has convincingly shown, from an empirical point of view, that any decision as to a rigorously deterministic structure of nature is definitely ruled out because of the atomistic structure of our experimental apparatus. Thus it is probably out of the question that any future knowledge can compel physics to reject our present statistical theoretical foundation in favor of a deterministic one which would deal directly with physical reality. Logically the problem seems to offer two possibilities, between which we are in principle given a choice. In the end the choice will be made according to which kind of description yields, logically speaking, the formulation of the simplest foundation. At present we are quite without any deterministic theory directly describing the events themselves and in consonance with the facts.
For the time being, we have to admit that we do not possess any general theoretical basis for physics which can be regarded as its logical foundation. The field theory, so far, has failed in the molecular sphere. It is agreed on all hands that the only principle which could serve as the basis of quantum theory would be one that constituted a translation of the field theory into the scheme of quantum statistics. Whether this will actually come about in a satisfactory manner, nobody can venture to say.
Some physicists, among them myself, cannot believe that we must abandon, actually and forever, the idea of direct representation of physical reality in space and time; or that we must accept the view that events in nature are analogous to a game of chance. It is open to every man to choose the direction of his striving; and also every man may draw comfort from Lessing's fine saying, that the search for truth is more precious than its possession.
### 6
### THE COMMON LANGUAGE OF SCIENCE*
El primer paso hacia el lenguaje consistió en vincular signos conmutables con impresiones sensoriales. Muy probablemente todos los animales sociales han llegado a este tipo primitivo de comunicación, al menos en cierta medida. Un mayor desarrollo se alcanza cuando se introducen y se entienden nuevos signos que establecen relaciones entre aquellos primeros signos que denotan impresiones sensoriales. Alcanzada esta fase ya es posible informar de series de sensaciones de cierta complejidad: podemos decir que ha nacido el lenguaje. Para que el lenguaje lleve a la comprensión debe haber, por una parte, reglas que conciernen a las relaciones entre los signos y, por otra, una correspondencia estable entre signos e impresiones. En su niñez los individuos conectados por el mismo lenguaje captan estas reglas y relaciones básicamente por intuición. Cuando el hombre se hace consciente de las reglas concernientes a las relaciones se establece la denominada gramática del lenguaje.
En una primera fase las palabras pueden corresponder directamente a impresiones. En una fase posterior se pierde esta conexión directa en la medida en que algunas palabras transmiten relaciones con percepciones sólo si se usan en conexión con otras palabras (por ejemplo, «es», «o», «cosa»). Entonces son grupos de palabras antes que palabras individuales los que se refieren a percepciones. Cuando el lenguaje se hace así parcialmente independiente del fondo de impresiones se gana una mayor coherencia interna.
Sólo en este desarrollo posterior, donde se hace frecuente uso de los denominados conceptos abstractos, el lenguaje se convierte en un instrumento de razonamiento en el verdadero sentido de la palabra. Pero es también este desarrollo el que convierte el lenguaje en una peligrosa fuente de errores y engaños. Todo depende del grado en que las palabras y combinaciones de palabras guardan correspondencia con el mundo de las impresiones.
¿Qué es lo que produce una conexión tan íntima entre lenguaje y pensamiento? ¿Es que no hay pensamiento sin lenguaje, es decir, en conceptos y combinaciones de conceptos para los que no son necesarias las palabras? ¿No nos hemos esforzado todos, alguna vez, en buscar palabras cuando la conexión entre «cosas» ya estaba clara?
Podríamos sentirnos inclinados a atribuir al acto de pensar una completa independencia del lenguaje si los individuos formaran o fueran capaces de formar sus conceptos sin la guía verbal de su entorno. Pero es muy probable que el desarrollo mental de un individuo, que crece en tales condiciones, fuera muy pobre. Por ello podemos concluir que el desarrollo mental del individuo y su modo de formar conceptos depende en alto grado del lenguaje. Esto nos hace comprender hasta qué punto el mismo lenguaje significa la misma mentalidad. En este sentido pensamiento y lenguaje están unidos.
¿Qué distingue el lenguaje de la ciencia del lenguaje tal como normalmente entendemos este término? ¿Cómo es que el lenguaje científico es internacional? Lo que busca la ciencia es una máxima precisión y claridad de conceptos en lo que concierne a su relación mutua y su correspondencia con datos sensoriales. A modo de ilustración tomemos el lenguaje de la geometría euclidiana y el álgebra. Éstas trabajan con un pequeño número de conceptos (o, en el segundo caso, símbolos) introducidos de manera independiente, tales como el número entero, la línea recta o el punto, y con signos que denotan las operaciones fundamentales, es decir, las conexiones entre dichos conceptos fundamentales. Ésta es la base para la construcción (o, en el segundo caso, definición) de todos los demás enunciados o conceptos. La conexión entre conceptos y enunciados, por una parte, y los datos sensoriales, por otra, se establece a través de actos de recuento y medida cuyo modo de realización está suficientemente bien determinado.
El carácter supranacional de los conceptos científicos y del lenguaje científico se debe al hecho de que han sido establecidos por los mejores cerebros de todos los países y todas las épocas. En soledad y, pese a todo, en un esfuerzo colaborativo en lo que respecta al efecto final, ellos crearon las herramientas espirituales para las revoluciones técnicas que han transformado la vida de la humanidad en los últimos siglos. Su sistema de conceptos ha servido de guía en el desconcertante caos de percepciones, y así hemos aprendido a captar verdades generales a partir de observaciones particulares.
¿Qué esperanzas y temores implica el método científico para la humanidad? No creo que ésta sea la forma correcta de plantear la pregunta. Lo que pueda producir esta herramienta en manos del hombre depende por completo de la naturaleza de los objetivos vivos en dicha humanidad. Una vez que estos objetivos existen, el método científico proporciona medios para realizarlos. Pero el método no puede proporcionar los objetivos mismos. El propio método científico no habría conducido a ninguna parte, ni siquiera habría nacido, sin una apasionada lucha por una comprensión clara.
Creo que nuestra época se caracteriza por la perfección de medios y la confusión de fines. Si buscamos sincera y apasionadamente la seguridad, el bienestar y el libre desarrollo de los talentos de todos los hombres, no nos faltarán los medios de acercarnos a dicho estado. Incluso si sólo una pequeña parte de la humanidad se esfuerza en alcanzar tales objetivos, su superioridad se hará manifiesta a largo plazo.
### 7
### LAS LEYES DE LA CIENCIA Y LAS LEYES DE LA ÉTICA*
La ciencia busca relaciones que se considera que existen independientemente de la búsqueda individual. Esto incluye el caso en el que el propio hombre es el sujeto. O también el caso en que los sujetos de los enunciados científicos son conceptos creados por nosotros mismos, como sucede en matemáticas. No se supone que dichos conceptos corresponden necesariamente a objetos del mundo exterior. Sin embargo, todos los enunciados y leyes científicos tienen una característica en común: son «verdaderos o falsos» (adecuados o inadecuados). En un sentido muy general, nuestra reacción a ellos es «sí» o «no».
El modo científico de pensar tiene una característica adicional. Los conceptos que utiliza para construir sus sistemas coherentes no expresan emociones. Para el científico sólo hay «ser», pero no hay desear, no hay valorar, no hay bien ni mal; no hay propósito. Mientras permanezcamos dentro del ámbito de la ciencia propiamente dicha, nunca podremos encontrar una sentencia del tipo: «No mentirás». Hay una especie de restricción puritana en el científico que busca la verdad: se mantiene apartado de cualquier voluntarismo o emotividad. Dicho sea de paso, este rasgo es el resultado de un lento desarrollo, peculiar del moderno pensamiento occidental.
Podría parecer que esto implica que el pensamiento lógico es irrelevante para la ética. Los enunciados científicos sobre hechos y relaciones no pueden generar directrices éticas. Sin embargo, las directrices éticas pueden hacerse racionales y coherentes mediante el pensamiento lógico y el conocimiento empírico. Si podemos estar de acuerdo en algunas proposiciones éticas fundamentales, entonces otras proposiciones éticas pueden derivarse de ellas con tal de que las premisas originales estén enunciadas de forma suficientemente precisa. Tales premisas éticas desempeñan un papel similar al que desempeñan los axiomas en matemáticas.
Por esto es por lo que no pensamos que carezca de sentido plantear preguntas tales como: ¿Por qué no debemos mentir? Pensamos que tales preguntas tienen sentido porque en todas las discusiones de este tipo se dan tácitamente por aceptadas algunas premisas éticas. Entonces nos sentimos satisfechos cuando conseguimos rastrear la directriz ética en cuestión hasta estas premisas básicas. En el caso de mentir esto podría hacerse de alguna manera como ésta: mentir destruye la confianza en las afirmaciones de otras personas; sin esa confianza, la cooperación social se hace imposible, o al menos difícil; pero dicha cooperación es esencial para hacer la vida humana posible y tolerable. Esto significa que hemos rastreado la regla «No mentirás» hasta las demandas: «La vida humana debe ser preservada» y «El dolor y la pena deben ser reducidos tanto como sea posible».
Pero ¿cuál es el origen de tales axiomas éticos? ¿Son arbitrarios? ¿Se basan en la mera autoridad? ¿Derivan de experiencias de los hombres y están condicionados indirectamente por tales experiencias?
Para la pura lógica todos los axiomas son arbitrarios, incluyendo los axiomas de la ética. Pero no son en absoluto arbitrarios desde un punto de vista psicológico y genético. Se derivan de nuestras tendencias innatas a evitar el dolor y la destrucción, y de la reacción emocional acumulada de muchos individuos ante el comportamiento de sus vecinos.
Es privilegio del genio moral del hombre, encarnado en individuos inspirados, postular axiomas éticos tan generales y tan bien fundados que los hombres los aceptarán en la medida en que están basados en la enorme masa de sus experiencias emocionales individuales. Los axiomas éticos se encuentran y se ponen a prueba de forma no muy diferente de los axiomas de la ciencia. La verdad es aquello que supera el test de la experiencia.
### 8
### UNA DERIVACIÓN ELEMENTAL DE LA EQUIVALENCIA DE MASA Y ENERGÍA*
Esta derivación de la ley de equivalencia tiene dos ventajas. Aunque hace uso del principio de relatividad especial, no presupone la maquinaria formal de la teoría sino que solamente utiliza tres leyes previamente conocidas:
1. La ley de conservación del momento lineal.
2. La expresión para la presión de radiación; es decir, el momento lineal de un complejo de radiación que se mueve en una dirección determinada.
3. La bien conocida expresión para la aberración de la luz (influencia del movimiento de la Tierra en la posición aparente de las estrellas fijas—Bradley).
Consideremos ahora el sistema siguiente. Sea el cuerpo _B_ , que flota libremente en el espacio, en reposo con respecto al sistema _K_ 0. Dos complejos de radiación _S_ , _S_ ', cada uno de ellos con energía _E_ /2, se mueven en la dirección _x_ 0 positiva y negativa, respectivamente, y son finalmente absorbidos por _B_. Con esta absorción la energía de _B_ aumenta en _E_. Debido a la simetría del proceso, el cuerpo _B_ permanece en reposo respecto a _K_ 0.
Consideremos ahora este mismo proceso con respecto al sistema _K_ , que se mueve con respecto a _K_ 0 a velocidad _v_ constante en la dirección _Z_ 0 negativa. Con respecto a _K_ , la descripción del proceso es como sigue:
El cuerpo _B_ se mueve en la dirección _Z_ positiva con velocidad _v_. Los dos complejos de radiación tienen ahora direcciones con respecto a _K_ que forman un ángulo α con el eje _x_. La ley de aberración afirma que en primera aproximación α = _v_ / _c_ , donde _c_ es la velocidad de la luz. De la consideración con respecto a _K_ 0 sabemos que la velocidad _v_ de _B_ permanece inalterada por la absorción de _S_ y _S_ '.
Aplicamos ahora a nuestro sistema la ley de conservación del momento con respecto a la dirección _z_ en el sistema de coordenadas _K_.
I. _Antes de la absorción_ : sea _M_ la masa de _B_ ; _Mv_ es entonces la expresión del momento de _B_ (según la mecánica clásica). Cada uno de los complejos tiene una energía _E_ /2 y con ello, por una conclusión bien conocida de la teoría de Maxwell, tiene un momento _E_ /2 _c_. Estrictamente hablando éste es el momento de _S_ con respecto a _K_ 0; sin embargo, cuando _v_ es pequeña con respecto a _c_ , el momento con respecto a _K_ es el mismo salvo una cantidad de segundo orden de magnitud ( _v_ 2/ _c_ 2 frente a 1). La componente _z_ de este momento es _E_ /2 _c_ sen α o con buena precisión (salvo cantidades de un orden de magnitud superior) ( _E_ /2 _c_ )α o ( _E_ /2)( _v_ / _c_ 2). _S_ y _S_ ' juntos tienen por lo tanto un momento _Ev_ / _c_ 2 en la dirección _z_. El momento total del sistema antes de la absorción es entonces

_Mv_ + _Ev_ / _c_ 2
II. _Después de la absorción_ : sea _M_ ' la masa de _B_. Anticipamos aquí la posibilidad de que la masa aumente con la absorción de la energía _E_ (esto es necesario para que el estado final de nuestra consideración sea consistente). El momento del sistema después de la absorción es entonces
_M'v_
Supongamos ahora la ley de conservación del momento y apliquémosla con respecto a la dirección _z_. Esto da la ecuación

_Mv_ + _Ev_ / _c_ 2 = _M'v_

o

_M'_ – _M_ = _E_ / _c_ 2
Esta ecuación expresa la ley de la equivalencia entre energía y masa. El incremento de energía _E_ está relacionado con el incremento de masa _E_ / _c_ 2. Puesto que la energía según la definición usual deja libre una constante aditiva, podemos escoger esta última de modo que
_E_ = _Mc_ 2
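A modo de comprobación ajena al texto original, el balance de momento de la derivación anterior puede verificarse numéricamente; los valores de _E_ , _M_ y _v_ son supuestos ilustrativos, elegidos de modo que _E_ / _c_ 2 valga exactamente 1 kg:

```python
# Comprobación numérica (esquema ilustrativo, no parte del texto de Einstein):
# la conservación del momento en z implica M' - M = E/c^2.
c = 3.0e8    # velocidad de la luz (m/s)
E = 9.0e16   # energía absorbida (J), elegida para que E/c**2 = 1 kg
M = 10.0     # masa inicial de B (kg)
v = 100.0    # velocidad de B respecto a K (m/s), con v << c

# Momento antes de la absorción: M*v más la componente z
# del momento de S y S' juntos, E*v/c**2.
p_antes = M * v + E * v / c**2

# Momento después: M'*v; la conservación del momento fija M'.
M_prima = p_antes / v
delta_M = M_prima - M
print(delta_M, E / c**2)  # ambos valen 1.0: el incremento de masa es E/c^2
```

Obsérvese que el resultado _M'_ – _M_ = _E_ / _c_ 2 no depende del valor concreto de _v_ mientras _v_ sea pequeña frente a _c_ , tal como exige la aproximación de primer orden usada en la derivación.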
### Notas
. S. Hawking, ed., _A Stubbornly Persistent Illusion_ , Running Press, Philadelphia-Londres, 2007.
* «Zur Elektrodynamik bewegter Körper», _Annalen der Physik_ , 17, 1905.
. No discutiremos aquí la imprecisión inherente al concepto de simultaneidad de dos sucesos que tienen lugar en la misma posición (aproximadamente), lo que sólo puede ser eliminado mediante una abstracción.
. «Tiempo» aquí significa tanto «tiempo del sistema en reposo» como «la posición de las manecillas del reloj en movimiento localizado en el lugar en cuestión».
. Por ejemplo, un cuerpo que tiene una forma esférica cuando se examina en reposo.
. Si, por ejemplo, _X_ = _Y_ = _Z_ = _L_ = _M_ = 0 y _N_ ≠ 0, entonces es evidente por razones de simetría que si ν cambia de signo sin cambiar su valor numérico, entonces _Y_ ' también debe cambiar de signo sin cambiar su valor numérico.
* «Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?», _Annalen der Physik_ , 18, 1905.
. El principio de la constancia de la velocidad de la luz está contenido, naturalmente, en las ecuaciones de Maxwell.
* «Über den Einfluss der Schwerkraft auf die Ausbreitung des Lichtes», _Annalen der Physik_ , 35, 1911.
. A. Einstein, _Jahrbuch für Radioakt und Elektronik_ , 4, 1907.
. Por supuesto, no podemos reemplazar cualquier campo gravitatorio _arbitrario_ por un estado de movimiento del sistema sin un campo gravitatorio, como tampoco, por una transformación de relatividad, podemos transformar en reposo cualquier tipo de movimiento de todos los puntos de un medio.
. Las dimensiones de S1 y S2 se consideran infinitamente pequeñas en comparación con _h_.
. Véase _supra_.
. L. F. Jewell ( _Journ. de Phys_., 6, 1897, p. 84) y especialmente C. Fabry y H. Boisson ( _Comptes Rendus_ , 148, 1909, pp. 688-690) han encontrado realmente tales desplazamientos de las líneas espectrales finas hacia el extremo rojo del espectro, del orden de magnitud aquí calculado, pero lo han atribuido a un efecto de la presión en la capa absorbente.
* «Die Grundlage der allgemeinen Relativitätstheorie», _Annalen der Physik_ , 49, 1916.
. Por supuesto, una respuesta puede ser satisfactoria desde el punto de vista de la epistemología, y pese a todo ser físicamente errónea si está en conflicto con otras experiencias.
. Eötvös ha demostrado experimentalmente que el campo gravitatorio tiene esta propiedad con gran exactitud.
. Suponemos la posibilidad de verificar la «simultaneidad» de sucesos inmediatamente próximos en el espacio, o —por hablar con más precisión— para inmediata proximidad o coincidencia en el espacio-tiempo, sin dar una definición de este concepto fundamental.
. La unidad de tiempo debe escogerse de modo que la velocidad de la luz _in vacuo_ medida en el sistema de coordenadas «local» sea igual a la unidad.
. Mediante producto exterior del vector con componentes arbitrarias A11, A12, A13, A14 por el vector con componentes 1, 0, 0, 0, producimos un tensor con componentes

A11 | A12 | A13 | A14
0 | 0 | 0 | 0
0 | 0 | 0 | 0
0 | 0 | 0 | 0

Mediante la suma de cuatro tensores de este tipo, obtenemos el tensor Aµν con cualesquiera componentes asignadas.
. Los matemáticos han demostrado que ésta es también una condición _suficiente_.
. La relación Bρµστ = 0 sólo subsiste, por el enunciado 12, entre las segundas (y primeras) derivadas.
. Propiamente hablando, esto sólo puede afirmarse del tensor
Gµν + λgµν _g_ αβ Gαβ
donde λ es constante. Si, sin embargo, hacemos este tensor = 0, volvemos a las ecuaciones Gµν = 0.
. La razón para la introducción del factor –2κ se hará evidente más adelante.
. _g_ ατTσµ = Tστ, y _g_ σβ Tσα = Tσβ deben ser tensores simétricos.
. Sobre esta cuestión, véase D. Hilbert, _Nachr. d. K. Gesellsch. d. Wiss. zu Göttingen, Math.-phys. Klasse_ , 1915, p. 3.
. Para un observador que utiliza un sistema de referencia en el sentido de la teoría de la relatividad especial para una región infinitamente pequeña, y que se mueve con éste, la densidad de energía T44 es igual a ρ – _p_. Esto da la definición de ρ. Así pues, ρ no es constante para un fluido incompresible.
. Sobre el abandono de la elección de coordenadas con _g_ = –1, quedan cuatro funciones del espacio con libertad de elección, correspondientes a las cuatro funciones arbitrarias a nuestra disposición en la elección de coordenadas.
. Según E. Freundlich, observaciones espectroscópicas de ciertos tipos de estrellas fijas indican la existencia de un efecto de este tipo, pero todavía no se ha realizado un test crucial de esta consecuencia.
. Para el cálculo me remito a los artículos originales: A. Einstein, _Sitzungsber. D. Preuss. Akad. Wiss_., 1915, p. 831, y K. Schwarzschild, _ibid_., 1916, p. 189.
* «Hamiltonsches Princip und allgemeine Relativitätstheorie», _Sitzungsberichte der Preussischen Akad.Wissenschaften_ , 1916.
. Cuatro artículos de Lorentz en las Publicaciones de la Koninkl. Akad. van Wetensch. te Amsterdam, 1915 y 1916; D. Hilbert en _Göttingen Nachr_., 1915, parte 3.
. No se hace uso por el momento del carácter tensorial de las _g_ µν.
. Por brevedad, los símbolos de suma se omiten en las fórmulas. Siempre hay que sumar sobre los índices que aparecen dos veces en un término. Así, en (4), por ejemplo, denota el término
. Aquí hay que encontrar la razón por la que el postulado de relatividad general conduce a una teoría muy precisa de la gravitación.
. Efectuando la integración parcial, obtenemos = √– _g_ _g_ _µν_ [{ _µα_ , _β_ } { _νβ_ , _α_ } – { _µν_ , _α_ } { _αβ_ , _β_ }].
. Por la introducción de y en lugar de y .
* «Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie», _Sitzungsberichte der Preussischen Akad. d. Wissenschaften_ , 1917.
. ρ es la densidad media de materia, calculada para una región que es grande comparada con la distancia entre estrellas fijas vecinas, pero pequeña en comparación con las dimensiones del sistema estelar completo.
. De Sitter, _Akad. van Wetensch. Te Amsterdam_ , 8 Nov., 1916.
* «Spielen Gravitationsfelder im Aufbau der materiellen Elementarteilchen eine wesentliche Rolle?», _Sitzungsberichte der Preussischen Akad. d. Wissenschaften_ , 1919.
. Véase, por ejemplo, A. Einstein, _Sitzungsber d. Preuss. Akad. D. Wiss_., 1916, pp. 187, 188.
. Véase D. Hilbert, Göttinger Nachr., 20 Nov., 1915.
. Véase H. Weyl, «Raum, Zeit, Materie», párrafo 33.
* Título original de la obra: _Über die spezielle und allgemeine Relativitätstheorie_.
. De esta manera se le asigna también a la línea recta un objeto de la naturaleza. Tres puntos de un cuerpo rígido _A_ , _B_ , _C_ se hallan situados sobre una línea recta cuando, dados los puntos _A_ y _C_ , el punto _B_ está elegido de tal manera que la suma de las distancias _AB_ y _BC_ es lo más pequeña posible. Esta definición, defectuosa desde luego, puede bastar en este contexto.
. Se ha supuesto, sin embargo, que la medición es exacta, es decir, que da un número entero. De esta dificultad se deshace uno empleando escalas subdivididas, cuya introducción no exige ningún método fundamentalmente nuevo.
. No es preciso entrar aquí con más detenimiento en el significado de «coincidencia espacial», pues este concepto es claro en la medida en que, en un caso real, apenas habría división de opiniones en torno a su validez.
. No es sino en la teoría de la relatividad general, estudiada en la segunda parte del libro, donde se hace necesario afinar y modificar esta concepción.
. Es decir, una curva a lo largo de la cual se mueve el cuerpo.
. Suponemos además que cuando ocurren tres fenómenos _A_ , _B_ , _C_ en lugares distintos y _A_ es simultáneo a _B_ y _B_ simultáneo a _C_ (en el sentido de la definición anterior), entonces se cumple también el criterio de simultaneidad para la pareja de sucesos _A_ - _C_. Este supuesto es una hipótesis física sobre la ley de propagación de la luz; tiene que cumplirse necesariamente para poder mantener en pie la ley de la constancia de la velocidad de la luz en el vacío.
. ¡Desde el punto de vista del terraplén!
. El centro de los vagones primero y centésimo, por ejemplo.
. En el Apéndice 1 se da una derivación sencilla de la transformación de Lorentz.
. Fizeau halló _W_ = _w_ + _v_ (1 – 1/ _n_ 2), donde _n_ es el índice de refracción del líquido. Por otro lado, debido a que _wv_ / _c_ 2 es muy pequeño frente a 1, se puede sustituir (B) por _W_ = ( _w_ + _v_ )(1 – _wv_ / _c_ 2), o bien, con la misma aproximación, _w_ + _v_ (1 – 1/ _n_ 2), lo cual concuerda con el resultado de Fizeau.
. _E_ 0 es la energía absorbida respecto a un sistema de coordenadas que se mueve con el cuerpo.
. Respecto a un sistema de coordenadas solidario con el cuerpo.
. La teoría de la relatividad general propone la idea de que las masas eléctricas de un electrón se mantienen unidas por fuerzas gravitacionales.
. Cf. la exposición algo más detallada en el Apéndice 2.
. La objeción adquiere especial contundencia cuando el estado de movimiento del cuerpo de referencia es tal que para mantenerlo no requiere de ninguna influencia exterior, por ejemplo en el caso de que el cuerpo de referencia rote uniformemente.
. La existencia de la desviación de la luz exigida por la teoría fue comprobada fotográficamente durante el eclipse de Sol del 30 de mayo de 1919 por dos expediciones organizadas por la Royal Society bajo la dirección de los astrónomos Eddington y Crommelin.
. Esto se sigue por generalización del razonamiento expuesto en el epígrafe 20.
. El campo se anula en el centro del disco y aumenta hacia afuera proporcionalmente a la distancia al punto medio.
. En todo este razonamiento hay que utilizar el sistema de Galileo _K_ (que no rota) como cuerpo de coordenadas, porque la validez de los resultados de la teoría de la relatividad especial sólo cabe suponerla respecto a _K_ (en relación a _K'_ existe un campo gravitatorio).
. Nuestro problema se les planteó a los matemáticos de la siguiente manera. Dada una superficie —por ejemplo, la de un elipsoide— en el espacio de medida tridimensional euclidiano, existe sobre ella una geometría bidimensional, exactamente igual que en el plano. Gauss se planteó el problema de tratar teóricamente esta geometría bidimensional sin utilizar el hecho de que la superficie pertenece a un continuo euclidiano de tres dimensiones. Si imaginamos _que en la superficie_ (igual que antes sobre la mesa) realizamos construcciones con varillas rígidas, las leyes que valen para ellas son distintas de las de la geometría euclidiana del plano. La superficie no es, respecto a las varillas, un continuo euclidiano, ni tampoco se pueden definir coordenadas cartesianas _en la superficie_. Gauss mostró los principios con arreglo a los cuales se pueden tratar las condiciones geométricas en la superficie, señalando así el camino hacia el tratamiento riemanniano de continuos no euclidianos multidimensionales. De ahí que los matemáticos tengan resueltos desde hace mucho los problemas formales a que conduce el postulado de la relatividad general.
. Cf. Apéndices 1 y 2. Las relaciones (11a) y (12) deducidas allí para las coordenadas valen también para _diferencias_ de coordenadas, y por tanto para diferenciales de las mismas (diferencias infinitamente pequeñas).
. _Justificación_. Según la teoría newtoniana, en una masa _m_ van a morir una cierta cantidad de «líneas de fuerza» que provienen del infinito y cuyo número es proporcional a la masa _m_. Si la densidad de masa ρ0 en el universo es por término medio constante, entonces una esfera de volumen _V_ encierra por término medio la masa ρ0 _V_. El número de líneas de fuerza que entran a través de la superficie _F_ en el interior de la esfera es, por tanto, proporcional a ρ0 _V_. Por unidad de superficie de la esfera entra, pues, un número de líneas de fuerza que es proporcional a ρ0 _V_ / _F_ , es decir, a ρ0 _R_. La intensidad del campo en la superficie tendería a infinito al crecer el radio de la esfera _R_ , lo cual es imposible.
. Para el «radio» _R_ del mundo se obtiene la ecuación

_R_ 2 = 2/κρ

Utilizando el sistema cegesimal, tenemos que 2/κ = 1,08 · 1027; ρ es la densidad media de materia.
* «Äther und Relativitätstheorie», Julius Springer, Berlín, 1920. Versión publicada de la conferencia inaugural como catedrático extraordinario pronunciada en la Universidad Imperial de Leiden el 27 de octubre de 1920.
* «Geometry and Experience», versión extensa de un escrito dirigido a la Academia Prusiana de Ciencias: Berlín, 27 de enero de 1921.
. Esto es inteligible sin cálculo —pero sólo en el caso bidimensional— si volvemos una vez más al caso del disco en la superficie de la esfera.
* «Space and Time in Pre-Relativity Physics», cortesía de Princeton University Press.
. Esta relación debe ser válida para una elección arbitraria del origen de la dirección (razones ∆ _x_ 1 : ∆ _x_ 2 : ∆ _x_ 3) del intervalo.
. En realidad hay ecuaciones.
. Existen así dos tipos de sistemas cartesianos que se denominan sistemas «dextrógiros» y «levógiros». La diferencia entre ambos es familiar para cualquier científico o ingeniero. Es interesante señalar que estos dos tipos de sistemas no pueden definirse geométricamente, sino sólo el contraste entre ambos.
. La ecuación _a_ 'στξ'σξ'τ = 1 puede, por (5), ser reemplazada por _a_ ' _σ_ _r b_ _µ_ _r b_ _νσ_ ξ _σ_ ξr= 1, de donde se sigue inmediatamente el resultado.
. Las leyes de la física podrían expresarse de modo que fueran covariantes respecto a la transformación (3) incluso si hubiera una dirección preferida en el espacio; pero tal expresión no sería adecuada en este caso. Si hubiera una dirección preferida en el espacio, la descripción de los fenómenos naturales se simplificaría dando una orientación definida al sistema de coordenadas con respecto a dicha dirección. Pero si, por el contrario, no hay una dirección singular en el espacio, no es lógico formular las leyes de la naturaleza de modo que quede oculta la equivalencia entre sistemas de coordenadas con diferentes orientaciones. Encontraremos de nuevo este punto de vista en las teorías de la relatividad especial y general.
. Estas consideraciones familiarizarán al lector con las operaciones tensoriales sin las dificultades especiales del tratamiento tetradimensional; consideraciones correspondientes en la teoría de la relatividad especial (interpretación del campo de Minkowski) presentarán entonces menos dificultades.
. Esto ya no es tan cierto en la actualidad, después de que se ha visitado la Luna y se han enviado sondas espaciales a otros astros del sistema solar.
. Como efectivamente ha ocurrido: los aviones supersónicos no constituyen actualmente ninguna novedad, y las velocidades alcanzadas por los satélites artificiales son mucho mayores.
. En la actualidad, y por métodos artificiales, se han podido producir elementos más pesados. La inestabilidad de sus núcleos es la causa de que no se puedan encontrar en la naturaleza.
. En las últimas décadas la situación ha cambiado. Las mejoras introducidas en las técnicas experimentales y los nuevos descubrimientos en el terreno de la astrofísica han permitido mejorar la base experimental de la teoría general de la relatividad.
. En el lenguaje actual de la física se la conoce como «función de onda».
* Paul A. Schilpp, ed., _Albert Einstein. Autobiographical Notes_ , Open Court, La Salle, Ill., 1949.
. Quedarse en el grupo más restringido y basar simultáneamente la teoría general de la relatividad en la estructura más complicada resulta de una ingenuidad inconsecuente. Los pecados son pecados por más que sean cometidos por hombres respetables.
. Opino que la teoría aquí propuesta tiene bastantes probabilidades de confirmarse si el camino de una representación exhaustiva de la realidad física sobre la base del continuo es practicable.
* «Relativity: Essence of the Theory of Relativity», _The American People's Encyclopedia_ , XVI, Chicago, 1949.
*« _E = Mc 2_», en _Science Illustrated_ , abril de 1946.
* «My Theory», _The Times_ , 28 de noviembre de 1919.
. También este criterio ha sido confirmado desde entonces. _(N. del ed.)_
* «Physik und Realität», _The Journal of the Franklin Institute 221_ , n.° 3 (marzo de 1936), pp. 313-347.
. Está en la naturaleza de las cosas la posibilidad de que seamos capaces de hablar de esos objetos sólo mediante conceptos de nuestra creación, conceptos que en sí mismos no son objeto de definición. Sin embargo, es esencial que hagamos uso exclusivo de conceptos acerca de cuya relación con nuestra experiencia no se abriguen dudas.
. Este defecto de la teoría sólo podría ser eliminado por una formulación de la mecánica que adjudicara validez a todo _B_ 0. Éste es uno de los pasos que conducen a la teoría de la relatividad general. Un segundo defecto, también eliminado sólo por la introducción de la teoría de la relatividad general, estriba en que no existe ninguna razón dada por la mecánica para la igualdad de la masa pesante e inercial del punto material.
. Porque de acuerdo con una consecuencia de la teoría de la relatividad la energía de un sistema completo (en reposo) es igual a su inercia (como conjunto). Ésta, sin embargo, ha de tener un valor preciso.
. Una medición sobre _A_ , por ejemplo, implica una transición hacia un conjunto menor de sistemas. Éste (su función ψ también, en consecuencia) depende del punto de vista según el cual se lleve a cabo esa reducción del conjunto de sistemas.
* «The Fundaments of Theoretical Physics», _Science_ , 91, mayo de 1940.
* «The Common Language of Science», _Advancement of Science_ , Londres, vol. 2, n.° 5.
* «The Laws of Science and the Laws of Ethics», prólogo a Philipp Frank, _Relativity. A. Richer Truth_ , Boston, 1950.
* «An Elementary Derivation of the Equivalence of Mass and Energy», _Technion Journal_ , 1946.
_La gran ilusión_
Edición de Stephen Hawking
No se permite la reproducción total o parcial de este libro, ni su incorporación a un sistema informático, ni su transmisión en cualquier forma o por cualquier medio, sea éste electrónico, mecánico, por fotocopia, por grabación u otros métodos, sin el permiso previo y por escrito del editor. La infracción de los derechos mencionados puede ser constitutiva de delito contra la propiedad intelectual (Art. 270 y siguientes del Código Penal)
Diríjase a CEDRO (Centro Español de Derechos Reprográficos) si necesita reproducir algún fragmento de esta obra.
Puede contactar con CEDRO a través de la web www.conlicencia.com o por teléfono en el 91 702 19 70 / 93 272 04 47
El editor hace constar que ha sido imposible localizar a todos y cada uno de los autores, cedentes y herederos de esta obra, por lo que manifiesta la reserva de derechos de los mismos.
Título original: _A stubbornly persistent illusion_
© del diseño de la portada, Departamento de Arte y Diseño, Área Editorial Grupo Planeta
© de la imagen de la portada, Serzh
© 2007, Stephen Hawking
La traducción castellana de la Introducción general y de la «Nota Introductoria» a cada parte del libro ha corrido a cargo de Ubaldo Iriso Ariz (© 2008). Los datos sobre el resto de traducciones se registran también en la mencionada sección. Los títulos originales de los ensayos que conforman este volumen se hallarán en nota a pie de página de cada sección.
© Editorial Planeta S. A., 2016
Av. Diagonal, 662-664, 08034 Barcelona (España)
Crítica es un sello editorial de Editorial Planeta, S. A.
editorial@ed-critica.es
www.ed-critica.es
Primera edición en libro electrónico (epub): enero de 2016
ISBN: 978-84-9892-925-6 (epub)
Conversión a libro electrónico: Newcomlab, S. L. L.
www.newcomlab.com
The 20th century in Quebec law was marked by the rise of the welfare state in the 1960s and 1970s, as well as by numerous debates over Quebec's political status.
In private law, after some thirty years of work, the Parliament of Quebec adopted the new Civil Code of Quebec, which came into force on January 1, 1994.
1900s
1910s
1920s
1930s
1940s
1950s
1960s
1970s
1980s
1990s
Notes and references
See also
Feminist struggles for the admission of women to the Barreau du Québec
Bibliography
Q: In Swift, how do I get the string value from a print_r statement in PHP? print_r($response);
I have used Alamofire to make a GET request to PHP code that uses print_r (as above).
I use this to get the 'outputString' in Swift:
let outputString = NSString(data: data!, encoding:NSUTF8StringEncoding)
However, I get an error if I use '.rangeOfString' to check whether a string is in the response string. How do I do this?
"Is it possible to miss a place you've never been? To mourn a time you never lived?"
I didn't have high expectations going into Oblivion. The premise seemed derivative, and the trailer felt like a mishmash of every other sci-fi flick known to man. So imagine my surprise when the film turned out to be not only quite entertaining, but highly engrossing and thought-provoking.
First off, Oblivion is visually staggering. Joseph Kosinski, the director, does an impeccable job of bringing Earth in 2077 to life. From sweeping landscapes to magnificent CGI that never pulls you out of the story, the entire production is a thoroughly immersive experience. And that mansion in the sky? Well, let's just say I'm somewhat hopeful Earth gets wiped out soon if that's where I'll be living.
It's fascinating how Tom Cruise still manages to be so likeable as a movie star. The guy's public persona has endured a lot of hoopla over the last decade, so it's a genuine achievement that he can still carry a movie so proficiently. Cruise is relatable (and charismatic) in the most extraordinary of circumstances, and that really makes you root for the guy. In addition, he deftly balances the role of Jack Harper with the required badassery and a number of poignant emotional beats.
The film's most impressive accomplishment is the fact that it tackles some powerful themes and doesn't stumble while exploring them. Is love rooted in a tangible presence? Or can it live past the corporeal form if the memories survive? Oblivion's script raises and explores the question with a hopeful and strangely uplifting perspective. No spoilers here (for once) but the film's twists are certainly surprising, and supply the narrative with some truly unexpected turns.
I do have to mention that trying to unravel the film's mysteries is a bit of a confusing ordeal, and I had to Wikipedia the plot afterwards to make sure I was connecting everything properly. Nevertheless, the ride is well worth the investment.
Remarkably ambitious and often quite affecting, Oblivion is an effective blockbuster with real heart to boot.
Hitler's Peace
By Philip Kerr
Category: Literary Fiction | Suspense & Thriller
About Hitler's Peace
The New York Times bestselling author of the Bernie Gunther novels reimagines the end of World War 2 in this gripping standalone spy thriller.
Autumn 1943. Since Stalingrad, Hitler has known that Germany cannot win the war. The upcoming Allied conference in Teheran will set the ground rules for their second front, and for the peace to come. Realizing that the unconditional surrender FDR has demanded will leave Germany in ruins, Hitler has put out peace feelers. (Unbeknownst to him, so has Himmler, who is ready to stage a coup in order to reach an accord.) FDR and Stalin are willing to negotiate. Only Churchill refuses to listen.
At the center of this high-stakes game of deals and double-dealing is Willard Mayer, an OSS operative who has been chosen by FDR to serve as his envoy. A cool, self-absorbed, emotionally distant womanizer with a questionable past, Mayer has embraced the stylish philosophy of the day, in which no values are fixed. He is the perfect foil for the steamy world of deception, betrayals, and assassinations that make up the moral universe of realpolitik.
With his sure hand for pacing, his firm grasp of historical detail, and his explosively creative imagination about what might have been, Philip Kerr has fashioned a totally convincing thinking man's thriller in the great tradition of Eric Ambler and Graham Greene.
About Philip Kerr
Philip Kerr is the New York Times bestselling author of the acclaimed Bernie Gunther novels, three of which—Field Gray, The Lady from Zagreb, and Prussian Blue—were finalists for the Edgar® Award for Best Novel. Kerr has also won several Shamus Awards.
Published by G.P. Putnam's Sons
Aug 01, 2006 | 464 Pages | 5-1/16 x 7-3/4 | ISBN 9780143036951
Praise for Hitler's Peace
"A scandalous yet plausible scenario…a thriller [in which] the historical dice are well-shaken."—Los Angeles Times
"[Kerr] quantum leaps the limitations of genre fiction. Most thrillers insult your intelligence; Kerr assaults your ignorance."— Esquire
You began your career with a trilogy of historical crime novels set in Berlin. Since then, you've written in genres ranging from futuristic thrillers to a children's book. Why, after more than a dozen years, have you chosen to return to World War II?
Even when I was writing my Bernie Gunther novels, I kind of skirted around the war. I did one set in 1936, one in 1938, and one in 1947. The war informed those stories, but none of them was about the war, per se. To some extent I've always tried to avoid writing about the Second World War because so many other people have done it. I suppose I could never think of a story or an event that interested me until I started to read about the Teheran Conference of 1943. Most people remember Yalta and Potsdam, but no one remembers Teheran; and yet Teheran was where many of the most important decisions were made. Out of that interest came a simple 'what if' premise and the rest followed.
After having worked in so many genres, how has your style changed? Do you feel better equipped to tackle this subject matter than you did at the beginning of your career?
I don't ever pay attention to genres. They don't interest me. I could think of nothing worse than being forced to live in the ghetto of crime writing for example. If I do inhabit a genre it is the genre of telling a story. It may no longer be fashionable in these post-modern times to be interested in story, but there it is. I'm a better writer than I used to be. I know that.
Hitler's Peace seamlessly melds the factual and fictional. In writing about historical characters like Roosevelt, Churchill, and Himmler, how true are you to their known activities and characteristics?
I do a lot of reading to try to get under the skin of those characters. To that extent, writing a novel is a little like method-acting. I hope the portrayals are very accurate. For example, Hitler is a much more articulate and compelling character than the carpet-chewing madman we think we know.
Did you have any indication that the events described in your novel could have happened, or are they strictly a product of your imagination?
Some of the events did happen. I always try to find a story in the margin of history. Between what's known and what's not known. Certainly there was one occasion in the thirties when Churchill almost met Hitler. And contrary to what most people believe, Hitler and Stalin had a genuine admiration for each other. The trick of writing a novel like this is to make it believable to yourself first. After that it becomes easier to convince the reader that such events as these could have happened. By the time I finished writing the book I almost wanted to ask myself why the events described hadn't happened.
Do you describe any incidents in the book that most readers would be surprised to learn actually happened?
Yes. For example, there was the USS William D. Porter incident. The USS Iowa set sail in late 1943, carrying the President and the Joint Chiefs to North Africa for the Big Three meeting with Stalin and Churchill. They nearly didn't get there at all. A day or so out of port, one of the Iowa's escort destroyers fired a torpedo at the mother ship, which narrowly missed. It was an accident. But it seems that American "friendly fire" is hardly a new phenomenon.
I think most people would be surprised to discover how Roosevelt decided to charm Stalin at the expense of his relationship with Churchill. At a time when we hear a lot about the so-called "special relationship" between Britain and USA, it's instructive to remember that the special relationship all but collapsed at Teheran. President Bush holds Churchill up as an example of a great world leader; but it was Churchill's private opinion that, at Teheran, Roosevelt betrayed him on a personal level. It might interest people to know that Churchill never attended Roosevelt's funeral, in 1945, which might help to explain why Johnson felt no need to attend Churchill's funeral in 1965.
As a genre, do you think alternate histories offer lessons about the past, or should they be construed strictly as entertainment?
Entertainment first. But sometimes fiction provides history's best viewpoint. For example, you won't read a better account of the battle of Waterloo than the one contained in Hugo's novel, Les Miserables. Hugo was never there, of course. His account was made up. Yet it is certainly the most compelling one.
What's next for you? Do you plan to continue writing fiction set in this time period, or will you try your hand at different genres?
Another Bernie Gunther novel, and another children's book.
\section{Introduction}
The potential for significant discrepancies between the spectral types of late-type pre-main sequence (pre-MS) stars as determined via visible-wavelength (hereafter ``optical'') vs.\ near-infrared observations was first recognized and documented more than 15 years ago \citep{1998ASPC..154.1709G}. Clearly, such discrepancies have important implications for studies that seek to use theoretical pre-MS evolutionary tracks to infer the ages and masses of pre-MS stars based on spectral types --- and, hence, effective temperatures --- that have been determined via near-infrared spectroscopy.
A case in point is the well-studied, nearby T Tauri star TW Hya. Most investigators have adopted an optical spectral type of K7Ve for TW Hya, following the original determination by \citet{herbig78}. Subsequent optical spectroscopic studies have supported or bracketed Herbig's determination; e.g., \citet{2006A&A...460..695T} found a spectral type of K6Ve, while \citet{2013ApJS..208....9P} found K8IVe. However, based on analysis of a near-infrared ($\sim$1--5 $\mu$m) spectrum, \citet{2011ApJ...732....8V} determined the spectral type of TW Hya to be much later; they found M2.5V. \citet{2011ApJ...732....8V} used this result to reassess the mass and age of TW Hya, based on comparisons with pre-MS tracks. They concluded that the (optical spectral type-based) mass and age of TW Hya had previously been overestimated, and proposed revising these estimates from $\sim$0.8 $M_\odot$ to $\sim$0.4 $M_\odot$ and from $\sim$8 Myr to $\sim$3 Myr, respectively. \citet{2011ApJ...732....8V} argued that these revised estimates were marginally consistent with existing constraints on the mass and age of TW Hya.
Thanks to the (far more) stringent constraints on its fundamental properties, the short-period binary T Tauri system V4046 Sgr AB \citep[optical spectral types K5+K7;][]{2004A&A...421.1159S,2011MNRAS.417.1747D} provides an important test case for the study of potential discrepancies between near-IR and optical spectral type determinations. The estimated age and distance of V4046 Sgr are relatively well-established (i.e., 12--21 Myr and $\sim$73 pc, respectively) on the basis of its membership in the $\beta$ Pic Moving Group and the presence of a comoving, early-M companion \citep{2008hsf2.book..757T,2011ApJ...740L..17K,2014MNRAS.438L..11B}. More importantly, the masses of the components of V4046 Sgr AB were precisely determined by \citet{2012ApJ...759..119R} --- i.e., 0.90 $M_\odot$ (A) and 0.85 $M_\odot$ (B), with uncertainties of $\sim$0.05 $M_\odot$ --- from analysis of interferometric CO imaging of its extended, inclined, circumbinary disk (which yields a dynamical mass of $\sim$1.75 $M_\odot$) combined with optical radial velocity measurements of the central, short ($\sim$2.4 d) period binary.
\citet{2012ApJ...759..119R} further demonstrated that these dynamical masses are in excellent agreement with the predictions of models describing pre-MS evolution, given photospheric temperatures corresponding to optical spectral types of K5+K7.
Hence, we set out to determine whether the apparent spectral type discrepancies just described for TW Hya might manifest themselves similarly in the case of V4046 Sgr\footnote{For a review of the V4046 Sgr binary/circumbinary disk system and a comparative summary of the properties of the TW Hya and V4046 Sgr systems, the reader is referred to \citet{kastner13} and \citet{kastner14}, respectively.} and, if so, what this might imply for near-IR-based determinations of pre-MS masses and ages more generally.
\section{Observations}
We observed V4046 Sgr with the medium-resolution SpeX spectrograph \citep{2003PASP..115..362R} on the NASA Infrared Telescope Facility (IRTF). This was the same telescope/instrument combination used by \citet{2011ApJ...732....8V} in their study of TW Hya. Data for V4046 Sgr were obtained on June 28, 2013. We obtained ten individual 120 s exposures of V4046 Sgr in pairing mode using the short-wavelength cross-dispersed (SXD) mode of SpeX, and twenty 5 s exposures using the long-wavelength cross-dispersed (LXD2.1) mode. The slit width was set to 0.3$''$, yielding a resolution of $\sim$2000 for the SXD spectra and $\sim$2500 for the LXD2.1 spectra. Together, these observations cover the entire 0.8--5.0 $\mu$m spectral range. The airmass during these observations was 1.65--1.7, and the sky was clear, with seeing at 1.2$''$. Observations of the A0V star HD 171296 were obtained the same night, to determine the strengths of telluric absorption features \citep{2003PASP..115..389V}. Other calibration data (flat fields and arc frames) were obtained immediately after the on-source observations in each spectral mode. Standard data reduction was performed with Spextool v3.4 \citep{2004PASP..116..362C}.
The SXD and LXD data were cleaned and median combined, and telluric features were removed from all orders. Low signal-to-noise ratio data between 1.355--1.405 $\mu$m and 1.870--1.930 $\mu$m were eliminated. The reduced SXD and LXD spectra were then merged. The resulting complete IRTF/SpeX spectrum of V4046 Sgr AB is presented in Fig.~\ref{fig:FullSpec}.
\section{Results}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5in]{SPEX_Fig1.png}
\caption{The complete IRTF/SpeX spectrum of V4046 Sgr, with prominent emission lines indicated.
}
\label{fig:FullSpec}
\end{center}
\end{figure}
A relatively cursory (visual) comparison of the IRTF/SpeX infrared spectrum of V4046 Sgr AB with those of IRTF standard stars --- which are main sequence (MS) stars with optically-determined spectral types \citep{2009ApJS..185..289R} --- suggests that its near-infrared spectrum more closely resembles those of borderline late-K/early-M MS stars than that of the K5 type of its dominant component (i.e., the slightly more massive component A). More specifically, the broad, shallow absorption from TiO bandheads at $\sim$0.85 $\mu$m is indicative of a late K or early M type star, and the overall shape of the continuum from 0.8--1.8 $\mu$m and 1.9--2.6 $\mu$m resembles those of K7 and M1.5 IRTF spectral standards, respectively. As in the case of TW Hya, the hydrogen emission-line spectrum characteristic of an actively accreting T Tauri star is superimposed on this relatively red continuum (Fig.~\ref{fig:FullSpec}).
\begin{table}
\begin{center}
\caption{\sc IR Absorption Line Equivalent Width Comparisons}
\label{tbl:EWs}
\vspace{.05in}
\begin{tabular}{lccccc}
\hline
\hline
& & \multicolumn{3}{c}{Equivalent widths$^a$} & \\
Species & $\lambda$ & V4046 Sgr & K5-K7 & M0-M3 & Closest matches \\
& ($\mu$m) & (\AA) & (\AA) & (\AA) & \\
\hline
K {\sc i} & 1.253 & 0.44$\pm$0.01 & 0.4--0.5 & 0.4--1.0 & K5--M0.5 \\
Mg {\sc i} & 1.183 & 1.37$\pm$0.01 & 1.5--2.0 & 0.8--1.5 & M0--M1.5 \\
Mg {\sc i} & 1.504 & 5.54$\pm$0.04 & 7.0--8.0 & 2.0--6.0 & M0.5--M2 \\
Mg {\sc i} & 1.711 & 3.01$\pm$0.03 & 3.3--3.8 & 1.5--3.8 & M1--M2 \\
Na {\sc i} & 1.140 & 2.46$\pm$0.03 & 1.7--2.7 & 2.0--4.5 & K7--M2 \\
FeH & 0.990 & 1.53$\pm$0.02 & 1.0--1.8 & 2.0--5.0 & K7--M0 \\
\hline
\end{tabular}
\end{center}
\footnotesize {\sc Note:} a) EWs for K5-K7 and M0-M3 spectral
standards from \citet[][their Fig.\ 3]{2011ApJ...732....8V}.\\
\end{table}
To make the comparison between V4046 Sgr AB and IRTF spectral type standards more quantitative, we adopted the IDL-based tools\footnote{See https://github.com/awmann/metal.} described in \citet{2013AJ....145...52M} to perform measurements of the equivalent widths (EWs) of selected absorption features in the IRTF/SpeX spectrum of V4046 Sgr AB that are particularly sensitive diagnostics of spectral type \citep{2009ApJS..185..289R}. Selected EW measurements are presented in Table~\ref{tbl:EWs}, alongside those obtained from the IRTF/SpeX spectra of late K and early M standard stars by \citet{2011ApJ...732....8V}. It is readily apparent from these comparisons that V4046 Sgr AB, like TW Hya, displays a near-infrared spectral type significantly later than the spectral type combination determined via optical spectroscopy. Specifically, based on the aforementioned visual inspection and the EW results in Table~\ref{tbl:EWs}, we judge that the IRTF/SpeX-determined composite spectral type of V4046 Sgr AB lies in the range M0--M1 --- i.e., 3--5 spectral subclasses later than the optically-determined composite spectral type of K5+K7.
\section{Discussion}
The 3--5 spectral subclass discrepancy we have found between optical and near-IR (IRTF/SpeX-determined) spectral types for V4046 Sgr AB is very similar to that found for TW Hya by \citet{2011ApJ...732....8V}. However, in contrast to the case of the (nearly pole-on, single) TW Hya star/disk system, the component masses of the V4046 Sgr AB (binary plus circumbinary disk) system are very well constrained, thanks to its intermediate system inclination \citep[$33.5^\circ$;][]{2012ApJ...759..119R}.
Given these constraints, the optically determined spectral type combination of K5+K7 and, hence, ``mean'' corresponding effective temperature of $T_{eff}$ $\sim$4300 K \citep{2011MNRAS.417.1747D} is in much better agreement with the predictions of pre-MS evolutionary tracks, given the likely system age range (12--21 Myr), than is the ``mean'' of $T_{eff}\sim3500$ K that corresponds to the early-M spectral type implied by our analysis of IRTF/SpeX spectroscopy (Fig.~\ref{fig:HRdiagrams}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5in]{HRdiagramsFig2.png}
\caption{H-R diagram positions of V4046 Sgr AB and its wide-separation M1 companion (V4046 Sgr C[D]) as inferred optically (thin black crosses) and from IRTF/SpeX data (\S 3; thick red crosses) overlaid on pre-MS
evolutionary tracks from ({\it left}) \citet{2000A&A...358..593S} and ({\it right}) \citet{1998A&A...337..403B}. Masses (solid lines) run from 0.2 to 1.0 $M_\odot$ in intervals
of 0.1 $M_\odot$, and the isochrones (dashed lines) are for ages 5, 10, 15, 25, and 40 Myr. Adapted from \citet{2011ApJ...740L..17K}.
}
\label{fig:HRdiagrams}
\end{center}
\end{figure}
These results further imply that the SpeX-based inference of an M2.5 spectral type for TW Hya \citep{2011ApJ...732....8V} should not be used to conclude that this heavily scrutinized pre-MS star is younger and less massive than implied by its optical spectral type \citep[see also discussions of the mass and age of TW Hya in][]{2012ApJ...744..162A,2014A&A...563A.121D}.
We note that \citet{2013ApJ...771...45D} found good fits either for a visual/near-IR ``composite'' of K7+M2 or for a ``compromise'' spectral type of M0 to an HST/STIS spectrum of TW Hya that featured very broad spectral coverage (extending from $\sim$5000--10000 \AA). They infer a mass as small as 0.55 $M_\odot$, if the underlying star is better represented by the cooler (M2) component, which would be the case if the composite spectral type can be ascribed to the effects of accretion (see below). However, our IRTF/SpeX results for V4046 Sgr suggest that an earlier spectral type, hence higher mass, is preferred for TW Hya as well.
The likely causes of the systematic discrepancies between optical and near-IR spectral types are starspots and surface gravity effects, as discussed by \citet{1998ASPC..154.1709G} and \citet{2013ApJ...769...73M}. \citet{2013ApJ...771...45D} propose an additional explanation, in which accretion {\it hot} spots make a cooler photosphere appear warmer as one moves blueward in the optical from the near-IR. However, the nonaccreting (weak-lined) TTS in Taurus studied by \citet{1998ASPC..154.1709G} also appear too cool in the near-IR relative to MS stars of the same optically-determined spectral types, casting doubt on the importance of accretion in producing the spectral type discrepancies for the \citep[weakly accreting;][]{2011A&A...526A.104C} TW Hya and V4046 Sgr AB. An important test case in this regard is V4046 Sgr C[D], the comoving companion to AB \citep{2011ApJ...740L..17K}; V4046 Sgr C[D] is a close-binary, weak-lined T Tauri system with a composite optical spectral type of M1e. Given that AB and C[D] are presumably coeval and their relative luminosities are well-established (Fig.~\ref{fig:HRdiagrams}), comparison of the results of near-infrared spectroscopy of the (accreting) V4046 Sgr AB and (nonaccreting) C[D] systems should provide stringent constraints on the physical origins of optical vs.\ near-IR spectral type discrepancies for pre-MS stars.
\section{Conclusions}
We have shown that the close binary pre-MS star system V4046 Sgr AB exhibits a (composite binary) spectral type of early M in the near-IR, i.e., significantly (3--5 subtypes) later than the mid/late-K spectral types previously determined via analysis of its optical spectrum. This discrepancy is very similar to that displayed by the single pre-MS star TW Hya \citep{2011ApJ...732....8V}. It is likely that the combination of large surface area starspots and low surface gravities, both of which are characteristic of pre-MS stars, results in a trend of later spectral type with increasing wavelength, moving from the optical to the near-infrared regime.
Regardless, it is apparent --- given the well-constrained mass and age of the V4046 Sgr binary --- that the optical spectral type determination for V4046 Sgr AB is in better agreement with the predictions of evolutionary models than our near-infrared spectral type determination. {\it These results suggest that the use of near-infrared spectra as a primary means to determine pre-MS star spectral types may lead to significant underestimates of photospheric temperatures, which can in turn produce significant, systematic errors in inferred pre-MS star masses and ages.}
Methods similar to those employed here for V4046 Sgr and by \citet{2011ApJ...732....8V} for TW Hya should be applied to additional pre-MS stars, both accreting and non-accreting, whose masses can be determined via independent means \citep[e.g., via CO kinematics or, for the growing sample of known eclipsing binary T Tauri systems, via binary orbital parameters;][]{2014ATsir1610....1I}. Given such near-IR spectral type determinations for a large and diverse sample of pre-MS stars with precisely-determined masses and well-established ages and luminosities, we can more fully characterize and calibrate optical vs.\ near-IR spectral type discrepancies for pre-MS stars --- which, based on the two cases considered here, would appear to be $\sim$3--5 spectral subclasses for weakly accreting, roughly solar-mass pre-MS stars with ages in the range $\sim$10--20 Myr.
\acknowledgments{Support for this research is provided by National Science Foundation grant AST--1108950 to RIT. C.T. Smith's research at RIT was supported by an NSF Research Experience for Undergraduates program grant to RIT's Chester F. Carlson Center for Imaging Science.}
\section{Introduction}
Magnetotransport phenomena in ferromagnetic semiconductors, such as
anisotropic magnetoresistance
(AMR)\cite{Thomson546,smit1951,ieee,jaoul1977,Baxter2002,rushforth1938amc,kato2008iam,rushforth2009oac,Tang2003,Pappert2007,Lim2006,kovalev},
have attracted significant attention due to applications in the
emerging field of spintronics \cite{wolf2001rmc,Jungwirth2006}.
AMR, in both its longitudinal and transverse forms, is the response
of the magnetoresistance to the relative angle between the magnetization
and the current in magnetic materials. Both the longitudinal and transverse
conductivities show the symmetric feature $\sigma_{xx}(\bm
M_0)=\sigma_{xx}(-\bm M_0)$ and $\sigma_{yx}(\bm
M_0)=\sigma_{yx}(-\bm M_0)$, where the magnetization $\bm M_0$
usually lies in the two-dimensional ($x$-$y$) plane. One should note,
however, that for the ordinary charge Hall effect (including the
anomalous Hall effect \cite{ahe}) the transverse conductivity obeys
the antisymmetric relation $\sigma_{yx}(\bm M_0)=-\sigma_{yx}(-\bm
M_0)$; there, $\bm M_0$ is normal to the plane.
Experimentally, AMR has been extensively studied in diluted magnetic
semiconductors recently. Rushforth {\it et al.} investigated the
physical origin of the noncrystalline and crystalline components of
AMR in diluted magnetic semiconductors \cite{rushforth1938amc}. Shin
{\it et al.} explored the temperature dependence of AMR in
ferromagnetic (Ga,Mn)As films \cite{Shin2007}. A giant transverse
AMR was also observed in this ternary ferromagnetic semiconductor
(Ga,Mn)As \cite{Pappert2007,Tang2003}. In contrast to the extensive
experimental studies of AMR, the theoretical interpretation of AMR
is relatively poor. The experimental analyses are usually based on a
phenomenological treatment \cite{smit1951,ieee}. The full Boltzmann
theory simulations have been made to study the origin of the sources
of AMR in $p$-type magnetic semiconductor
\cite{rushforth1938amc,rushforth2009oac,Karel}. Kato. {\it et al.}
analyzed the intrinsic AMR in spin-polarized two-dimensional gas
(2DEG) with Rashba spin-orbit coupling (SOC) \cite{kato2008iam}.
They showed that AMR vanishes unless the relaxation time is
spin-related. Recently, Trushin {\it et al.} studied AMR for Rashba
or Dresselhaus spin-orbit splitting electron system with polarized
magnetic impurities \cite{Trushin2009}. In the above theoretical
studies, the microscopic mechanism of AMR is considered as due to
the anisotropic carriers lifetime, arising from the combined effect
of the SOC and the scattering by the polarized magnetic impurities.
The spin-dependent scattering is the essential factor in AMR
\cite{kovalev,rushforth1938amc,rushforth2009oac,kato2008iam,Trushin2009,vborn}.
In most studies, the nonmagnetic disorder potential is taken to be of
$\delta$-function form. However, in realistic heterostructures the
electron density is not large enough to screen the nonmagnetic
impurities, so the electron-impurity interaction is long-ranged.
Hence, the effect of long-range electron-impurity scattering on AMR
is far from completely understood.
In this paper, we employ the kinetic equation approach to
investigate AMR in a two-dimensional electron system in the presence
of Rashba-type spin-orbit interaction and an in-plane magnetization. We
show that the combined effect of the SOC, the in-plane magnetization, and
nonmagnetic long-range impurities can lead to AMR. Numerical
evaluation further demonstrates that the AMR depends on the
impurity-layer distance and the electron density. The present study may
thus provide another mechanism for AMR in a spin-orbit-coupled 2DEG with
in-plane magnetization: the wave-vector dependence of the nonmagnetic
disorder scattering, combined with the SOC and the magnetization, induces the
magnetotransport anisotropy.
\section{Theoretical approach}
We consider a 2DEG confined in a [001]-grown III-V semiconductor
heterostructure with Rashba SOC and a homogeneous in-plane
magnetization $\bm M_0$. The $x$ and $y$ axes are taken along [100]
and [010] direction, respectively. Hence, the noninteracting
one-particle Hamiltonian can be written as
\begin{equation}\label{ham}
\check{H}=\frac{k^2}{2m}+\alpha(\hat z\times \bm \sigma)\cdot{\bm
k}-M_x\sigma_x-M_y\sigma_y.
\end{equation}
Here $m$ is the electron effective mass, $\bm
\sigma\equiv(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices,
${\bm k}\equiv(k\cos\theta_{\bm k},k\sin\theta_{\bm k})$ is the
two-dimensional electron wave vector, $\alpha$ is the Rashba SOC
parameter, and $\bm M\equiv(M_x,M_y)=M(\cos\xi,\sin\xi)=g\mu_B\bm
M_0\equiv g\mu_BM_0(\cos\xi,\sin\xi)$ with ${g}$ as the effective
$g$-factor, $\mu_B$ as
the Bohr magneton, and $\xi$ as the angle between the magnetization $\bm M_0$ and [100]-axis.
The above Hamiltonian \eqref{ham} can be diagonalized into
$\hat{H}=U_{\bm k}^\dag \check{H}U_{\bm k}={\rm
diag}[\varepsilon_{1}(\bm k),\varepsilon_{2}(\bm k)]$ in the
helicity basis with the help of the following local unitary
transformation
\begin{equation}\label{}
U_{\bm k}=\frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
1 & 1 \\
ie^{i\chi_{\bm k}} & -ie^{i\chi_{\bm k}} \\
\end{array}
\right).
\end{equation}
Here the energy dispersion $\varepsilon_{\mu}(\bm
k)=\frac{k^2}{2m}+(-1)^\mu\varepsilon_{\rm RM}(\bm k)$ with
$\varepsilon_{\rm RM}(\bm k)=\sqrt{\alpha^2k^2+M^2+2\alpha
kM\sin(\xi-\theta_{\bm k})}$, $\mu=1,2$ as the helicity band index, and
\begin{equation}\label{}
\chi_{\bm k}=\tan^{-1}\frac{\alpha k\sin\theta_{\bm
k}-M\cos\xi}{\alpha k\cos\theta_{\bm k}+M\sin\xi}.
\end{equation}
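As a consistency check on the diagonalization, note that in the spin basis the Hamiltonian \eqref{ham} takes the explicit matrix form
\begin{equation}\label{}
\check{H}=\frac{k^2}{2m}+\left(
\begin{array}{cc}
0 & (\alpha k_y-M_x)+i(\alpha k_x+M_y) \\
(\alpha k_y-M_x)-i(\alpha k_x+M_y) & 0 \\
\end{array}
\right),
\end{equation}
whose eigenvalues $\frac{k^2}{2m}\pm\sqrt{(\alpha k_y-M_x)^2+(\alpha k_x+M_y)^2}$ reproduce $\varepsilon_{\mu}(\bm k)$ above, since $(\alpha k_y-M_x)^2+(\alpha k_x+M_y)^2=\alpha^2k^2+M^2+2\alpha kM\sin(\xi-\theta_{\bm k})=\varepsilon_{\rm RM}^2(\bm k)$. In particular, for $M\rightarrow0$ one has $\chi_{\bm k}\rightarrow\theta_{\bm k}$ and $\varepsilon_{\rm RM}(\bm k)\rightarrow\alpha k$, recovering the usual Rashba bands.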
Now we consider that the quasi-two-dimensional system is driven by a weak
dc electric field $\bm E$ along the [100] direction. In order
to evaluate the AMR, it is necessary to determine
the matrix electron distribution function. The kinetic equation for
the $2\times2$ matrix distribution function $\rho(\bm k)$ in the
stationary linear-response regime can be derived with the elastic
electron-impurity scattering taken into account in the
self-consistent Born approximation
\cite{liu195329,liu2005des,lin2006she}. Following the procedure of
these papers, the distribution function can be obtained as
$\rho(\bm k)=\rho^{(0)}(\bm k)+\rho^{(1)}(\bm k)+\rho^{(2)}(\bm
k)$, with the equilibrium distribution function $\rho^{(0)}(\bm k)={\rm diag}\big\{n_{\rm F}[\varepsilon_{1}(\bm k)],n_{\rm F}[\varepsilon_{2}(\bm k)]\big\}$, where $n_{\rm F}(x)$ is the Fermi-Dirac function. Here
$\rho^{(1)}(\bm k)$ and $\rho^{(2)}(\bm k)$ are the
collision-unrelated and collision-related matrix distribution functions to
first order in the electric field, respectively. The
collision-unrelated distribution function $\rho^{(1)}(\bm k)$ is an off-diagonal matrix with
elements given by
\begin{equation}\label{}
\rho^{(1)}_{12}(\bm k)=\rho^{(1)}_{21}(\bm k)=\frac{eE_0}{4\varepsilon_{\rm RM}}\frac{\partial \chi_{\bm k}}
{\partial k_x}\big\{n_{\rm F}[\varepsilon_{1}(\bm k)]-n_{\rm F}[\varepsilon_{2}(\bm k)]\big\},
\end{equation}
where $E_0$ is the strength of the electric field. This distribution
function is associated with the interband transition between the two
spin-orbit-coupled bands and makes no contribution to the charge
conductivity. It is, however, important for the spin Hall effect,
giving rise to the collision-independent intrinsic spin Hall contribution
\cite{ShuichiMurakami09052003,sinova2004uis,lin2006she}. The
collision-related distribution function $\rho^{(2)}(\bm k)$ is
determined by the coupled equations
\begin{align}
eE_0\frac{\partial n_{\rm F}(\varepsilon_{\bm k\mu})}{\partial k_x}&=\pi\sum_{\bm q\mu'}\vert V(\bm k-\bm q)\vert^2
\Omega_{\mu\mu'}\nonumber\\\times&\left[\rho_{\mu\mu}^{(2)}(\bm k)-
\rho_{\mu'\mu'}^{(2)}(\bm q)\right]\delta\big[\varepsilon_{\mu}(\bm k)-\varepsilon_{\mu'}(\bm q)\big], \label{eq1}\\
4\varepsilon_{\rm RM}(\bm k){\rm Re}\rho_{12}^{(2)}(\bm k)&=\pi\sum_{\bm q\mu\mu'}\vert V(\bm k-\bm q)\vert^2
\bar{\Omega}_{\mu\mu'}\nonumber\\\times&\left[\rho_{\mu\mu}^{(2)}(\bm k)-\rho_{\mu'\mu'}^{(2)}(\bm q)
\right]\delta\big[\varepsilon_{\mu}(\bm k)-\varepsilon_{\mu'}(\bm q)\big].\label{eq2}
\end{align}
Here $\Omega_{\mu\mu'}=1+(-1)^{\mu+\mu'}\cos(\chi_{\bm k}-\chi_{\bm
q})$ and $\bar{\Omega}_{\mu\mu'}=(-1)^{\mu'}\sin(\chi_{\bm
k}-\chi_{\bm q})$. $V(\bm k-\bm q)$ is the nonmagnetic impurity
scattering potential. ${\rm Re}\rho_{12}^{(2)}(\bm k)$ represents
the real part of the off-diagonal distribution function
$\rho_{12}^{(2)}(\bm k)$. Note that the weak-scattering limit is
assumed here; we restrict ourselves to the leading order in the
impurity concentration. In this case, the
imaginary part of the off-diagonal distribution function
$\rho_{12}^{(2)}(\bm k)$ can be ignored completely \cite{liu195329}.
In the above kinetic equations, both the interband and the intraband
transitions are considered.
In order to study AMR, it is necessary to evaluate the drift
velocity. In the spin basis, the two in-plane matrix velocity operators
read
\begin{equation}\label{}
\check{v}_x=\left(
\begin{array}{cc}
\frac{k_x}{m} & i\alpha \\
-i\alpha & \frac{k_x}{m} \\
\end{array}
\right),
\end{equation}
\begin{equation}\label{}
\check{v}_y=\left(
\begin{array}{cc}
\frac{k_y}{m} & \alpha \\
\alpha & \frac{k_y}{m} \\
\end{array}
\right).
\end{equation}
It is clear that the velocity operators in the spin basis are
independent of the magnetization. Moreover, their expressions are the
same as those of a semiconductor heterostructure with Rashba
spin-orbit interaction in the absence of magnetization. However, in
the helicity basis, the single-particle velocity operators
$\hat v_i=U_{\bm k}^\dag \check v_iU_{\bm k}$
($i=x,y$) depend on the magnetization through the energy spectrum and
the angle $\chi_{\bm k}$, and are given by
\begin{equation}\label{}
\hat{v}_x=\left(
\begin{array}{cc}
\frac{\partial \varepsilon_{1}(\bm k)}{\partial k_x} & i\alpha\sin\chi_{\bm k} \\
-i\alpha\sin\chi_{\bm k} & \frac{\partial \varepsilon_{2}(\bm k)}{\partial k_x}\\
\end{array}
\right),
\end{equation}
\begin{equation}\label{}
\hat{v}_y=\left(
\begin{array}{cc}
\frac{\partial \varepsilon_{1}(\bm k)}{\partial k_y} & -i\alpha\cos\chi_{\bm k} \\
i\alpha\cos\chi_{\bm k} & \frac{\partial \varepsilon_{2}(\bm k)}{\partial k_y} \\
\end{array}
\right).
\end{equation}
One finds that the off-diagonal elements of the in-plane velocity
operators are also nonvanishing in the helicity basis. The corresponding
macroscopic drift velocities are obtained by taking the
statistical average, $v_i=\frac{1}{N}\sum_{\bm k}{\rm
Tr}[\rho(\bm k)\hat v_i]$, which evaluates to
\begin{equation}
v_i = \frac{1}{N}\sum_{\bm k\mu}\frac{\partial \varepsilon_\mu(\bm k)}{\partial k_i}\rho_{\mu\mu}^{(2)}(\bm
k).\label{vel}
\end{equation}
Here $N$ is the electron density. It can be seen that the average
velocities depend only on the diagonal elements of the velocity
operators. One should emphasize that in the clean-limit approximation
the imaginary part of the off-diagonal element of the collision-related
distribution function vanishes. Hence, the drift velocities involve
only the diagonal elements of the velocity operators and of the
distribution function, and the expressions for the average velocities
take the same form as in a two-band system without interband coupling.
We only need Eq. \eqref{eq1} to determine the diagonal elements of the
distribution function. The real part of the off-diagonal elements,
${\rm Re}\rho_{12}^{(2)}(\bm k)$, is essential for the calculation of
the spin Hall effect \cite{liu2005des,lin2006she} and the anomalous
Hall effect \cite{liu195329}. The longitudinal and transverse
conductivities are
defined by $\sigma_{xx}=Nev_x/E_0$ and $\sigma_{yx}=Nev_y/E_0$,
respectively.
For AMR, one can find that the longitudinal and transverse
conductivities obey the symmetric relations: $\sigma_{xx}(\bm
M_0)=\sigma_{xx}(-\bm M_0)$ and $\sigma_{yx}(\bm
M_0)=\sigma_{yx}(-\bm M_0)$. We can understand these properties as
follows: The eigenenergy $\varepsilon_\mu(\bm M_0,\bm k)$ and angle
$\chi_{\bm k}(\bm M_0)$ satisfy $\varepsilon_\mu(-\bm M_0,-\bm
k)=\varepsilon_\mu(\bm M_0,\bm k)$ and $\chi_{-\bm k}(-\bm
M_0)=\pi+\chi_{\bm k}(\bm M_0)$. Therefore the distribution function
satisfies $\rho_{\mu\mu}^{(2)}(-\bm M_0,-\bm
k)=-\rho_{\mu\mu}^{(2)}(\bm M_0,\bm
k)$. Note that for brevity, the argument $\bm M_0$ for eigenenergy,
distribution function and so on, is dropped elsewhere.
When $\bm M_0\rightarrow-\bm M_0$, we make transformation
$\bm k\rightarrow-\bm k$ in Eq. (\ref{vel}). This transformation will
not change the total integral, hence the conductivities satisfy the
symmetric property. This property stands in sharp contrast to that of
the anomalous Hall effect, where the transverse conductivity obeys an
antisymmetric relation. We now comment on these relations from the point
of view of the time reversal symmetry. For AMR, the conductivities
are related to the diagonal elements of distribution function, which
are proportional to a momentum-dependent effective transport
relaxation time $\tau_{\rm tr}$ (see footnote\footnote{Actually, the
diagonal elements of distribution function are proportional to the
quantity with dimension of time, relating to several band-dependent
relaxation times \cite{liu195329}. We refer to this quantity as the
effective transport relaxation time.}). However, the anomalous Hall
conductivity relies on the off-diagonal elements. The off-diagonal
elements do not depend directly on this effective transport relaxation
time, and the disorder plays only an intermediary role
\cite{liu195329}. Under time reversal, for AMR, $\sigma_{xx}(-\bm
M_0,-\tau_{\rm tr})=-\sigma_{xx}(\bm M_0,\tau_{\rm tr})$ and
$\sigma_{yx}(-\bm M_0,-\tau_{\rm tr})=-\sigma_{yx}(\bm M_0,\tau_{\rm
tr})$. By considering the relations between the conductivities and
the effective relaxation time, one can obtain the symmetric feature.
For the anomalous Hall conductivity, by contrast, $\sigma_{yx}(-\bm
M_0)=-\sigma_{yx}(\bm M_0)$ under time reversal, so the
antisymmetric relation is obtained directly.
\section{Numerical results}
We numerically investigate the combined effect of Rashba SOC,
magnetization, and long-range nonmagnetic electron-impurity
scattering on the AMR in InAs/InSb heterostructure. The long-range
electron-impurity collision is considered as due to remote
nonmagnetic charged impurities separated at a distance $s$ from the
interface. The potential takes the form $V({ q})=U({q})/\kappa({
q})$ with $|U({ q})|^2=n_i\left( \frac{e^2}{2\epsilon_0\kappa q}
\right)^2 e^{-2qs}I(q)^2$ \cite{Stern,lei1985tdb}. Here $I(q)$ is
the form factor and $\kappa({ q})$ is the
factor related to the Coulomb screening, the expressions
of which can be found in Ref. \cite{lei1985tdb}. $n_i$ is
the density of nonmagnetic remote impurities. In the calculation, the electron effective mass is taken as
$m=0.04m_{\rm e}$ with $m_{\rm e}$ as the free electron
mass. The dielectric
constant of InAs is $\kappa=15.15$. In the numerical analysis, we have
assumed that $\varepsilon_{\rm F}>M$, {\it i.e.} both the minority
and the majority
bands are occupied, with $\varepsilon_{\rm F}$ as the Fermi energy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{rfcs}
\end{center}
\caption{The relative fractional change in the longitudinal (a) and
transverse (b) conductivities as functions of the angle $\xi$ between magnetization
in the plane and [100]-axis for various remote impurity distances.
The thin wine lines are obtained for $\delta$-form short-range
electron-disorder collision. The electron density
$N=1.0\times10^{11} \,{\rm cm}^{-2}$. Rashba constant
$\alpha=3.0\times 10^{-11}\,{\rm eVm}$ and the magnetization
$M=2\,{\rm meV}$.}
\label{fig:rfcs}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{amr}
\end{center}
\caption{AMR as a function of the spin-orbit interaction parameter
for various magnetizations (a) and as a function of the magnetization
for various Rashba coupling parameters (b) at fixed distance $s= 30\,{\rm nm}$.
Here the electron concentration $N=1.0\times10^{11} {\rm cm}^{-2}$. The thin wine lines indicate the corresponding AMR for short-range
electron-impurity scattering.}
\label{fig:amr}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\textwidth]{amrN}
\end{center}
\caption{Dependencies of AMR
on SOC constant $\alpha$ for different electron densities. The magnetization $M=1.5\,{\rm
meV}$ and the remote impurity distance $s= 50\,{\rm nm}$. The inset
shows AMR as a function of the electron density for Rashba coupling
$\alpha=4.0\times 10^{-11}\,{\rm eVm}$.}
\label{fig:amrN}
\end{figure}
We first consider the relative fractional changes in the
conductivity (RFC), which are defined by
\begin{eqnarray}
{\rm RFCx} &=& \frac{\Delta\sigma_{xx}}{\sigma_{av}}-\left(\frac{\Delta\sigma_{xx}}{\sigma_{av}}\right)_{\rm min}, \\
{\rm RFCy} &=& \frac{\Delta\sigma_{yx}}{\sigma_{av}}-\left(\frac{\Delta\sigma_{yx}}{\sigma_{av}}\right)_{\rm
min},
\end{eqnarray}
where $\Delta\sigma_{xx}=\sigma_{xx}-\sigma_{av}$,
$\Delta\sigma_{yx}=\sigma_{yx}-\sigma_{av}$, and $\sigma_{av}$ is
the average value of the longitudinal conductivity as the
magnetization is rotated through $360^\circ$ with respect to
[100]-axis. The subscript ``min'' denotes the corresponding minimum
value of the fractional change. Note that RFC is independent of the
impurity density.
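The density-independence of RFC can be checked in a short numerical sketch (the $\sigma(\xi)$ profile and all numbers below are hypothetical, chosen only for illustration): at leading order in the impurity concentration, a change of impurity density rescales all conductivities by a common factor, which cancels in the ratio.

```python
import numpy as np

def rfc(sigma, sigma_av):
    """Relative fractional change: Delta sigma / sigma_av minus its minimum."""
    frac = (sigma - sigma_av) / sigma_av
    return frac - frac.min()

# hypothetical sigma_xx(xi) over a full rotation of the in-plane magnetization
xi = np.linspace(0.0, 2.0 * np.pi, 361)
sigma = 5.0 + 0.3 * np.cos(2.0 * xi)     # arbitrary anisotropic profile
sigma_av = sigma.mean()

# a change of impurity density rescales every conductivity by the same factor
scale = 0.25
assert np.allclose(rfc(sigma, sigma_av), rfc(scale * sigma, scale * sigma_av))
```

By construction the minimum of RFC is zero, so the curves in Fig. \ref{fig:rfcs} are nonnegative.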
In Fig. \ref{fig:rfcs}, the relative fractional changes in the
longitudinal and
transverse conductivities are shown as functions of the angle between magnetization
in the plane and [100]-axis for this nonmagnetic remote disorder.
The corresponding thin wine solid lines are obtained for a
$\delta$-form short-range electron-disorder collision, $V(q)=V_0$,
independent of momentum. It is clear that the longitudinal
conductivity shows a strong anisotropy as the magnetization is aligned
along various directions when the nonmagnetic disorder is
long-ranged. However, the anisotropy completely vanishes for
short-range electron-impurity scattering, in agreement with the
previous studies \cite{kato2008iam,Trushin2009}. The degree of this
anisotropy depends strongly on the smoothness of the remote
disorder. With increasing impurity distance, the degree of the
anisotropy first increases and then drops once the distance is large
enough. In Fig. \ref{fig:rfcs}(b), it is seen that the transverse
conductivity is also anisotropic with respect to the direction of the
magnetization. The remote disorder can affect the degree of the
anisotropy of transverse conductivity, similar to the longitudinal
conductivity. It is also found that, when the nonmagnetic disorder
becomes short-ranged, the anisotropy of transverse conductivity also
vanishes completely. This confirms that the combined effect of
Rashba SOC, in-plane magnetization, and nonmagnetic remote disorder
could lead to AMR. Our numerical evaluation shows that these
longitudinal and transverse AMRs are consistent with the standard
phenomenology due to symmetry arguments
\cite{rushforth1938amc,Trushin2009}:
${\Delta\sigma_{xx}}/{\sigma_{av}} = C_I \cos 2\xi $,
${\Delta\sigma_{yx}}/{\sigma_{av}} = C_I \sin 2\xi$. Here $C_I$ is a
dimensionless constant, sometimes called the noncrystalline
coefficient in the literature. Note that the crystalline AMR
coefficient vanishes in this case, which is a special property of
the Rashba model; this does not hold for systems with other forms of
SOC, such as Dresselhaus SOC \cite{Trushin2009}.
Now we limit ourselves to AMRs defined as the relative change
between longitudinal resistivities for magnetization along and
normal to the current direction. We take the current direction along
[100]-axis. In this case, it is found that the transverse
conductivity vanishes. Hence, AMR is given by
\begin{equation}\label{}
{\rm
AMR}=\frac{\rho_{xx}^{\parallel}-\rho_{xx}^{\perp}}{(\rho_{xx}^{\parallel}+\rho_{xx}^{\perp})/2}
=2\frac{\sigma_{xx}^{\perp}-\sigma_{xx}^{\parallel}}{\sigma_{xx}^{\parallel}+\sigma_{xx}^{\perp}}.
\end{equation}
Here $\sigma_{xx}^{\parallel}$ and $\sigma_{xx}^{\perp}$ are the
corresponding longitudinal conductivities for $\bm M\parallel\bm J$
and $\bm M\perp\bm J$, with $\bm J$ the current density. One also
finds that this definition of AMR is independent of the disorder density.
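Since $\sigma_{yx}$ vanishes for this geometry, $\rho_{xx}=1/\sigma_{xx}$, and the resistivity and conductivity forms of the definition agree identically. A minimal check (the conductivity values are placeholders in arbitrary units):

```python
def amr_from_sigma(s_par, s_perp):
    """AMR from the conductivity form of the definition."""
    return 2.0 * (s_perp - s_par) / (s_par + s_perp)

def amr_from_rho(s_par, s_perp):
    """AMR from the resistivity form; rho_xx = 1/sigma_xx when sigma_yx = 0."""
    r_par, r_perp = 1.0 / s_par, 1.0 / s_perp
    return (r_par - r_perp) / ((r_par + r_perp) / 2.0)

s_par, s_perp = 3.1, 3.4   # hypothetical conductivities
assert abs(amr_from_sigma(s_par, s_perp) - amr_from_rho(s_par, s_perp)) < 1e-12
```

The equality is an exact algebraic identity, so the two definitions can be used interchangeably here.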
In Fig. \ref{fig:amr}(a), we plot AMRs as functions of spin-orbit
interaction constant
for various magnetizations at fixed impurity distance $s= 30\,{\rm
nm}$. It is seen that the magnitude of magnetization can affect
AMR strongly. A large AMR ($\sim24\%$) can be observed for a Rashba
coupling parameter up to $6\times10^{-11}\,{\rm eVm}$ at $M=2.5\,{\rm
meV}$. With increasing Rashba SOC coefficient, AMR grows and may
saturate at strong coupling. We also evaluate the dependencies
of AMR on the magnitude of magnetization for various SOC constants in Fig. \ref{fig:amr}(b). With the rise of
magnetization, AMR first increases, and then decreases.
However, AMR is always positive. The strength of
the spin-orbit interaction can affect both the value and the
position of maximum AMR. For short-range
nonmagnetic impurity, AMR vanishes completely.
In order to investigate the density dependence of AMR, in Fig.
\ref{fig:amrN}, AMR is calculated for various electron densities at
fixed magnetization and disorder distance. It is clear that AMR is
very sensitive to the electron concentration. With increasing
density, the AMR arising from remote charged-impurity scattering in the
spin-orbit-coupled semiconductor with in-plane magnetization drops
quickly. For
$N=5.0\times10^{11}\,{\rm cm}^{-2}$ at large coupling constant,
${\rm AMR}\sim0.45\%$. It is small but still measurable
experimentally \cite{rushforth1938amc}. The inset shows the
dependence of AMR on electron density for $\alpha=4\times
10^{-11}\,{\rm eVm}$. This implies vanishing AMR in the limit of
$k_{+}(\theta_{\bm k})\approx k_{-}(\theta_{\bm k})$ with
$k_{\pm}(\theta_{\bm k})$ as the two angle-dependent Fermi wave
vectors.
We now comment on possible experimental tests of the present
results. Since we deal with AMR arising from nonmagnetic disorder, a
nonmagnetic $n$-type InAs-based heterojunction is well suited. The
in-plane magnetization may be induced by an in-plane magnetic field:
the field corresponding to a magnetization of $M=1\,{\rm meV}$ is
$2.16\,\rm T$ (in InAs-based heterojunctions, the effective $g$-factor
is $g=8$ \cite{Smith1987}).
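The quoted field follows from the Zeeman relation; the convention $M=g\mu_{\rm B}B$ is inferred here from the quoted numbers rather than stated explicitly in the text:

```python
mu_B = 5.788381e-5   # Bohr magneton in eV/T
g = 8.0              # effective g-factor in InAs-based heterojunctions
M = 1.0e-3           # magnetization parameter in eV (1 meV)

# field required for M, with the convention M = g * mu_B * B
B = M / (g * mu_B)
assert abs(B - 2.16) < 0.01
```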
The Rashba SOC constant can be tuned by controlling the gate
voltage. The usual Hall setup is well suited to measuring this AMR.
We note that Papadakis {\it et al.} reported the observation of AMR
of two-dimensional holes in nonmagnetic GaAs \cite{Papadakis2000}.
Our present study may provide a possible interpretation of that
novel phenomenon. However, a careful theoretical investigation of
$p$-type semiconductor systems is still required.
\section{Conclusion}
In summary, AMR for two-dimensional electron systems with a
Rashba-type spin-orbit splitting and an in-plane magnetization is
investigated for nonmagnetic impurity scattering. It is found that
the combined effect of SOC, in-plane magnetization, and remote charged
disorder leads to AMR. The impurity distance can strongly
affect the degree of anisotropy of the longitudinal and transverse
conductivities. A strong density dependence of AMR is also
demonstrated.
\begin{acknowledgments}
CMW thanks M. Trushin for useful discussions. CMW acknowledges
support from Research Startup Funds of AYNU.
\end{acknowledgments}
<?php
namespace Database\Seeders;
use Illuminate\Database\Seeder;
class DatabaseSeeder extends Seeder
{
/**
* Run the database seeds.
*
* @return void
*/
public function run()
{
$this->call(UserSeeder::class);
$this->call(FriendsSeeder::class);
}
}
Q: Umbraco gets error on dev server I just published my local version to a dev server. My local is working perfectly, but the dev has an error:
Could not load file or assembly 'System.Net.Http.Formatting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
Does anybody know how I can fix it?
I switched to VS 2015 because of the NuGet version. Now I get a different error in the output window:
No way to resolve conflict between "System.Net.Http.Formatting, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" and "System.Net.Http.Formatting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35". Choosing "System.Net.Http.Formatting, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" arbitrarily.
1> Consider app.config remapping of assembly "System.Net.Http.Formatting, Culture=neutral, PublicKeyToken=31bf3856ad364e35" from Version "5.0.0.0" [] to Version "5.2.3.0" [F:\Projects\frs\frs\bin\System.Net.Http.Formatting.dll] to solve conflict and get rid of warning.
1> Consider app.config remapping of assembly "System.Web.WebPages, Culture=neutral, PublicKeyToken=31bf3856ad364e35" from Version "1.0.0.0" [] to Version "2.0.0.0" [F:\Projects\frs\frs\bin\System.Web.WebPages.dll] to solve conflict and get rid of warning.
1> Consider app.config remapping of assembly "System.Web.Mvc, Culture=neutral, PublicKeyToken=31bf3856ad364e35" from Version "3.0.0.0" [] to Version "4.0.0.0" [F:\Projects\frs\frs\bin\System.Web.Mvc.dll] to solve conflict and get rid of warning.
A: Maybe this will help
https://our.umbraco.org/forum/getting-started/installing-umbraco/53820-Could-not-load-file-or-assembly-SystemNetHttp-after-upgrade-to-v-621
Change this:
<dependentAssembly>
<assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
</dependentAssembly>
to this:
<dependentAssembly>
<assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
A: I had the same problem with Umbraco. I had installed Umbraco v7.4.3 and tried the Ucalendar package in it.
You have to remove the package and restore the App_Data and Config folders.
\section{Introduction}
\label{sec:Introduction}
In the Standard Model (SM), the observed failure of parity conservation in
the low-energy interactions of nucleons and nuclei follows
from a subtle interplay of electroweak and strong interaction effects,
with the nonperturbative nature of the strong interaction acting to confound
the theoretical interpretation of the effects
observed thus far. The low-energy nature of these studies
has meant that
their theoretical description has focused on phenomenological realizations in hadronic degrees of
freedom, with the long-held expectation that the charged pion exchange
interaction should strongly dominate~\cite{Desplanques:1979hn}.
Despite the small mass of the pion relative to the $1\, \rm GeV$ scale,
this has not proven to be the case;
rather, isoscalar and isotensor interactions,
also appear to play important phenomenological
roles~\cite{Phillips:2014kna,Schindler:2015nga,Gardner2017paradigm,NPDGamma:2018vhh,n3He:2020zwd}.
Direct theoretical insight on
the relative importance of isoscalar,
isotensor, and isovector parity-violating nucleon-nucleon (NN) interactions has
come from the analysis of NN amplitudes
in pionless effective field theory (EFT)
in the large number of colors ($N_c$) limit~\cite{Zhu:2009,Phillips:2014kna,Schindler:2015nga}.
In this paper we revisit this issue
within a framework that makes direct contact to the degrees
of freedom of the
Standard Model (SM) Lagrangian. That is, starting with the
effective, flavor-conserving, parity-violating Hamiltonian of quarks apropos to NN
interactions at the weak scale,
we use renormalization group techniques, including leading-order
QCD evolution and operator-mixing effects and matching across heavy-quark thresholds,
to determine the effective weak Hamiltonian for $u,d$, and $s$ quarks at the $2\,\rm GeV$ scale.
Work of this kind exists in the literature, starting with that of Ref.~\cite{Desplanques:1979hn},
though the existing work has either made additional calculational
approximations~\cite{Desplanques:1979hn,Dubovik:1986pj,Tiburzi:2012xx} or has
specialized to the isovector case~\cite{Dai:1991bx,kaplan1993analysis,Tiburzi:2012hx}.
In this work we consider all three isosectors, and since
the
renormalization group effects we consider
respect isospin symmetry,
we are able to perform the calculational checks illustrated
in Fig.~\ref{Fig:rectangle} --- we can determine the isospin-separated effective Hamiltonian
just below the $W$ mass scale and evolve it to hadronic scales {\it or} we can evolve the full
effective Hamiltonian and effect the isospin separation at the same low-energy scale and find the same result.
As an application, we employ the weak effective Hamiltonian we have constructed to compute
the parity-violating meson-NN coupling constants that appear in the parity-violating
hadronic Hamiltonian of Desplanques, Donoghue, and Holstein (DDH)~\cite{Desplanques:1979hn},
for comparison with recent experimental results.
We find that the experimental observables
sensitive to
the isoscalar and isovector sectors are well-described by our results.
\begin{figure}
\centering
\begin{tikzpicture}[scale=3,line width=1pt]
\node (1) {$C(M_W)$};
\node (2) [ right of=1, xshift=2cm] {$C^{I=1}(M_W)$};
\node (4) [below of=1] {$C(\Lambda)$};
\node (3) [ right of=4, xshift=2cm] {$C^{I=1}(\Lambda$)};
\draw [arrow] (1) --node[anchor=east] {RG} (4);
\draw [arrow] (1) --node[anchor= south] {I=1 extract} (2);
\draw [arrow] (2) --node[anchor=west] {RG} (3);
\draw [arrow] (4) --node[anchor= north] {I=1 extract} (3);
\end{tikzpicture}
\hspace{1.5cm}
\begin{tikzpicture}[scale=3,line width=1pt]
\node (1) {$C(M_W)$};
\node (2) [right of=1, xshift=3cm] {$C^{I=0\oplus 2}(M_W)$};
\node (4) [below of=1] {$C(\Lambda)$};
\node (3) [right of=4, xshift=3cm] {$C^{I=0\oplus2}(\Lambda$)};
\draw [arrow] (1) --node[anchor=east] {RG} (4);
\draw [arrow] (1) --node[anchor= south] {I=0$\oplus$2 extract} (2);
\draw [arrow] (2) --node[anchor=west] {RG} (3);
\draw [arrow] (4) --node[anchor= north] {I=0$\oplus$2 extract} (3);
\end{tikzpicture}
\centering
\caption{Illustration of renormalization group flow in leading-order QCD with and without isospin separation.}
\label{Fig:rectangle}
\end{figure}
\section{Effective Hamiltonian framework}
We start by
building an effective theory at the $W$ mass scale,
comprised of five open flavors of quarks. We then
proceed to use QCD renormalization group (RG) techniques
to evolve it to
hadronic energy scales, $\Lambda \sim 2\, \rm GeV$, for which
only the three dynamical quarks, $u$, $d$, and $s$, are pertinent.
Thus we begin by considering just these three flavors.
Summing the contributions from all the $\Delta S=0$ tree-level diagrams, we get the Hamiltonian at the $W$ mass scale.
We extract and retain the parity-violating (PV) part of each amplitude to obtain the PV Hamiltonian. It helps to keep the charged- and neutral-gauge-boson exchange sectors separate:
\begin{equation}\label{PVH}
\mathcal{H}^{\rm PV}_{\rm eff} = \mathcal{H}^{\rm PV}_Z + \mathcal{H}^{\rm PV}_W \,,
\end{equation}
where for the $Z^0$ sector
\begin{equation}
\begin{split}
&\mathcal{H}^{\rm PV}_Z(M_W) = \frac{G_F s_w^2}{3\sqrt{2}} \left[ \Theta_1 - 3\left( \frac{1}{2s_w^2}-1\right) \Theta_5\right]\\
\Theta_1 &= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\beta}\\
\Theta_5 &= [(\bar{u}u)_V-(\bar{d}d)_V-(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\beta}\,. \\
\end{split}
\end{equation}
Note, e.g., that $(\bar{u}{u})_V^{\alpha\alpha} (\bar{d}{d})_A^{\beta\beta}\equiv
(\bar{u}^\alpha\gamma^\mu {u}^\alpha) (\bar{d}^\beta \gamma_{\mu}\gamma_5 {d}^\beta)$ and
that
our enumeration of the
different 4-quark operators in this section
anticipates later
developments.
For the $W^\pm$ sector, we include the pertinent
Cabibbo angle
contributions:
\begin{equation}
\begin{split}
\mathcal{H}^{\rm PV}_W(M_W) &= \frac{G_F s_w^2}{3\sqrt{2}} \left(\frac{-3}{s_w^2} (\cos^2 \theta_c)\Theta_9+\frac{-3}{s_w^2}(\sin^2 \theta_c)
\Theta_{11} \right)\\
\Theta_9 &= (\bar{u}d)_V^{\alpha \alpha}(\bar{d}u)_A^{\beta \beta}+(\bar{d}u)_V^{\alpha \alpha}(\bar{u}d)_A^{\beta \beta}\\
\Theta_{11} &= (\bar{u}s)_V^{\alpha \alpha}(\bar{s}u)_A^{\beta \beta}+(\bar{s}u)_V^{\alpha \alpha}(\bar{u}s)_A^{\beta \beta} \,
\end{split}
\end{equation}
with $\lambda \equiv \sin \theta_c=0.2253$, so that
our expression is accurate to ${\cal O}(\lambda^4)$.
Moreover,
$\alpha$ and $\beta$ are color indices,
$s_w^2=0.231$, and
$G_F=1.166 \times 10^{-5}\, \rm{GeV^{-2}}$~\cite{Zyla:2020zbs}.
At leading order (LO), the QCD corrections to operators
we consider
arise from gluon loops as shown in Fig.~\ref{LO-QCD}.
\begin{figure}[!h]
\centering
\includegraphics[width=9cm]{qcd_correc1.png.jpg}
\caption{QCD corrections to a four-quark
weak process in
${\cal O}(\alpha_s)$.}
\label{LO-QCD}
\end{figure}
In the $Z$ exchange sector, the following operators mix and form a closed set
under such corrections:
\begin{equation}\label{z ops}
\begin{split}
\Theta_1 & = [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\beta}\\
\Theta_2 & = [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\beta}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\alpha}\\
\Theta_3 & = [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\alpha}[(\bar{u}u)_V-(\bar{d}d)_V-(\bar{s}s)_V]^{\beta\beta}\\
\Theta_4 & = [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\beta}[(\bar{u}u)_V-(\bar{d}d)_V-(\bar{s}s)_V]^{\beta\alpha}\\
\Theta_5 & = [(\bar{u}u)_V-(\bar{d}d)_V-(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\beta}\\
\Theta_6 & = [(\bar{u}u)_V-(\bar{d}d)_V-(\bar{s}s)_V]^{\alpha\beta}[(\bar{u}u)_A-(\bar{d}d)_A-(\bar{s}s)_A]^{\beta\alpha}\\
\Theta_7 & = [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\alpha}[(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\beta\beta}\\
\Theta_8 & = [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\beta}[(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\beta\alpha} \,,
\end{split}
\end{equation}
whereas
the operators from $W$ exchange form a separate group with
\begin{equation}\label{w ops}
\begin{split}
\Theta_9 &= (\bar{u}d)_V^{\alpha \alpha}(\bar{d}u)_A^{\beta \beta}+(\bar{d}u)_V^{\alpha \alpha}(\bar{u}d)_A^{\beta \beta}\\
\Theta_{10} &= (\bar{u}d)_V^{\alpha \beta}(\bar{d}u)_A^{\beta \alpha}+(\bar{d}u)_V^{\alpha \beta}(\bar{u}d)_A^{\beta \alpha}\\
\Theta_{11} &= (\bar{u}s)_V^{\alpha \alpha}(\bar{s}u)_A^{\beta \beta}+(\bar{s}u)_V^{\alpha \alpha}(\bar{u}s)_A^{\beta \beta}\\
\Theta_{12} &= (\bar{u}s)_V^{\alpha \beta}(\bar{s}u)_A^{\beta \alpha}+(\bar{s}u)_V^{\alpha \beta}(\bar{u}s)_A^{\beta \alpha} \,.
\end{split}
\end{equation}
The extension to include heavier quarks is made possible by the structure shared by \textit{u-like} quarks and \textit{d-like} quarks. For example, once we include all five flavors,
$\Theta_1$ becomes
\begin{equation}
\Theta_1= [(\bar{u}u)_V+(\bar{c}c)_V+(\bar{d}d)_V+(\bar{s}s)_V+(\bar{b}b)_V]^{\alpha\alpha}[(\bar{u}u)_A+(\bar{c}c)_A-(\bar{d}d)_A-(\bar{s}s)_A-(\bar{b}b)_A]^{\beta\beta} \,.
\end{equation}
Next, using the results of Ref.~\cite{Miller1983anomalousdim}, we calculate the anomalous dimension matrix that encodes the QCD mixing of the above operators, a necessary ingredient of the RG analysis:
\begin{equation}
\begin{split}
\gamma_Z(\mu) &=-\frac{g_s^2}{8\pi^2}\begin{pmatrix}
\frac{2}{9} & \frac{-2}{3} & 1 &-3 & 0 & 0 & 0 & 0\\
-\frac{3}{2}+\frac{2}{9}n_f & \frac{9}{2}-\frac{2}{3}n_f & \frac{-3}{2} & \frac{-7}{2} & 0 & 0 & 0 & 0\\
\frac{11}{9} & \frac{-11}{3} & 0 & 0 & 0 & 0 & 0 & 0\\
\frac{-3}{2} & \frac{-7}{2} & \frac{-3}{2} & \frac{9}{2} & 0 & 0 & -\frac{2}{9}n_Q &\frac{2}{3}n_Q\\
0 & 0 & 0 & 0 & 1 & -3 & \frac{2}{9} & -\frac{2}{3}\\
-\frac{2}{9}n_Q & \frac{2}{3}n_Q & 0 & 0 & -3 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & \frac{11}{9} & -\frac{11}{3}\\
0 & 0 & 0 & 0 & 0 & 0 & \frac{2n_f}{9}-3 & 1-\frac{2n_f}{3} \\
\end{pmatrix} \,,
\end{split}
\end{equation}
where $n_f$ is the number of dynamical quarks at the considered energy scale and $n_Q=n_d - n_u$, the difference in open $d$-like and $u$-like flavors. Finally,
\begin{equation}\label{w gamma}
\begin{split}
\gamma_W(\mu) &=-\frac{g_s^2}{8\pi^2}
\begin{pmatrix}
1 & -3 & 0 & 0\\
-3 & 1 & 0 & 0\\
0 & 0 & 1 & -3\\
0 & 0 & -3 & 1
\end{pmatrix} \,.
\end{split}
\end{equation}
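Since each $2\times2$ block of $\gamma_W$ has the same form, it is diagonalized by the sum and difference combinations, in analogy with the familiar $O_\pm$ operators of the $\Delta S=1$ effective Hamiltonian (a standard observation; the labels $\Theta_\pm$ are ours, not the text's):

```latex
\Theta_{\pm} \equiv \Theta_9 \pm \Theta_{10}\,, \qquad
\begin{pmatrix} 1 & -3 \\ -3 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ \pm 1 \end{pmatrix}
= (1 \mp 3)\begin{pmatrix} 1 \\ \pm 1 \end{pmatrix}\,,
```

so $\Theta_+$ and $\Theta_-$ (and likewise $\Theta_{11}\pm\Theta_{12}$) renormalize multiplicatively under the leading-order flow.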
We now turn to our numerical analysis.
\section{Renormalization group flow}
We start by writing the PV Hamiltonian in Eq.~(\ref{PVH}) compactly as
\begin{equation}
\mathcal{H}_{\rm eff}^{\rm PV}(\mu) = \frac{G_Fs_w^2}{3\sqrt{2}}\sum_{i=1}^{12} C_i(\mu)\Theta_i \,.
\end{equation}
The Wilson coefficients $C_i$ flow between different energy scales according to the equation
\begin{equation}
\Vec{C}(\mu) = \exp\left[\int_{g_s(M_W)}^{g_s(\mu)}dg \frac{\gamma^T(\mu)}{\beta(g_s)}\right]\Vec{C}(M_W) \,.
\end{equation}
Here the $\gamma$ matrices from the $Z$ and $W$ sectors are combined as $\gamma = \gamma_Z \oplus \gamma_W$, and the QCD $\beta$ function is
\begin{equation}
\beta(g_s) = -\frac{g_s^3}{48\pi^2}(33-2n_f) \,.
\end{equation}
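A remark on the structure of the solution (standard at leading order; writing the anomalous dimension matrices as $\gamma = -(g_s^2/8\pi^2)\,\Gamma$, with $\Gamma$ the numerical matrices quoted above, is our normalization, made explicit here): in a basis where $\Gamma$ is diagonal, the exponential collapses to powers of the coupling ratio,

```latex
\vec{C}(\mu) \;=\;
\left[\frac{\alpha_s(\mu)}{\alpha_s(M_W)}\right]^{\Gamma^{T}/\beta_0}
\vec{C}(M_W)\,,
```

with the evolution carried out piecewise between heavy-quark thresholds, using the appropriate $n_f$ (and hence $\beta_0$ and $\Gamma$) in each interval and matching $\vec{C}$ continuously at each threshold.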
As our work is limited to a LO analysis,
we use the one-loop expression for
the strong-coupling parameter $\alpha_s(\mu)=g_s^2/4\pi$:
\begin{equation}
\alpha_s(\mu) = \frac{4\pi}{\beta_0\, {\rm ln} (\mu^2/\Lambda^2)} \quad {\rm with} \quad \beta_0 = \frac{1}{3}(33-2n_f) \,,
\end{equation}
but we have used the two-loop expression for $\alpha_s(\mu)$ to set the QCD scale parameters in different energy ranges. With the input value at $Z^0$ mass
scale, $\alpha_s(M_Z) = 0.117$~\cite{Zyla:2020zbs},
the criterion of continuity across the
heavy quark flavor thresholds
leads to the following scale parameters: for five flavor QCD, $\Lambda_{5}=0.214 \,\rm GeV$, for four flavor QCD, $\Lambda_{4}= 0.307 \,\rm GeV$, and for three flavor QCD, $\Lambda_{3}=0.352 \,\rm GeV$. The resulting strong interaction strength ratios are
\begin{equation}
\frac{\alpha_s(M_b=4.18\,\rm GeV)}{\alpha_s(M_W=80.379\,\rm GeV)}= 2.09; \quad \frac{\alpha_s(M_c=1.27\,\rm GeV)}{\alpha_s(M_b)}= 1.88; \quad \frac{\alpha_s(2\,\rm GeV)}{\alpha_s(M_c)}=0.74 \,.
\end{equation}
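These ratios can be sketched directly from the one-loop formula with the quoted $\Lambda$ values; since the $\Lambda$'s themselves were fixed using the two-loop expression, the one-loop values reproduce the quoted ratios only approximately, so the checks below use loose bounds rather than exact equality:

```python
import math

def alpha_s(mu, Lam, nf):
    """One-loop running coupling; the overall 4*pi factor cancels in ratios."""
    beta0 = (33.0 - 2.0 * nf) / 3.0
    return 4.0 * math.pi / (beta0 * math.log(mu**2 / Lam**2))

M_W, M_b, M_c = 80.379, 4.18, 1.27            # GeV, as quoted in the text
r_bW = alpha_s(M_b, 0.214, 5) / alpha_s(M_W, 0.214, 5)
r_cb = alpha_s(M_c, 0.307, 4) / alpha_s(M_b, 0.307, 4)
r_2c = alpha_s(2.0, 0.352, 3) / alpha_s(M_c, 0.352, 3)

assert 1.9 < r_bW < 2.2    # quoted: 2.09
assert 1.7 < r_cb < 2.1    # quoted: 1.88
assert 0.6 < r_2c < 0.9    # quoted: 0.74
```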
Performing the RG flow of the Wilson coefficients at energy scale $M_W$, we obtain the coefficients at hadronic scales:
\begin{equation}
C(M_W) = \begin{pmatrix}
1\\
0\\
0\\
0\\
-3.49\\
0\\
0\\
0\\
-13.0\cos^2\theta_c\\
0\\
-13.0\sin^2\theta_c\\
0
\end{pmatrix} \overset{{\rm RG}}{\longrightarrow}
C(2\,\rm GeV) = \begin{pmatrix}
1.09\\
0.018\\
0.199\\
-0.583\\
-4.36\\
1.72\\
-0.170\\
0.332\\
-16.2 \cos^2\theta_c\\
6.38 \cos^2\theta_c\\
-16.2 \sin^2\theta_c\\
6.38 \sin^2\theta_c
\end{pmatrix} \,,
\end{equation}
where the coefficients have been
simplified with the substitution $s_w^2=0.231$. It should be noted
that we perform the RG flow below $2\,\rm GeV$ to integrate out the
\textit{charm} quark and then run upwards
so as to work with an $N_f=2+1$ theory at $2\,\rm GeV$. An alternative
would be to evolve
to $2\, \rm GeV$ with an $N_f=2+1+1$ theory and consider
the $u,d,s$ contributions to $\mathcal{H}_{\rm eff}$ only.
The resulting Wilson coefficients are very similar
in the two approaches, but we adopt the first approach for
definiteness.
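As a quick consistency check, the boundary values $C(M_W)$ follow directly from the tree-level Hamiltonian quoted earlier, with $s_w^2=0.231$:

```python
s_w2 = 0.231

C1 = 1.0                                  # coefficient of Theta_1 in H_Z^PV
C5 = -3.0 * (1.0 / (2.0 * s_w2) - 1.0)    # coefficient of Theta_5 in H_Z^PV
C9 = -3.0 / s_w2                          # coefficient of Theta_9 (times cos^2 theta_c)

assert abs(C5 - (-3.49)) < 0.01
assert abs(C9 - (-13.0)) < 0.05
```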
We conclude this section by comparing our results
with those of Ref.~\cite{Desplanques:1979hn}, in which the QCD evolution effects were addressed
via a phenomenological enhancement factor, $K$:
\begin{equation}
K = 1+\frac{g^2(\mu^2)}{16\pi^2}\Bigg( 11-\frac{2}{3}n_f \Bigg)\,\ln\Bigg( \frac{M^2_W}{\mu^2}\Bigg) \,,
\end{equation}
where $\mu$ is any energy scale below $M_W$.
In that work, the Wilson coefficients at hadronic scales, corresponding to $K\approx4$, are found to be, e.g.,
$C_1^{\rm DDH}=-1.15$, $C_3^{\rm DDH} = 0$, $C_5^{\rm DDH}=3.95$, $C_7^{\rm DDH}=0.44$, and $C_9^{\rm DDH}=14.67 \cos^2{(\theta_c)}$,
where there is an overall sign difference due to the different
conventions used. We observe that the additional QCD effects
we have included play an important numerical role.
\section{Isosector extractions}
With the full PV effective Hamiltonian in hand, we extract the contributions from the individual isosectors. For example,
for the isovector sector:
\begin{equation}
\mathcal{H}_{\rm eff}^{\rm PV}(\mu) = \frac{G_Fs_w^2}{3\sqrt{2}}\sum_{i=1}^{12} C_i(\mu) \Theta_i \longrightarrow \mathcal{H}^{I=1}_{\rm eff}(\mu) = \frac{G_Fs_w^2}{3\sqrt{2}}\sum_{i=1}^{12} C^{I=1}_i(\mu) \Theta_i^{I=1} \,.
\end{equation}
Considering the
operators in Eqs.(\ref{z ops}, \ref{w ops}) we see
$\Theta_{1-4,11,12}$ contribute one operator each: $C_j\Theta_j \rightarrow C^{I=1}_j\Theta^{I=1}_j$ with $C_j=C^{I=1}_j$.
Operators $\Theta_5$ and $\Theta_6$ contribute two operators each:
$C_5\Theta_5 \rightarrow C^{I=1}_5\Theta_5^{I=1} + C^{I=1}_7\Theta_7^{I=1}$ with $C^{I=1}_5 = C^{I=1}_7 = -C_5$
and $C_6\Theta_6 \rightarrow C^{I=1}_6\Theta_6^{I=1} + C^{I=1}_8\Theta_8^{I=1}$
with $C^{I=1}_6 = C^{I=1}_8 = -C_6$.
Operators $\Theta_{7-10}$ do not contribute to the isovector sector. Thus the extracted isovector operator set is
\begin{equation}\label{isovec op set}
\begin{split}
\Theta_1&^{I=1}= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\beta}\\
\Theta_2&^{I=1}= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\beta}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\alpha}\\
\Theta_3&^{I=1}= [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\alpha}[(\bar{u}u)_V-(\bar{d}d)_V]^{\beta\beta}\\
\Theta_4&^{I=1}= [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\beta}[(\bar{u}u)_V-(\bar{d}d)_V]^{\beta\alpha}\\
\Theta_5&^{I=1}= (\bar{s}s)_V^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\beta}\\
\Theta_6&^{I=1}= (\bar{s}s)_V^{\alpha\beta}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\alpha}\\
\Theta_7&^{I=1}= (\bar{s}s)_A^{\alpha\alpha}[(\bar{u}u)_V-(\bar{d}d)_V]^{\beta\beta}\\
\Theta_8&^{I=1}= (\bar{s}s)_A^{\alpha\beta}[(\bar{u}u)_V-(\bar{d}d)_V]^{\beta\alpha}\\
\Theta_9&^{I=1}= (\bar{u}s)_V^{\alpha\alpha}(\bar{s}u)_A^{\beta\beta} + (\bar{s}u)_V^{\alpha\alpha}(\bar{u}s)_A^{\beta\beta}\\
\Theta_{10}&^{I=1}= (\bar{u}s)_V^{\alpha\beta}(\bar{s}u)_A^{\beta\alpha} + (\bar{s}u)_V^{\alpha\beta}(\bar{u}s)_A^{\beta\alpha}\\
\end{split} \,,
\end{equation}
and the extracted isovector Wilson coefficients at high and low energies are:
\begin{equation}\label{vec wil coef}
C^{I=1}(M_W) = \left( \begin{array}{c}
1\\
0\\
0\\
0\\
3.49\\
0\\
3.49\\
0\\
-13.0\sin^2\theta_c\\
0
\end{array} \right)
\quad \mathrm{and} \quad C^{I=1}(2\,\rm GeV)=
\left( \begin{array}{c}
1.09\\
0.018\\
0.199\\
-0.583\\
4.36\\
-1.72\\
4.36\\
-1.72\\
-16.2\sin^2\theta_c\\
6.38\sin^2\theta_c
\end{array} \right) \,.
\end{equation}
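A quick numerical cross-check of this extraction: applying the mapping stated above to the full twelve-component $C(2\,\rm GeV)$ reproduces the ten isovector coefficients. The $\cos^2\theta_c$ and $\sin^2\theta_c$ prefactors are tracked separately, so each entry holds only the numerical coefficient:

```python
# Full C(2 GeV); entries 9-12 are the prefactors of cos^2(theta_c) (first
# pair) and sin^2(theta_c) (second pair).
C_full = [1.09, 0.018, 0.199, -0.583, -4.36, 1.72, -0.170, 0.332,
          -16.2, 6.38,   # x cos^2(theta_c)
          -16.2, 6.38]   # x sin^2(theta_c)

# Extraction rules from the text: C^{I=1}_{1-4} = C_{1-4};
# C^{I=1}_5 = C^{I=1}_7 = -C_5; C^{I=1}_6 = C^{I=1}_8 = -C_6;
# C^{I=1}_{9,10} = C_{11,12} (the sin^2 pieces).
C_I1 = C_full[:4] + [-C_full[4], -C_full[5], -C_full[4], -C_full[5],
                     C_full[10], C_full[11]]
print(C_I1)
```

The output matches the right-hand column of the equation above, entry by entry.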
Alternatively, an
isovector RG analysis can be performed directly to get $\Vec{C}^{I=1}(2\,\rm GeV)$ from $\Vec{C}^{I=1}(M_W)$ using the anomalous dimension matrix corresponding to the operator set in Eq.(\ref{isovec op set}):
\begin{equation}
\begin{split}
\gamma_Z^{I=1} &=-\frac{g_s}{8\pi^2}\begin{pmatrix}
\frac{2}{9} & \frac{-2}{3} & 1 &-3 & 0 & 0 & 0 & 0\\
-\frac{3}{2}+\frac{2}{9}n_f & \frac{9}{2}-\frac{2}{3}n_f & \frac{-3}{2} & \frac{-7}{2} & 0 & 0 & 0 & 0\\
\frac{11}{9} & \frac{-11}{3} & 0 & 0 & 0 & 0 & 0 & 0\\
\frac{-3}{2} & \frac{-7}{2} & \frac{-3}{2} & \frac{9}{2} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -3\\
\frac{2}{9}n_Q & -\frac{2}{3}n_Q & 0 & 0 & \frac{-3}{2} & \frac{9}{2} & \frac{-3}{2} & \frac{-7}{2}\\
0 & 0 & 0 & 0 & 1 & -3 & 0 & 0\\
0 & 0 & 0 & 0 & \frac{-3}{2} & \frac{-7}{2} & \frac{-3}{2} & \frac{9}{2}\\
\end{pmatrix}
\end{split}
\end{equation}
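To illustrate how such a direct isovector RG analysis proceeds, the sketch below exponentiates the matrix above at leading-log order. The frozen effective coupling, the choices of $n_f$ and $n_Q$, and the convention $dC/d\ln\mu = \gamma^T C$ are illustrative assumptions — this is a schematic stand-in for the threshold-matched evolution actually used in the text:

```python
import numpy as np

def gamma_hat(nf, nQ):
    # Dimensionless matrix in gamma_Z^{I=1} above; the full anomalous
    # dimension is -(g_s/(8*pi^2)) * gamma_hat, as written in the text.
    return np.array([
        [2/9,            -2/3,           1,    -3,    0,    0,    0,    0],
        [-3/2 + 2*nf/9,   9/2 - 2*nf/3, -3/2, -7/2,   0,    0,    0,    0],
        [11/9,           -11/3,          0,    0,     0,    0,    0,    0],
        [-3/2,           -7/2,          -3/2,  9/2,   0,    0,    0,    0],
        [0,               0,             0,    0,     0,    0,    1,   -3],
        [2*nQ/9,         -2*nQ/3,        0,    0,    -3/2,  9/2, -3/2, -7/2],
        [0,               0,             0,    0,     1,   -3,    0,    0],
        [0,               0,             0,    0,    -3/2, -7/2, -3/2,  9/2],
    ])

def expm_series(M, terms=40):
    # Truncated matrix exponential; adequate for this small illustration.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def evolve(C, t, g_eff=2.1, nf=5, nQ=2):
    # One-step leading-log evolution C(mu) = exp(gamma^T * t) C(M_W),
    # t = ln(mu/M_W), with a frozen effective coupling g_eff (assumed).
    gamma = -(g_eff / (8 * np.pi**2)) * gamma_hat(nf, nQ)
    return expm_series(gamma.T * t) @ C
```

At $t=0$ the evolution operator reduces to the identity, as it must; a faithful analysis would instead integrate over the running coupling and match at each heavy-flavor threshold.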
The corresponding W-sector matrix can be easily obtained by referring to Eq.~(\ref{w gamma}).
The set of Wilson coefficients
obtained via RG flow exactly matches the extracted coefficient set in Eq.~(\ref{vec wil coef}), in agreement with Fig. (\ref{Fig:rectangle}). A purely
isovector Z-sector RG analysis was performed in Ref.~\cite{Dai:1991bx}.
Our
results are in agreement when the same inputs are used.
We can make similar extractions in the $I=0 \oplus 2$ sector. The corresponding operators are
\begin{equation}\label{isoeven ops}
\begin{split}
\Theta_1&^{I=0 \oplus 2}= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\alpha}[(\bar{s}s)_A]^{\beta\beta}\\
\Theta_2&^{I=0 \oplus 2}= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\beta}[(\bar{s}s)_A]^{\beta\alpha}\\
\Theta_3&^{I=0 \oplus 2}= [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\alpha}[(\bar{s}s)_V]^{\beta\beta}\\
\Theta_4&^{I=0 \oplus 2}= [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\beta}[(\bar{s}s)_V]^{\beta\alpha}\\
\Theta_5&^{I=0 \oplus 2}= [(\bar{u}u)_V-(\bar{d}d)_V]^{\alpha\alpha}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\beta}+(\bar{s}s)_V^{\alpha\alpha}(\bar{s}s)_A^{\beta\beta}\\
\Theta_6&^{I=0 \oplus 2}= [(\bar{u}u)_V-(\bar{d}d)_V]^{\alpha\beta}[(\bar{u}u)_A-(\bar{d}d)_A]^{\beta\alpha}+(\bar{s}s)_V^{\alpha\beta}(\bar{s}s)_A^{\beta\alpha}\\
\Theta_7&^{I=0 \oplus 2}= [(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\alpha\alpha}[(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\beta\beta}\\
\Theta_8&^{I=0 \oplus 2}= [(\bar{u}u)_A+(\bar{d}d)_A+(\bar{s}s)_A]^{\alpha\beta}[(\bar{u}u)_V+(\bar{d}d)_V+(\bar{s}s)_V]^{\beta\alpha}\\
\Theta_9&^{I=0 \oplus 2}= (\bar{u}d)_V^{\alpha\alpha}(\bar{d}u)_A^{\beta\beta} + (\bar{d}u)_V^{\alpha\alpha}(\bar{u}d)_A^{\beta\beta}\\
\Theta_{10}&^{I=0 \oplus 2}= (\bar{u}d)_V^{\alpha\beta}(\bar{d}u)_A^{\beta\alpha} + (\bar{d}u)_V^{\alpha\beta}(\bar{u}d)_A^{\beta\alpha}\\
\end{split} \,,
\end{equation}
and the extracted Wilson coefficients for
the $I=0 \oplus 2$ sector at high and low energies are
\begin{equation}\label{sca ten wil coef}
C^{ I=0\oplus2}(M_W) = \left( \begin{array}{c}
-1\\
0\\
0\\
0\\
-3.49\\
0\\
0\\
0\\
-13.0\cos^2 \theta_c\\
0\\
\end{array} \right)
\quad \mathrm{and}\quad C^{ I=0\oplus2}(2\,\rm GeV)=
\left( \begin{array}{c}
-1.09\\
-0.018\\
-0.199\\
0.583\\
-4.36\\
1.72\\
-0.170\\
0.332\\
-16.2\cos^2 \theta_c\\
6.38\cos^2\theta_c\\
\end{array} \right) \,.
\end{equation}
If one wishes to perform an
RG analysis of the isoeven sectors to obtain $\Vec{C}^{I=0\oplus2}(2\,\rm GeV)$ from $\Vec{C}^{I=0\oplus2}(M_W)$, the anomalous dimension matrix for the operator set in Eq.(\ref{isoeven ops}) is
\begin{equation}
\begin{split}
\gamma^{I=0\oplus2}_Z &=-\frac{g_s}{8\pi^2}\begin{pmatrix}
\frac{2}{9} & \frac{-2}{3} & 1 &-3 & 0 & 0 & 0 & 0\\
-\frac{3}{2}+\frac{2}{9}n_f & \frac{9}{2}-\frac{2}{3}n_f & \frac{-3}{2} & \frac{-7}{2} & 0 & 0 & 0 & 0\\
\frac{11}{9} & \frac{-11}{3} & 0 & 0 & 0 & 0 & 0 & 0\\
\frac{-3}{2} & \frac{-7}{2} & \frac{-3}{2} & \frac{9}{2} & 0 & 0 & \frac{2}{9}n_Q &-\frac{2}{3}n_Q\\
0 & 0 & 0 & 0 & 1 & -3 & \frac{2}{9} & -\frac{2}{3}\\
\frac{2}{9}n_Q &-\frac{2}{3}n_Q & 0 & 0 & -3 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & \frac{11}{9} & -\frac{11}{3}\\
0 & 0 & 0 & 0 & 0 & 0 & \frac{2n_f}{9}-3 & 1-\frac{2n_f}{3}\\
\end{pmatrix}
\end{split} \,,
\end{equation}
and the W-sector matrix can be easily obtained from Eq.~(\ref{w gamma}). Again, the set of Wilson coefficients obtained via RG flow exactly matches the extracted coefficient sets in
Eq.~(\ref{sca ten wil coef}),
in agreement with Fig. (\ref{Fig:rectangle}).
\section{Estimates of the parity-violating meson-nucleon coupling constants}
We now apply our effective Hamiltonian to the computation of the parity-violating
meson-nucleon coupling constants of isospin $I$, $h_M^I$, which appear in the
Hamiltonian in hadronic degrees of freedom
of Ref.~\cite{Desplanques:1979hn}, $\mathcal{H}_{\rm DDH}$. To do this we first compute
the quark-level matrix elements
$\langle M N' | \mathcal{H}_{\rm eff}^I | N\rangle$ within the factorization, or vacuum
saturation, approximation~\cite{Michel:1964zz,Bauer:1986bm}.
For example, the matrix element of a parity-violating four-quark operator
to yield a neutral vector meson from a nucleon state is of the form
\begin{equation}
\bra{V N'}(\bar{q_1}q_2)_V(\bar{q_3}q_4)_A\ket{N} = \bra{V}(\bar{q_1}q_2)_V\ket{0}\bra{N'}(\bar{q_3}q_4)_A\ket{N} \,.
\end{equation}
The factorization approximation is heuristic; we note extensive
tests thereof within the context of heavy hadron
decays~\cite{Ali:1998eb,Cheng_Tseng1998PhRvD..58i4005C,Diehl_Hiller2001JHEP...06..067D}.
In the current context, its use allows us to glean concrete estimates.
To determine particular $h_M^I$ we match
quark-level and hadron-level matrix elements via
$\langle M N' | \mathcal{H}_{\rm eff}^I | N\rangle = \langle M N' | \mathcal{H}_{\rm DDH} | N\rangle$.
Thus, for example, the pion contribution to hadronic parity violation stems from
\begin{equation}
\mathcal{H}^{\pi}_{\rm DDH} = i h^1_{\pi}(\pi^{+}\bar{p}n-\pi^{-}\bar{n}p) \,,
\end{equation}
implying that the parity-violating pion-nucleon coupling is defined as
\begin{equation}
-i h_{\pi}^1 \bar{u}_n u_p = \bra{n\pi^{+}}\mathcal{H}_{\rm eff}^{I=1}\ket{p}\,,
\label{pimatch}
\end{equation}
where $u_N$ with $N\in p,n$ is a Dirac spinor. Here we employ
Fierz identities, noting the useful compilation of Ref.~\cite{Nieves:2003in},
on the $\Theta_i^{I=1}$ operators, rearranging them to find scalar-pseudoscalar contributions.
Defining $\langle 0 | (\bar d u)_A (0) | \pi^+ (p) \rangle = i p^\mu f_\pi$
we use the quark-field equations of motion to find
\begin{equation}
\bra{\pi^+} (\bar{u} \gamma_{5} d) \ket{0} = \frac{m_{\pi}^2 f_{\pi}}{i(m_u+m_d)} \,.
\end{equation}
Using the factorization approximation on the matrix element of Eq.~(\ref{pimatch}) we have
\begin{equation}
h_{\pi}^1 \bar{u}_n u_p=\frac{2G_Fs_w^2}{3\sqrt{2}}\left(\frac{C_{1}^{I=1}}{3}+C_{2}^{I=1}-\frac{C_{3}^{I=1}}{3}+C_{4}^{I=1}\right)\frac{m_{\pi}^2 f_{\pi}}{(m_u+m_d)}
\bra{n}\bar{d}u\ket{p}\,,
\label{evalhpi1}
\end{equation}
in agreement with Ref.~\cite{kaplan1993analysis}, though in its numerical evaluation we use
a
computation of the isovector scalar charge $g_s^{u-d}$ within lattice QCD (LQCD)~\cite{Aoki:2021kgd}, where
$\bra{n}\bar{d}u\ket{p}\equiv g_s^{u-d}\bar{u}_nu_p$,
rather than an estimate of it using SU(3) flavor techniques. We employ ``unquenched'' LQCD computations so that the effects
of the light sea quarks are allowed to appear, noting that these are characterized by $N_f$, the number of dynamical
quark flavors in the simulation. As per Ref.~\cite{Aoki:2021kgd}, we suppose simulations with $N_f=2+1+1$ are
more
realistic but that $N_f=2+1$ simulations
are typically more precise.
The evaluation of Eq.~(\ref{evalhpi1}) is sensitive to the precise value
of $m_\pi^2/(m_u + m_d)$, where we evaluate
the light quark masses
in LQCD. This ratio gives a large enhancement, and its assessment should be made with care.
Here we use $m_\pi=135\,\rm MeV$, because the LQCD simulations used do not include electromagnetism,
and
the charged-pion
decay constant $f_\pi=130\,\rm MeV$. As for the light quark masses,
we use the renormalization-group-invariant (RGI) mass
$(m_u+m_d) = 2(4.695 (56)_m (54)_\Lambda\,\rm MeV)$ for $N_f=2+1$~\cite{Aoki:2021kgd},
which is an appealing choice because it is scale and scheme independent, thus
allowing
us to avoid extreme sensitivity to the
choice of scale. In this case, combining errors in quadrature, we find
$m_\pi^2/(m_u + m_d) = 1941 (32)\,\rm MeV$,
whereas using the result from a $N_f=2+1$ simulation in
the ${\overline{\rm{MS}}}$ scheme at a scale of $2\,\rm GeV$,
$(m_u+m_d) = 2(3.381(40)\,\rm MeV)$~\cite{Aoki:2021kgd},\footnote{We note in this
scheme at this scale that the PDG compilation recommends
$(m_u+m_d) = 2(3.45^{+0.55}_{-0.15}\,\rm MeV)$~\cite{Zyla:2020zbs}; we note, too,
$(m_u+m_d) = 2(3.75 (0.45)\,\rm MeV)$ using scalar sum rules
and chiral perturbation theory~\cite{Jamin:2006tj}.}
we find $2695 (32) \,\rm MeV$. We can also assess this ratio through the
use of the Gell-Mann--Oakes--Renner (GOR)
relation~\cite{Gell-Mann:1968hlm,Gasser:1983yg,Gasser:1984gg}. The GOR relation captures the pion mass with a correction of within
a few percent~\cite{Jamin:2002ev,Bernard:2006gx,Bordes:2010wy,McNeile:2012xh}, where
the concomitant quark condensate $B\equiv |\Sigma|/ F^2$, with
$\Sigma = |\langle 0 | \bar u u | 0 \rangle|$ and $F$ the pion decay constant in the chiral limit,
can all be computed in LQCD. Using Ref.~\cite{Aoki:2021kgd}, in the SU(2) chiral limit
and $N_f=2+1$ we have $2686 (...)\,\rm MeV$, whereas in the SU(3) chiral limit we have
$2281 (...) \,\rm MeV$, a difference reflecting the role of the strange sea quarks in
its numerical evaluation.
We employ the result with the RGI quark mass in what follows.
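The arithmetic behind the two quoted values of $m_\pi^2/(m_u+m_d)$ can be reproduced directly; this sketch uses the lattice inputs cited above, with the two RGI mass errors combined in quadrature (all values in MeV):

```python
# Chiral enhancement factor m_pi^2/(m_u + m_d) entering h_pi^1 (MeV units).
m_pi = 135.0

# RGI light-quark masses, N_f = 2+1: m_u + m_d = 2 * 4.695(56)_m(54)_Lambda
m_sum_rgi = 2 * 4.695
d_m_sum_rgi = 2 * (0.056**2 + 0.054**2) ** 0.5
ratio_rgi = m_pi**2 / m_sum_rgi
d_ratio_rgi = ratio_rgi * d_m_sum_rgi / m_sum_rgi
print(round(ratio_rgi), round(d_ratio_rgi))  # -> 1941 32

# MS-bar at 2 GeV, N_f = 2+1: m_u + m_d = 2 * 3.381(40)
m_sum_msbar = 2 * 3.381
ratio_msbar = m_pi**2 / m_sum_msbar
print(round(ratio_msbar))  # -> 2695
```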
Turning to the isovector quark scalar charge of the nucleon, we use the $N_f=2+1$
result
$g_s^{u-d}=1.06(10)(06)_{sys}$~\cite{park2021precision}, noting that this
compares favorably with the result
$g_s^{u-d}=1.02(11)$ determined from strong-isospin breaking in the nucleon mass from
LQCD~\cite{Gonzalez-Alonso:2013ura}, whereas the SU(3) estimate in Ref.~\cite{kaplan1993analysis}
yields $0.6$.
Finally, we find
\begin{equation}
h_{\pi}^1 = 3.06 (34) \times 10^{-7} \,,
\label{eq:hpi1}
\end{equation}
where the error is determined from that in the LQCD inputs we employ. We note,
for reference,
the experimental result $h_\pi^1 = 2.6(1.2)_{\rm stat}(0.2)_{\rm sys}\times10^{-7}$ determined from the
parity-violating gamma asymmetry in ${\vec n} p \to d \gamma$~\cite{NPDGamma:2018vhh}.
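The quoted uncertainty on $h_\pi^1$ can be reconstructed from the lattice inputs — a sketch under the assumption that the error budget is dominated by $g_s^{u-d}=1.06(10)(06)$ and the mass ratio $1941(32)\,\rm MeV$, with relative errors combined in quadrature:

```python
# Error budget for h_pi^1 = 3.06 x 10^-7 (assumed dominated by g_s^{u-d}
# and m_pi^2/(m_u+m_d); relative errors added in quadrature).
h_central = 3.06  # in units of 10^-7
rel_gs = (0.10**2 + 0.06**2) ** 0.5 / 1.06
rel_ratio = 32.0 / 1941.0
d_h = h_central * (rel_gs**2 + rel_ratio**2) ** 0.5
print(round(d_h, 2))  # -> 0.34
```

The $\sim 11\%$ error on the scalar charge clearly dominates, reproducing the quoted $0.34\times10^{-7}$.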
We now turn to the assessment of other meson-nucleon coupling constants, starting with the
remaining $I=1$ couplings. For the $\rho^0$ meson, e.g., $\bra{\rho^0 N}\mathcal{H}_{\rm eff}^{I=1}\ket{N}=h_{\rho}^1\epsilon^{*\mu}_{\rho}(\bar{u}_N u_N)_A$.
With $ \bra{\rho^0}(\bar{u}u)_V-(\bar{d}d)_V\ket{0}\equiv \sqrt{2}\epsilon^{*\mu}_{\rho}f_{\rho}m_{\rho}$,
$m_\rho= 775.4 \, {\rm MeV}$~\cite{Zyla:2020zbs}, and $f_\rho=210 \,{\rm MeV}$~\cite{Ali:1998eb}
and using the quark axial charges of the nucleon from LQCD~\cite{Aoki:2021kgd}
\begin{align}\label{lqcd charges}
\begin{split}
&\bra{p}(\bar{u}u)_A\ket{p} = g_A^{u}(\bar{u}_p u_p)_A \,;\quad g_A^u = 0.777 (25)(30)
\,\, [0.847 (18)(32)]\,,
\\
&\bra{p}(\bar{d}d)_A\ket{p} = g_A^{d}(\bar{u}_p u_p)_A \,;\quad g_A^d =-0.438 (18) (30)
\,\, [-0.407 (16)(18)] \,,
\\
&\bra{p}(\bar{s}s)_A\ket{p}= g_A^{s}(\bar{u}_p u_p)_A
\,;\quad g_A^s=-0.053 (8) \,\, [-0.035 (6)(7)] \,,
\end{split}
\end{align}
in the $\overline{\rm MS}$ scheme at $\mu=2\,\rm GeV$ from $N_f=2+1+1$~\cite{Lin:2018obj} [$N_f=2+1$~\cite{Liang:2021pql}] flavor
simulations, we have
\begin{equation}
\begin{split}
h_\rho^1
&=\frac{G_Fs_w^2}{3}f_\rho m_\rho \Bigg(\left(C_{3}^{I=1}+\frac{C_{4}^{I=1}}{3}\right)
(g_A^u + g_A^d) \\
&+ \left(C_{3}^{I=1}+\frac{C_{4}^{I=1}}{3}+C_{7}^{I=1}+\frac{C_{8}^{I=1}}{3}+C_{9}^{I=1}+\frac{C_{10}^{I=1}}{3}\right) g_A^s\Bigg) \,,
\end{split}
\end{equation}
and with Eq.~(\ref{lqcd charges}) this yields
\begin{equation}
h^1_{\rho} = -0.294[-0.193] \times 10^{-7} \,.
\end{equation}
For the $\omega$ meson, $\bra{\omega N}\mathcal{H}_{\rm eff}^{I=1}\ket{N}=h_{\omega\,N}^1\epsilon^{*\mu}_{\omega}(\bar{u}_Nu_N)_A$.
With $\bra{\omega}(\bar{u}u)_V+(\bar{d}d)_V\ket{0}\equiv\sqrt{2}\epsilon^{*\mu}_{\omega}f_{\omega}m_{\omega}$, $m_\omega= 782.65 \,{\rm MeV}$~\cite{Zyla:2020zbs}, and $f_\omega=195 \,{\rm MeV}$~\cite{Ali:1998eb}, we have
\begin{equation}
h_{\omega\,N}^1
=\frac{G_Fs_w^2}{3}f_\omega m_\omega \Bigg(\left(C_{1}^{I=1}+\frac{C_{2}^{I=1}}{3}\right)\eta_N (g_A^u - g_A^d) + \left(C_{9}^{I=1}+\frac{C_{10}^{I=1}}{3}\right)g_A^s \Bigg) \,,
\end{equation}
where $\eta_N=\pm 1$ for a proton or neutron state, respectively.
With Eqs.(\ref{lqcd charges})
\begin{equation}
h^1_{\omega\,p}= + 1.825[1.884]\times 10^{-7} \,;\,
h^1_{\omega\,n} = - 1.828[-1.886]\times 10^{-7}\,,
\end{equation}
where the difference in their magnitudes speaks to the role of charged-current effects.
Similarly we can make use of $H_{\rm eff}^{I=0\oplus2}$ to determine
$\bra{\omega N}\mathcal{H}^{I=0\oplus2}\ket{N}=h_{\omega}^0\epsilon^{*\mu}_{\omega}(\bar{u}_Nu_N)_A
$. Thus
\begin{equation}
\begin{split}
h_{\omega}^0 &=\frac{G_Fs_w^2}{3} f_\omega m_\omega \Bigg(\left(C_{7}^{0+2}+\frac{C_{8}^{0+2}}{3}+C_{9}^{0+2}+\frac{C_{10}^{0+2}}{3}\right) (g_A^u + g_A^d)\\
&+ \left(C_{1}^{0+2}+\frac{C_{2}^{0+2}}{3}+C_{7}^{0+2}+\frac{C_{8}^{0+2}}{3}\right) g_A^s\Bigg) \,,
\end{split}
\end{equation}
and with Eqs.(\ref{lqcd charges}) this gives
\begin{equation}
h^0_{\omega} = +0.270[0.297]\times 10^{-7} \,.
\end{equation}
To determine the isoscalar and isotensor $\rho$ couplings from ${\cal H}_{\rm eff}^{I=0\oplus2}$
we note from ${\cal H}_{\rm DDH}$~\cite{Desplanques:1979hn} that
\begin{equation}\label{simul}
h_{\rho}^{0}+\frac{1}{\sqrt{6}}h_{\rho}^{2} = h_{\rho^0}^{0\oplus 2}\,;\quad
\sqrt{2} h_{\rho}^{0} - \frac{1}{\sqrt{12}}h_{\rho}^{2} = h_{\rho^-}^{0\oplus 2}
\,.
\end{equation}
Computing $h^{0\oplus2}_{\rho^0}$, with $\bra{\rho^0 N}\mathcal{H}_{\rm eff}^{I=0\oplus2}\ket{N}=h_{\rho^0}^{0\oplus2}\eta_N \epsilon^{*\mu}_{\rho}(\bar{u}_Nu_N)_A$,
\begin{equation}
h_{\rho^0}^{0\oplus2}
=\frac{G_F s_w^2}{3}f_\rho m_\rho
\left(C_{5}^{I=0+2}+\frac{C_{6}^{I=0+2}}{3}-\frac{C_{9}^{I=0+2}}{6}-\frac{C_{10}^{I=0+2}}{2}\right) (g_A^u - g_A^d) \,,
\end{equation}
which, with Eqs.(\ref{lqcd charges}), implies
\begin{equation}
h^{0\oplus2}_{\rho^0} = -7.55 [-7.80] \times 10^{-7}\,.
\end{equation}
Computing $h^{0\oplus2}_{\rho^-}$, with
$\bra{\rho^- p}\mathcal{H}_{\rm eff}^{I=0\oplus2}\ket{n}=h_{\rho^-}^{0\oplus2}\epsilon^{*\mu}_{\rho}(\bar{u}_p u_n)_A$,
noting $\bra{\rho^-}(\bar{d}u)_V\ket{0}=\epsilon^{*\mu}_{\rho}f_{\rho}m_{\rho}$,
and using the quark isovector axial charge in LQCD in ${\overline {\rm MS}}$ at $2\,\rm GeV$
from a $N_f=2+1$~\cite{park2021precision} [$N_f=2+1+1$~\cite{Gupta:2018qil}] flavor simulation, namely,
\begin{equation}
\bra{p}(\bar{u}d)_A\ket{n} = g_A^{u-d}(\bar{u}_p u_n)_A; \quad g_A^{u-d} = 1.31(06)(05)_{\rm sys} \,[1.218 (25)(30)_{\rm sys}]
\,,
\label{lqcdiso}
\end{equation}
we have
\begin{equation}
h^{0\oplus2}_{\rho^-} =\frac{G_F s_w^2}{3\sqrt{2}}f_\rho m_\rho
\left(\frac{-C_{5}^{I=0+2}}{3}-C_{6}^{I=0+2}+\frac{C_{7}^{I=0+2}}{3}+C_{8}^{I=0+2}+C_{9}^{I=0+2}+\frac{C_{10}^{I=0+2}}{3}\right)g_A^{u-d}\,.
\end{equation}
With Eqs.(\ref{lqcdiso}), this
implies
\begin{equation}
h^{0\oplus2}_{\rho^-} = -18.1 [-16.8] \times 10^{-7}\,.
\end{equation}
Solving Eq.~(\ref{simul}) we find
\begin{equation}
h_{\rho}^{0} = - 11.05 \times10^{-7}\,; \quad h_{\rho}^{2} = + 8.57 \times10^{-7}\,.
\end{equation}
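These values follow from solving the two simultaneous relations of Eq.~(\ref{simul}) with the factorization results $h_{\rho^0}^{0\oplus2}=-7.55$ and $h_{\rho^-}^{0\oplus2}=-18.1$ (using the $N_f=2+1$ inputs); a direct check:

```python
# Solve Eq. (simul): h0 + h2/sqrt(6) = r0 ; sqrt(2) h0 - h2/sqrt(12) = rm,
# with r0 = -7.55 and rm = -18.1 (units of 10^-7).
from math import sqrt

r0, rm = -7.55, -18.1
a, b = 1.0, 1.0 / sqrt(6.0)
c, d = sqrt(2.0), -1.0 / sqrt(12.0)
det = a * d - b * c
h0 = (r0 * d - b * rm) / det
h2 = (a * rm - c * r0) / det
print(round(h0, 2), round(h2, 2))  # -> -11.05 8.57
```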
Although our determinations have been made at a scale
of $2\,\rm GeV$, we follow the spirit of DDH~\cite{Desplanques:1979hn} and compare our results
with the constraints on the coupling constants
that emerge from experiments at
much lower energies.
\section{Comparisons with experiment}
Comparing with the constraints on the
parity-violating vector-meson-nucleon coupling constants that emerge from the
combined analysis of the ${\vec n} p \to d \gamma$~\cite{NPDGamma:2018vhh}
and ${\vec n}\, ^3{\rm He} \to t \gamma$~\cite{n3He:2020zwd} experiments, within the theoretical framework of Ref.~\cite{Viviani:2010qt}, we have $h_\pi^1$ and
$h_{\rho-\omega} \equiv h_\rho^0 + 0.605 h_\omega^0 -
0.605 h_\rho^1 -1.316 h_\omega^1 + 0.026 h_\rho^2
=(-17.0 \pm 6.56)\times 10^{-7}$~\cite{n3He:2020zwd},
for which we compute
$h_{\rho-\omega} = (-12.9 \pm\dots) \times 10^{-7}$,
so that both this and
our
$h_\pi^1$, Eq.~(\ref{eq:hpi1}), are within $\pm 1\sigma$ of the experimentally determined parameters.
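Our value of $h_{\rho-\omega}$ follows from the couplings computed above. The sketch below assumes the proton value $h^1_{\omega\,p}$ enters for $h_\omega^1$ and uses the $N_f=2+1+1$ value of $h_\rho^1$ (these identifications are assumptions of the sketch, in units of $10^{-7}$):

```python
# Combination h_{rho-omega} probed by the few-body experiments.
h_rho0, h_omega0 = -11.05, 0.270
h_rho1, h_omega1 = -0.294, 1.825  # h_omega^1 -> proton value (assumption)
h_rho2 = 8.57
h_rho_omega = (h_rho0 + 0.605 * h_omega0 - 0.605 * h_rho1
               - 1.316 * h_omega1 + 0.026 * h_rho2)
print(round(h_rho_omega, 1))  # -> -12.9
```

Note the $I=2$ term contributes only $0.026\,h_\rho^2 \approx 0.2$, confirming that $h_{\rho-\omega}$ depends only weakly on the isotensor sector.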
Moreover, we evaluate the asymmetry
in ${\vec n}\, ^3{\rm He} \to t \gamma$
as $-0.69 \times 10^{-8}$ in the framework of Ref.~\cite{Viviani:2010qt} but as $1.6 \times 10^{-8}$
in the framework of Ref.~\cite{Viviani:2014zha},
as per Eqs.(8,9) of Ref.~\cite{n3He:2020zwd}, to compare
with the experimental result
$(1.55\pm 0.97 {\rm(stat)} \pm 0.24
{\rm (sys)})\times 10^{-8}$~\cite{n3He:2020zwd}.
Evidently the value of
the asymmetry is
sensitive to
a partial cancellation of the various contributions~\cite{n3He:2020zwd}.
The $h_\pi^1$ determination from the ${\vec n} p \to d \gamma$ experiment,
$h_\pi^1 = 2.6(1.2)_{\rm stat}(0.2)_{\rm sys}\times10^{-7}$~\cite{NPDGamma:2018vhh}
is in slight tension with the value determined by the
non-observation of the
photon circular polarization in $^{18}F$ radiative decay from the
1.081 MeV $J^P T =0^- 0$ state, reflecting an absence of mixing with the
nearby 1.042 MeV $0^+ 1$ state, yielding the bound
$|h_\pi^1| < 1.3 \times 10^{-7}$ at 67\% CL~\cite{Haxton:2013aca}.
The $^{18}{\rm F}$ system is special in that the theoretical uncertainties can be largely
controlled through the experimental assessment
of the pertinent nuclear matrix element, after
an isospin rotation, from a
well-measured
$\beta^+$-decay transition in $^{18}{\rm Ne}$~\cite{Haxton:1981sf,Adelberger:1983zz,Adelberger:1985ik}.
Thus the error in each $h_\pi^1$ assessment is thought to be statistics dominated.
Other reliably calculated, parity-violating
observables that depend on the couplings probed
in the few-body reactions
include the longitudinal asymmetry in
elastic $\vec{p}-\alpha$ scattering at $46 \,{\rm MeV}$,
$A_L [\vec{p}\alpha]$,
and the gamma asymmetry in $^{19}{\rm F}$ decay, $A_\gamma [^{19}{\rm F}]$.
Using the expressions in Ref.~\cite{Haxton:2013aca}
we find
$-2.6 \times 10^{-7}$, to compare with
$A_L [\vec{p}\alpha]_{\rm expt}=-(3.3 \pm 0.9)\times 10^{-7}$\cite{Lang,Henneck}, and
$-6.7 \times 10^{-5}$, to compare with
$A_\gamma [^{19}{\rm F}]_{\rm expt}=-(7.4 \pm 1.9)\times 10^{-5}$\cite{Adelberger,Elsener}.
In what follows, we
consider the broader implications of
our assessments of the parity-violating meson-nucleon coupling constants.
Working within the context of the DDH potential, with parameters
$g_{\pi NN}^2/4\pi =14.4$,
$g_{\rho}^2/4\pi =0.62$, $g_{\omega}^2/4\pi =9g_{\rho}^2/4\pi$,
$\chi_\rho=3.70$, and $\chi_\omega=-0.12$, we
compute the Danilov parameters to find
\begin{equation}
\begin{split}
&\Lambda_0^{^1S_0-^3P_0}=-g_\rho(2+\chi_\rho)h_\rho^0-g_\omega(2+\chi_\omega)h_\omega^0
\to 176 \,[210] \\
&\Lambda_0^{^3S_1-^1P_1}=-3g_\rho\chi_\rho h_\rho^0+g_\omega\chi_\omega h_\omega^0
\to 343\, [360]
\\
&\Lambda_1^{^1S_0-^3P_0}=-g_\rho(2+\chi_\rho )h_\rho^1-g_\omega(2+\chi_\omega )h_\omega^1
\to 4.67 \, [21]
\\
&\Lambda_1^{^3S_1-^3P_1}=\frac{g_{\pi N N}}{\sqrt{2}}\left({\frac{m_\rho}{m_\pi}}\right)^2\!h_\pi^1
+g_\rho(h_\rho^1 -{h_\rho^1}')
-g_\omega h_\omega^1
\to 859 \, [1340]
\\
&\Lambda_2^{^1S_0-^3P_0}= -g_\rho(2+\chi_\rho )h_\rho^2
\to -137\, [160] \,,
\end{split}
\label{eq:danilov}
\end{equation}
where we neglect ${h_\rho^1}'$~\cite{Holstein:1981, Haxton:2013aca} and provide
our numerical values, with
the DDH ``best values''~\cite{Desplanques:1979hn} given in brackets --- all in units of $10^{-7}$.
Following the large $N_c$ analysis of Ref.~\cite{Gardner2017paradigm}, we compute
\begin{equation}
\Lambda_0^+ \equiv \frac{1}{4}\Lambda_0^{^1S_0-^3P_0}
+ \frac{3}{4}\Lambda_0^{^3S_1-^1P_1} \to 301 \,\,;\,\,
\Lambda_0^- \equiv \frac{1}{4}\Lambda_0^{^3S_1-^1P_1}
- \frac{3}{4}\Lambda_0^{^1S_0-^3P_0} \to -46 \,,
\end{equation}
and recall the scaling predictions
$\Lambda_0^+\sim N_c$, $\Lambda_2^{^1S_0-^3P_0} \sim N_c\sin^2 \theta_w$,
$\Lambda_0^-\sim 1/N_c$, $\Lambda_1^{^1S_0-^3P_0} \sim \sin^2 \theta_w$,
$\Lambda_1^{^3S_1-^3P_1} \sim \sin^2 \theta_w$~\cite{Phillips:2014kna,Schindler:2015nga,Gardner2017paradigm}. Certainly the value of $h_\pi^1$ we
compute yields a value of $\Lambda_1^{^3S_1-^3P_1}$ at odds with the large $N_c$ expectation,
though $\Lambda_1^{^3S_1-^3P_1}|_{h_\pi^1=0}= -31$.
We now turn to other observables, starting with the
parity-violating longitudinal asymmetry in low-energy
$\vec{p}p$ scattering, $A_L (\vec{p} p)$, for which the Danilov parameters associated
with $S-P$ interference should suffice. Fixed target
$\vec{p}p$ experiments
at beam energies of 13.6 MeV, 15 MeV, and 45 MeV can be
analyzed within a DDH framework~\cite{Carlson:2001ma} to
yield~\cite{Haxton:2013aca}
\begin{equation}
\frac{2}{5} \Lambda_0^+ + \frac{1}{\sqrt{6}} \Lambda_2^{^1S_0-^3P_0} + \left[
\Lambda_1^{^1S_0-^3P_0} -\frac{6}{5} \Lambda_0^- \right]
= 419\pm 43 \,,
\end{equation}
which we evaluate as $120 - 56 + 60 =124$.
Thus our results in this case do not compare favorably.
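The arithmetic of this comparison can be checked term by term from our Danilov parameters in Eq.~(\ref{eq:danilov}) and the $\Lambda_0^\pm$ combinations (units of $10^{-7}$):

```python
# Check of the p-p combination: 0.4*Lambda_0^+ + Lambda_2/sqrt(6)
#   + [Lambda_1(1S0-3P0) - 1.2*Lambda_0^-], using our quoted values.
from math import sqrt

L0p, L0m = 301.0, -46.0
L2, L1 = -137.0, 4.67
total = 0.4 * L0p + L2 / sqrt(6.0) + (L1 - 1.2 * L0m)
print(round(total))  # -> 124

# Consistency of Lambda_0^{+/-} with Eq. (eq:danilov):
L0p_check = 0.25 * 176 + 0.75 * 343  # quoted as 301
L0m_check = 0.25 * 343 - 0.75 * 176  # quoted as -46
```

The three terms evaluate to $120.4$, $-55.9$, and $59.9$, matching the $120 - 56 + 60$ quoted above.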
For context, we note that
an analysis of this observable in chiral
effective theory shows that correlated
two-pion exchange (TPE) also
plays an important role~\cite{Viviani:2014zha},
bringing in an interaction largely controlled
by $h_\pi^1$ as well, although TPE is not present in
the DDH framework.
As for the other observables we have
considered, the value of $h_\pi^1$ plays an
important numerical role, with the subleading
contributions, which are largely isovector, and
the leading ones, which are isoscalar, playing
comparable numerical roles. Thus
although our original assessments of the Danilov
parameters, with the exception of the one
in which $h_\pi^1$ appears,
are crudely consistent with large $N_c$ scaling,
it appears that
the large $N_c$
relationships are not effective in predicting the aggregate
size of the various contributions. The
parameter $h_\pi^1$ drives this conclusion, making
its computation within LQCD~\cite{Wasem:2011tp,Feng:2017iqb,Sen:2021dcb} or an
improved experimental assessment of it, possibly through a
next-generation
$\vec{n} p \to d \gamma$ experiment, extremely
welcome.
\section{Summary}
We have determined the effective
weak Hamiltonian for parity-violating, $\Delta S=0$
hadronic processes
in the Standard Model at a renormalization scale of $2\, \rm GeV$.
To do this, we have made a
complete, leading-order renormalization
group analysis in QCD, starting from just below the $W$ mass
scale, including operator mixing and
evolution through heavy-flavor thresholds, as well as
neutral- and charged-current effects, for all possible
isospins ($I=0,1,2$) of the four-quark operators.
In our analysis we have found it convenient to separate
the $I=1$ and $I=0\oplus 2$ sectors, and
we have validated our procedures by showing that
we recover the same low-energy effective Hamiltonian
regardless of the order in which we (i) evolve to low-energy
scales or (ii) project on operators with even or odd isospin.
This is the first time that this has been done.
We use the resulting effective Hamiltonian
to determine the parity-violating meson-nucleon coupling constants, $h_\pi^1$, $h_\rho^{0,1,2}$,
$h_\omega^{0,1}$, familiar from the DDH framework,
employing the
factorization {\it Ansatz} and
assessments of the pertinent quark charges of the nucleon in lattice QCD at the $2\,\rm GeV$ scale. Working further,
we have found that our assessments of
$h_\pi^1$ and $h_{\rho-\omega}$ agree to within $1\sigma$
with their experimental determinations in few-body
nuclear systems~\cite{NPDGamma:2018vhh,n3He:2020zwd},
though both our $h_\pi^1$ result and the size of the
asymmetry in $\vec{n} p \to d \gamma$~\cite{NPDGamma:2018vhh}
are in tension with
the null result from the study of
$P_\gamma [^{18}{\rm F}]$~\cite{Haxton:1981sf,Adelberger:1983zz,Adelberger:1985ik}.
Turning to the study of the parity-violating asymmetries
in low-energy $\vec{p}p$ scattering, which is
sensitive to the $I=2$ Danilov parameter $\Lambda_2^{^1S_0-^3P_0}$ as well,
we do not find agreement with experiment. The analysis
of this process within chiral effective theory, however,
suggests that TPE, an effect not included in the DDH potential, plays an important role~\cite{Viviani:2014zha},
and this can also modify the $I=1$ Danilov parameters;
alternatively, it may be that our factorization
assessment of $h_\rho^2$, or neglected higher-order
effects in $\alpha_s$, and thus our value of
$\Lambda_2^{^1S_0-^3P_0}$, is to blame.
We note that the
parameter $h_{\rho-\omega}$ depends only very weakly on
the $I=2$ sector.
Five independent parameters characterize low-energy
hadronic parity violation, and the use of
pionless effective theory in the large $N_c$ limit gives
insight into the relative size of the
contributions~\cite{Zhu:2009,Phillips:2014kna,Schindler:2015nga,Gardner2017paradigm}. Yet these are scaling relationships,
rather than numerical predictions, and we have noted
that our numerical assessments in Eq.~(\ref{eq:danilov}),
save for the $I=1$ parameter containing $h_\pi^1$,
compare favorably with those expectations.
Indeed, the efficacy of the large $N_c$ predictions
in this context appears to
depend critically on the precise value of $h_\pi^1$,
with further input from either LQCD or experiment important
to a definitive test. Despite this, the application of
our results, within the DDH framework, to
parity-violating observables in $A>3$ systems
suggests that the large $N_c$ counting is not effective there, because the
subleading pieces are not only quite large, but they are
also needed for theoretical compatibility with the
observed effects. This outcome nevertheless suggests that
the systematic study of
hadronic parity violation in $A>3$ systems,
for which studies in molecular
systems show great
promise~\cite{Altuntas:2018ots,Karthein}, may be
within reach.
On the theoretical side, the construction of the
complete effective weak Hamiltonian at a scale of
$\mu =2\,\rm GeV$ should support LQCD studies
of two-nucleon matrix elements~\cite{Nicholson:2021zwi}, enabling further
theoretical studies in which the factorization
approximation of the hadronic matrix elements
would finally be no longer necessary.
\begin{comment}
<?php
/**
* Parent criterion.
* <p>A criterion of this type can only be created using an ID. A criterion of this type is only excludable.
* <span class="constraint AdxEnabled">This is disabled for AdX when it is contained within Operators: ADD, SET.</span>
* @package Google_Api_Ads_AdWords_v201605
* @subpackage v201605
*/
class ParentCriterion extends Criterion
{
const WSDL_NAMESPACE = "https://adwords.google.com/api/adwords/cm/v201605";
const XSI_TYPE = "Parent";
/**
* @access public
* @var tnsParentParentType
*/
public $parentType;
/**
* Gets the namespace of this class
* @return string the namespace of this class
*/
public function getNamespace()
{
return self::WSDL_NAMESPACE;
}
/**
* Gets the xsi:type name of this class
* @return string the xsi:type name of this class
*/
public function getXsiTypeName()
{
return self::XSI_TYPE;
}
public function __construct($parentType = null, $id = null, $type = null, $CriterionType = null)
{
parent::__construct();
$this->parentType = $parentType;
$this->id = $id;
$this->type = $type;
$this->CriterionType = $CriterionType;
}
}
Wiener & Co | Amsterdam, The Netherlands | HOSPER
November 3, 2014 Damian Holmes
Oostenburgereiland – a history of shipyards and slipways
The inner city Wiener location on Amsterdam's island Oostenburg will be transformed into a housing location. In an open planning process with surrounding inhabitants, potential new inhabitants, the municipality and Heijmans Woningbouw a plan has been made to realize 70 dwellings in the area. The architectural office Arons & Gelauff is responsible for the urban plan and the architecture, the outdoor space and the tree containers have been designed by HOSPER.
The Oostenburgereiland, and the site of Wiener & Co in particular, has a rich history of shipyards and slipways. The design for the new dwellings is consistent with this character. The plan is organized around a circuit of "slopes" and "courtyards". The water dwellings and the water itself can be reached via these semi public green spaces. Thanks to the new underground parking garage, under the courtyard, the area is car-free.
Blocks perpendicular to the water
The entrances to the water dwellings are situated around the courtyard. These single-family houses are oriented towards the water at a perpendicular angle. They are positioned in such a way that the space where the slipways were in the past remains open, so that the neighborhood gains access to the water. On the corner of the building block, along the water, a small block with loft apartments will be built. These apartments will be situated directly along a square with full-grown plane trees and have a beautiful view of the Ezelsbrug (bridge).
Apartments will also be realized along the Oostenburgervoorstraat, housed in sturdy warehouse like buildings with an average height of 5 stories. The buildings connect to the scale and size of the existing buildings in the street. The public space along the Oostenburgervoorstraat will be enlivened by commercial spaces and entrances in the plinth.
Organization strengthens perpendicular orientation
Design principle for the external space is to strengthen the perpendicular orientation of the dwelling blocks. Tree-containers with different heights naturally create more or less private spaces around the entrances of the dwellings. The edges of the containers are widened at specific places so that they can be used as benches.
The difference between the slipways and the courtyards is emphasized by the difference in paving material and the distinct use of plants and trees. The slipways have a public character, whilst the courtyards will be more private and garden-like.
Amsterdam Wiener & Co
Oostenburg, Amsterdam
Landscape Architect | HOSPER
Designers | Hanneke Kijne, Petrouschka Thumann, Raquel van Donselaar
Partners | Arons & Gelauff, Heijmans woningbouw, ANAE straatmeubilair
client | Terra Ontwikkeling
size | 2.3 ha (+ furniture: tree containers)
year of design | 2011 – 2013
\section{Introduction}
In the history of physics, the well-known Maxwell's demon was proposed as a rebel against the authority of the thermodynamic 2nd law \cite{0}. It decreases the entropy in a thermally isolated system, and finally rescues the whole universe from the heat death. Despite the myth of its existence \cite{1,2}, the demon reflects the behavior of physical systems, especially at micro scales: a system interacting with a demon becomes open and thus behaves far away from thermal equilibrium. There is a deep connection between the nonequilibrium thermodynamics involving the Maxwell's demon and information theory, as in the much-studied cases of the Szilard engine \cite{3} and the Landauer principle \cite{4,5}. The physical nature of information may be revealed by the study of the demon. For this reason many efforts have been devoted to this direction. The related works have shown their importance in the theoretical and experimental areas of nano and mesoscopic system analysis and control \cite{6,7,8,9,10,11,12,13}.
As a central concept in modern thermodynamics, the \emph{entropy production} quantifies the energy dissipation and nonequilibriumness in a stochastic system. One of its fundamental properties is that it follows the \emph{Jarzynski equality} \cite{14}, or the integral \emph{fluctuation theorem}, which is regarded as the generalized 2nd law from a microscopic perspective. To analyze the demon's effect, several pioneering works attempted to construct an \emph{improper} entropy production which disobeys the Jarzynski equality \cite{15,16,17,18,19,20}. This thought follows the original idea of Maxwell. One representative construction was given by Sagawa and Ueda \cite{21,22}, where a fluctuation theorem (the Sagawa-Ueda theorem) was developed for the improper entropy production by taking into account the information acquired by the demon. Correspondingly a generalized 2nd law arises from this fluctuation theorem: the demon cannot extract work more than the acquired information on average. This result gives a plausible interpretation of the Szilard engine and many other models. However, there are still unsolved problems in such frameworks, for the following reasons.
First, the improper entropy production arises because the system dynamics is measured in an inconsistent manner where a part of the demon's contribution is missing. Thus, the improper entropy production measures the energy dissipation and nonequilibriumness incorrectly. Intuitively the demon controls not only the system state but also the energy exchanges, such as the work and heat, between the system and the baths. Thus, the demon contributes to the entropies in both the system and the baths. With this thought, one can construct different improper entropy productions by neglecting any part of the demon's contribution (from either the system or the baths, or parts of them). Correspondingly, there exist different fluctuation theorems for these entropy productions, which can lead to different 2nd law inequalities for work or heat. The first question is: which inequality is more appropriate? Second, the equality in a 2nd law inequality always represents the thermal equilibrium state of the system. However, a system is supposed to be in a nonequilibrium state when controlled by a demon. This indicates that if the demon works efficiently, the equality in the 2nd law of previous frameworks does not always hold. It has been reported by several works \cite{23,24,25,26}, in examples of information processing, that the upper bound of the extracted work is less than the bound predicted by the Sagawa-Ueda theorem. This reveals the fact that when the system is at a controlled nonequilibrium state, there exists an additional energy dissipation which is not estimated by the previous frameworks. The second question is: where does this energy dissipation originate from?
The motivation of this paper is to draw a clearer picture of the Maxwell's demon. We note the fact that the controlled system actually follows the 2nd law when the dynamics is properly measured. One can quantify the correct entropy productions at different coarse-grained levels of the demon's control. Each improper entropy production misses a part of the demon's contribution. None of these entropy productions fulfills the task of completely characterizing the demon unless we take the total contribution into account. The puzzle of the demon obviously involves the interactions between the system and the demon during the whole dynamics. In thermodynamics it is appropriate to describe these interactions by using the informational correlation -- the dynamical mutual information \cite{27,28,29}, defined as $i=\log\frac{p[x(t)|y(t)]}{p[x(t)]}$, where $x(t)$ and $y(t)$ represent the two simultaneous trajectories of the two interacting systems respectively, and $p$ denotes the probability (density) of the trajectories. With this quantification at the trajectory level, it is natural to introduce the concept of \emph{dissipative information} \cite{27,28,29,30} to quantify the time-irreversibility of the dynamical mutual information,
\begin{equation}
\label{1}
\sigma_I=i-\widetilde{i}\tag{1},
\end{equation}
where $\widetilde{i}=\log\frac{p[\widetilde{x}(t)|\widetilde{y}(t)]}{p[\widetilde{x}(t)]}$ is the dynamical mutual information along the time-reversed trajectories. We will show that $\sigma_I$ rightly quantifies the demon's total contribution. For a complete thermodynamical description, one should develop a set of fluctuation theorems which involves not only the entropy production in the system but also dissipative information, rather than the construction of improper entropy productions. The fluctuation theorems on the entropy productions reflect the nonequilibrium dynamics in the controlled system. Different from the ordinary fluctuation theorems for one single system, the fluctuation theorem on the dissipative information quantifies the nonequilibriumness of the interactions or binary relations. It is thus reasonable to believe that, when the demon works efficiently there exists an intrinsic nonequilibrium state (due to the binary relations) characterized by a positive averaged dissipative information. This is the source of the inevitable energy dissipation in many cases of the demon.
\section{Fluctuation Theorems and Inequalities}
Let us consider a demon which controls a system coupled with several thermal baths. The system and the demon are initially at the states $x_0$ and $y$ respectively. Then the demon performs a control on the system with a protocol $\Gamma(y)$ based on $y$. For simplicity the correspondence between $y$ and $\Gamma(y)$ is assumed to be bijective. Consequently the system's trajectory $x(t)$ is correlated with the demon state $y$. As a reasonable assumption, the demon does not alter the control protocol, and the demon state $y$ is unchanged during the dynamics. Driven by thermal baths, the stochasticity of the system allows the time-reversed trajectory $\widetilde{x}(t)\equiv x(\tau-t)$ to occur under the identical protocol. Here the initial state of $\widetilde{x}(t)$ corresponds exactly to the final state of $x(t)$, denoted by $x_t$.
When $\Gamma(y)$ or $y$ is displayed explicitly in the system dynamics, an entropy production can be given by the log ratio between the probabilities (densities) of $x(t)$ and $\widetilde{x}(t)$ conditioned on $y$,
\begin{equation}
\label{2}
\sigma_{X|Y}=\log\frac{p[x(t)|y]}{p[\widetilde{x}(t)|y]}=\Delta s_{X|Y}+\delta s_{X|Y}\tag{2},
\end{equation}
where the subscript $X|Y$ means that the thermodynamical entity of the system ($X$) is controlled by a given protocol of the demon ($Y$). Besides, $\sigma_{X|Y}$ can be viewed as the total stochastic entropy change, consisting of the contributions from the system and the baths at the microscopic level \cite{31}, as shown by the second equality in Eq.(2). Here $\Delta s_{X|Y}=-\log p(x_t|y)-[-\log p(x_0|y)]$ quantifies the stochastic entropy difference of the system between the final and initial states; $\delta s_{X|Y}=\log\frac{p[x(t)|x_0,y]}{p[\widetilde{x}(t)|x_t,y]}$ represents the stochastic entropy flow from the system to the baths, which is also identified with the heat transferred from the baths to the system, $Q_{X|Y}=-T\delta s_{X|Y}$, as proved by the detailed fluctuation theorem in Langevin or Markovian dynamics \cite{32,33}. Thus $\delta s_{X|Y}$ is recognized as the (stochastic) entropy change in the baths. On the other hand, when the demon's control $\Gamma(y)$ or the demon state $y$ is unknown in the system dynamics, the entropy production can be measured properly at the coarse-grained level. That is to say, one needs to average or integrate the demon's control information out of the dynamics, i.e., to obtain the marginal probability $p[x(t)]=\sum_y p(y)p[x(t)|y]$ with implicit control conditions. Then another entropy production, a coarse-grained version of $\sigma_{X|Y}$, can be given by,
\begin{equation}
\label{3}
\sigma_{X}=\log\frac{p[x(t)]}{p[\widetilde{x}(t)]}=\Delta s_{X}+\delta s_{X}\tag{3}.
\end{equation}
In the second equality in Eq.(3), $\Delta s_X =\log\frac{p(x_0)}{p(x_t)}$ and $\delta s_{X}=\log\frac{p[x(t)|x_0]}{p[\widetilde{x}(t)|x_t]}$ are recognized as the coarse-grained entropy changes in the system and in the baths respectively. Thus, $\sigma_{X}$ quantifies the total entropy change at the coarse-grained level, i.e., with the demon's control information lacking. An illustrative case showing the differences between the entropy productions can be found in Fig.1. It is interesting that both $\sigma_{X|Y}$ and $\sigma_{X}$ follow the Jarzynski equalities,
\begin{equation}
\label{4}
\langle\exp(-\sigma_{X|Y})\rangle=1\text{, and } \langle\exp(-\sigma_{X})\rangle=1,\tag{4}
\end{equation}
where the average $\langle\exp(-\sigma_{X|Y})\rangle$ is taken over the ensembles of the system and the demon's state. One should note that $\sigma_{X|Y}$ obeys the detailed Jarzynski equality under every possible control protocol, i.e., $\langle\exp(-\sigma_{X|Y})\rangle_{X|Y}=1$, where the average $\langle\cdot\rangle_{X|Y}$ is taken over the ensemble of the system while $y$ is fixed. For a complete view of the controlled nonequilibriumness of the system, it is appropriate to take the average of the detailed Jarzynski equality on both sides over the ensemble of the demon's state, with the notation $\langle\cdot\rangle\equiv \langle\langle\cdot\rangle_{X|Y}\rangle_Y$. Notice that together the two Jarzynski equalities in Eq.(4) provide a new insight: the 2nd law holds for the system at both levels of knowledge of the demon's control.
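As a sanity check, both equalities in Eq.(4) can be verified by exhaustive enumeration in a minimal two-state toy model. The sketch below is illustrative only (the distributions and kernels are arbitrary choices, not taken from the paper); it assumes a time-reversal-symmetric protocol, so the backward process starts from the forward final distribution and reuses the same transition kernel:

```python
# Toy model: binary demon state y, binary system trajectory x(t) = (x0, xt).
p_y = [0.5, 0.5]                         # p(y)
p_x0 = [[0.7, 0.3], [0.2, 0.8]]          # p(x0|y), one row per y
T = [[[0.9, 0.1], [0.4, 0.6]],           # T[y][x0][xt]: transition kernel
     [[0.3, 0.7], [0.8, 0.2]]]           # under control protocol Gamma(y)

# Forward final distribution p(xt|y).
p_xt = [[sum(p_x0[y][i] * T[y][i][j] for i in range(2)) for j in range(2)]
        for y in range(2)]

def pf(x0, xt, y):
    """Forward path probability p[x(t)|y]."""
    return p_x0[y][x0] * T[y][x0][xt]

def pb(x0, xt, y):
    """Backward path probability p[x~(t)|y]: start from the forward final
    distribution and reuse the same kernel (time-symmetric protocol)."""
    return p_xt[y][xt] * T[y][xt][x0]

jz_cond = 0.0   # <exp(-sigma_{X|Y})>
jz_marg = 0.0   # <exp(-sigma_X)>
for y in range(2):
    for x0 in range(2):
        for xt in range(2):
            w = p_y[y] * pf(x0, xt, y)   # weight of the joint realization
            pf_m = sum(p_y[z] * pf(x0, xt, z) for z in range(2))  # p[x(t)]
            pb_m = sum(p_y[z] * pb(x0, xt, z) for z in range(2))  # p[x~(t)]
            jz_cond += w * pb(x0, xt, y) / pf(x0, xt, y)  # exp(-sigma_{X|Y})
            jz_marg += w * pb_m / pf_m                    # exp(-sigma_X)

print(jz_cond, jz_marg)   # both 1.0 up to floating-point rounding
```

Both sums equal $1$ regardless of the chosen distributions, since each backward path ensemble is normalized; this is exactly why the Jarzynski equalities in Eq.(4) hold at both levels of description.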
\begin{figure}[!ht]
\centering
\includegraphics[height=7.5cm,width=8cm]{FIG1.eps}
\caption{The entropy productions at fine and coarse-grained levels under the demon's control. A particle (shown as the blue circle) is confined in a box. The state of the particle can be represented by $0$ or $1$ when the particle is contained in the corresponding half of the box. A demon controls the particle system by exerting different potentials on the system. A trajectory of the particle in the position representation is given by $x(t)=\{x_0=0,x_t=1\}$. In the first row, the detailed information of the potential is unknown and the entropy production $\sigma_X$ can only be measured by using the coarse-grained dynamics. In the second row, the demon exerts an explicit potential on the system corresponding to $y=1$, and the entropy production at the fine level is given by $\sigma_{X|Y=1}$. In the third row, the demon exerts another potential explicitly, and the entropy production is given by $\sigma_{X|Y=0}$. The three entropy productions are not equal to each other in general.}
\label{FIG1}
\end{figure}
In general the two entropy productions shown above are different from each other. The gap between them indicates the demon's contribution to the entropy production, which is exactly the dissipative information $\sigma_I$ shown in Eq.(1), where the trajectory $y(t)$ is fixed at a single value of the state $y$. This can be seen from the following relationship,
\begin{equation}
\label{5.a}
\sigma_{X|Y}=\sigma_{X}+\sigma_I\tag{5.a}.
\end{equation}
The detailed contributions of the demon to the system and the baths can be revealed by the decomposition of dissipative information, and the relations between the entropy changes shown in Eq.(2,3) in the following equalities,
\begin{equation}
\label{5.b}
\begin{cases}\tag{5.b}
\sigma_I=\Delta i+\delta i\\
\Delta s_{X|Y}=\Delta s_X+\Delta i\\
\delta s_{X|Y}=\delta s_X+\delta i
\end{cases}.
\end{equation}
Here $\Delta i=i_0-i_t$ is the information change of the system during the dynamics, with $i_0=\log\frac{p(x_0|y)}{p(x_0)}$ and $i_t=\log\frac{p(x_t|y)}{p(x_t)}$ being the state mutual information between the system state and the demon's state at the initial and final time respectively, which has been introduced in the works \cite{34,35}; $\delta i=\rho-\widetilde{\rho}$ is the time-irreversible information transfer from the demon to the system. Here $\rho=\log \frac{p[x(t)|x_0,y]}{p[x(t)|x_0]}$ and $\widetilde{\rho}=\log \frac{p[\widetilde{x}(t)|x_t,y]}{p[\widetilde{x}(t)|x_t]}$ quantify the information transferred \cite{36,37} from the demon to the system along the forward-in-time and backward-in-time trajectories respectively. It is noteworthy that the information transfer is an informational measure of how the dynamics of the system depends on the demon, obtained by comparing the system dynamics at different coarse-grained levels of the demon's control ($p[x(t)|x_0,y]$ and $p[x(t)|x_0]$). In Eq.(5.b), the second equality identifies the role of $\Delta i$: it can be regarded as the demon's contribution to the entropy change in the system; the third equality indicates that $\delta i$ depicts the demon's contribution to the baths. Then the role of dissipative information is clear: it describes how the demon influences the entropy production through the nonequilibrium binary relation or interaction (see Fig.2). Moreover, this effect can be quantified precisely in the following fluctuation theorem,
\begin{equation}
\label{6}
\langle\exp(-\sigma_I)\rangle=1\tag{6}.
\end{equation}
This is a new fluctuation theorem, quite different from the Jarzynski equality, because it concerns the nonequilibriumness of the binary interactions between the systems rather than a single system.
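Eq.(6) and the decomposition in Eq.(5.a) can also be checked by exhaustive enumeration. The sketch below uses arbitrary illustrative distributions (not taken from the models in this paper) and the same convention for the backward process, namely that it starts from the forward final distribution and reuses the same kernel:

```python
import math

# Arbitrary illustrative model: binary demon state y, trajectory x(t) = (x0, xt).
p_y = [0.4, 0.6]                         # p(y)
p_x0 = [[0.6, 0.4], [0.1, 0.9]]          # p(x0|y)
T = [[[0.8, 0.2], [0.3, 0.7]],           # T[y][x0][xt]
     [[0.5, 0.5], [0.9, 0.1]]]

# Forward final distribution p(xt|y).
p_xt = [[sum(p_x0[y][i] * T[y][i][j] for i in range(2)) for j in range(2)]
        for y in range(2)]

pf = lambda x0, xt, y: p_x0[y][x0] * T[y][x0][xt]   # p[x(t)|y]
pb = lambda x0, xt, y: p_xt[y][xt] * T[y][xt][x0]   # p[x~(t)|y]

ft_sI = 0.0    # <exp(-sigma_I)>
avg_sI = 0.0   # <sigma_I>
for y in range(2):
    for x0 in range(2):
        for xt in range(2):
            pf_m = sum(p_y[z] * pf(x0, xt, z) for z in range(2))
            pb_m = sum(p_y[z] * pb(x0, xt, z) for z in range(2))
            s_xy = math.log(pf(x0, xt, y) / pb(x0, xt, y))  # sigma_{X|Y}
            s_x = math.log(pf_m / pb_m)                     # sigma_X
            s_I = s_xy - s_x                                # Eq.(5.a)
            w = p_y[y] * pf(x0, xt, y)
            ft_sI += w * math.exp(-s_I)
            avg_sI += w * s_I

print(ft_sI)    # 1.0 up to rounding: the fluctuation theorem, Eq.(6)
print(avg_sI)   # non-negative, consistent with Eq.(7)
```

The average $\langle\sigma_I\rangle$ is non-negative by Jensen's inequality applied to the fluctuation theorem, in agreement with the last line of Eq.(7).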
\begin{figure}[!ht]
\centering
\includegraphics[height=5cm,width=8cm]{FIG2.eps}
\caption{Detailed contributions of the demon to the entropy changes in the system (denoted by $\Delta i$) and the baths (denoted by $\delta i$) respectively.}
\label{FIG2}
\end{figure}
To resolve the puzzle of the demon, we first review the construction of the improper entropy productions. A construction $\eta=\Delta s_X+\delta s_{X|Y}$ is an improper entropy production because it violates the Jarzynski equality, $\langle\exp(-\eta)\rangle\neq 1$; in fact the two entropy changes $\Delta s_X$ and $\delta s_{X|Y}$ are measured at different levels of knowledge of the demon's control, according to Eqs.(2,3). On the other hand, $\eta$ arises because $\Delta i$ is neglected in $\sigma_{X|Y}$, $\eta=\sigma_{X|Y}-\Delta i$, as suggested by Eq.(5). This indicates that in Eq.(4) the Jarzynski equality for $\sigma_{X|Y}$ can be satisfied by adding the contribution of $\Delta i$ to $\eta$. This is the core of the Sagawa-Ueda theorem, which emphasizes $\Delta I=-\Delta i$ as the key characterization of the demon. Following a similar idea, one can construct different improper entropy productions. For instance, consider $\eta'=\Delta s_{X|Y} +\delta s_X$ where $\Delta s_{X|Y}$ and $\delta s_X$ are measured in an inconsistent manner in the dynamics, thus $\langle\exp(-\eta')\rangle\neq 1$. By adding $\delta i$ to $\eta'$, one has $\sigma_{X|Y}=\eta'+\delta i$, which gives rise to the same Jarzynski equality for $\sigma_{X|Y}$ in Eq.(4). However, neither $\Delta i$ nor $\delta i$ quantifies the total contribution of the demon, because $\Delta i$ gives the demon's influence on the system while $\delta i$ gives the demon's influence on the baths. Therefore, only the dissipative information $\sigma_I$, involving the demon's control on both the system and the baths, can take into account the overall contribution of the demon. Unlike previous works, the relation in Eq.(5) together with the corresponding set of fluctuation theorems in Eqs.(4,6) provides a full, clear picture of the Maxwell's demon.
Importantly, we further derive a series of inequalities to obtain the bounds on the dissipative entities (entropy productions and dissipative information). By applying Jensen's inequality $\langle\exp(-O)\rangle\ge \exp(-\langle O\rangle)$ to Eqs.(4,6) respectively, we have
\begin{equation}
\label{7}
\begin{cases}\tag{7}
\langle\sigma_{X|Y}\rangle\ge0,\text{ or } \langle\Delta s_{X|Y}\rangle\ge-\langle\delta s_{X|Y}\rangle\\
\langle\sigma_X\rangle\ge0,\text{ or }\langle\Delta s_{X}\rangle\ge-\langle\delta s_{X}\rangle\\
\langle\sigma_I\rangle\ge0,\text{ or } \langle\Delta i\rangle\ge-\langle\delta i\rangle
\end{cases}
\end{equation}
The first two are the 2nd law inequalities at different coarse-grained levels of the demon's control, corresponding to the Jarzynski equalities in Eq.(4), while the last inequality, for $\sigma_I$, shows the new feature of the nonequilibrium behavior brought by the demon. To see this, taking the average on both sides of Eq.(5) over the ensembles gives $\langle\sigma_{X|Y}\rangle=\langle\sigma_X\rangle+\langle\sigma_I\rangle$. Combining with Eq.(7), one sees that $\langle\sigma_{X|Y}\rangle$ quantifies the true (utmost) entropy production in the system. A lower bound of $\langle\sigma_{X|Y}\rangle$ different from that obtained from the 2nd law in Eq.(7) (which is zero) is given by the following inequality,
\begin{equation}
\label{8}
\langle\sigma_{X|Y}\rangle\ge\langle\sigma_I\rangle\ge0\tag{8}.
\end{equation}
$\langle\sigma_{X|Y}\rangle=0$ at the fine level indicates that the system is in a quasi-static (equilibrium) process where every control protocol is applied infinitely slowly. Such a demon does not work efficiently in practice. High efficiency means achieving the control in a finite time, which leads to a nonequilibrium process. Consequently the lower bound of $\langle\sigma_{X|Y}\rangle$ is always a positive number rather than $0$. Although measured properly, $\langle\sigma_X\rangle$ does not reflect the true nonequilibriumness of the system, due to the coarse-graining. Meanwhile, $\langle\sigma_X\rangle$ need not be strictly positive when the system is actually in nonequilibrium. However, there always exists a positive dissipative information ($\langle\sigma_I\rangle>0$) which is contained in the true entropy production, $\langle\sigma_{X|Y}\rangle$. This is due to the nonequilibrium part of the dynamical mutual information for the binary relationship between the demon and the system. The exception is the case where a demon controls the system with a unique and deterministic protocol: then $\langle\sigma_I\rangle=0$ and $\langle\sigma_X\rangle=\langle\sigma_{X|Y}\rangle$ during the dynamics. Otherwise, there exists an intrinsic nonequilibrium state of the system in general, which is characterized by an inevitable energy dissipation given by $\langle\sigma_{X|Y}\rangle=\langle\sigma_I\rangle>0$.
\section{New Bounds for Work and Heat}
A consequence of Eq.(8) is that the bounds on the heat and work should be revised beyond the ordinary 2nd law. To see this, let us assume that the system is coupled with a single thermal bath with temperature $T$ for simplicity. Then the system follows Langevin dynamics. The Hamiltonian of the system depends on the system state and the control protocol, denoted by $H(x,y)\equiv H(x,\Gamma(y))$ (the correspondence between $y$ and $\Gamma(y)$ is bijective). The change in the Hamiltonian during the dynamics is $\Delta H_{X|Y}=H(x_t,y)-H(x_0,y)$. With the assumption of the detailed fluctuation theorem (FT), the entropy production can be given in terms of the heat absorbed by the system, $\sigma_{X|Y}=\Delta s_{X|Y}-T^{-1}Q_{X|Y}$. According to the thermodynamic 1st law, $\Delta H_{X|Y}=Q_{X|Y}+W_{X|Y}$ where $W_{X|Y}$ is the work performed on the system, so $\sigma_{X|Y}$ can be rewritten in terms of the work, $\sigma_{X|Y}=T^{-1}(W_{X|Y}-\Delta F_Y)$. Here $\Delta F_Y$ is the Helmholtz free energy difference, $\Delta F_Y=\langle \Delta H_{X|Y}\rangle_{X|Y}-T\langle \Delta s_{X|Y}\rangle_{X|Y}$, under a given protocol. The probability weights in the averages of the state variables in $\Delta F_Y$ should be distinguished at the initial and final states: the weights $p(x_0|y)$ and $p(x_t|y)$ are used for $x_0$ and $x_t$ respectively. Then, according to the inequality for $\sigma_{X|Y}$ in Eq.(7), we reach the ordinary 2nd law inequalities for the heat and work,
\begin{equation}
\label{9}
\langle Q_{X|Y} \rangle\le T \langle\Delta s_{X|Y}\rangle \text{, and } \langle W_{X|Y}\rangle\ge \Delta F\tag{9}.
\end{equation}
Here, $\Delta F=\langle \Delta F_Y\rangle_{Y}$ is recognized as the averaged free energy difference over the ensemble of the demon state. It is noteworthy that different constructions of improper entropy productions may lead to different forms of Eq.(9). However, by noting the relation in Eq.(5) and rearranging terms, they are equivalent to each other, because all the improper entropy productions are generated by decomposing $\sigma_{X|Y}$ in different ways. On the other hand, we take the dissipative information into account. By noting Eq.(8), we reach tighter bounds for the heat and the work, compared to Eq.(9),
\begin{equation}
\label{10}
\begin{matrix}\tag{10}
&\langle Q_{X|Y}\rangle\le T \langle\Delta s_{X|Y}\rangle-T \langle\sigma_{I}\rangle\le T\langle\Delta s_{X|Y}\rangle;&\\
&\langle W_{X|Y}\rangle\ge \Delta F+T \langle\sigma_{I}\rangle\ge \Delta F. &
\end{matrix}
\end{equation}
Here we obtain a smaller upper bound for the heat and a larger lower bound for the work than the ordinary 2nd law. These new tighter bounds clearly indicate the nontrivial nonequilibrium state of a system controlled by a demon. It is important to note that, compared to the tighter bounds in the first inequalities in Eq.(10), the looser bounds on the heat and work in the second inequalities (also see Eq.(9)), which are also predicted by the Sagawa-Ueda theorem, represent the equilibrium limit in which the demon does not work efficiently.
Usually in a practical model of Maxwell's demon, such as the Szilard-type demon, the action of the demon is divided into two different processes: measurement and feedback control. In the measurement process, the demon observes the system and acquires the information of the system state. The demon is usually implemented by a physical system, and the measured system can be viewed as the outer controller of the demon. In this situation, an inevitable heat, the so-called measurement heat $Q_{mea}$, is generated by the demon during the information acquirement \cite{38,39}. In the feedback control process, the demon extracts a positive work $W_{ext}$ from the system with an additional energy dissipation. By noting the relations $Q_{mea}=-Q_{X|Y}$ and $W_{ext}=-W_{X|Y}$, the bounds for $Q_{mea}$ and $W_{ext}$ can be given by the ordinary 2nd law in Eq.(9), where the equalities hold for infinitely slow quasi-static or equilibrium processes. However, if the demon works efficiently, we come to a nonequilibrium situation where a positive energy dissipation originates from the dissipative information. Thus, new bounds for $Q_{mea}$ and $W_{ext}$ can be given by Eq.(10) such that
\begin{equation}
\label{11}
\begin{matrix}\tag{11}
&\langle Q_{mea}\rangle\ge T \langle\sigma_{I}\rangle-T \langle\Delta s_{X|Y}\rangle\ge -T\langle\Delta s_{X|Y}\rangle;&\\
&\langle W_{ext}\rangle\le -\Delta F-T \langle\sigma_{I}\rangle\le -\Delta F &
\end{matrix}
\end{equation}
This means that there is more heat generated in the measurement and less work extracted in the feedback control than the estimations given by the ordinary 2nd law.
\section{Illustrative Cases}
To illustrate our idea in this letter, we calculate the cases of the information ratchets shown in Fig.3, which can be tested in experiments. A potential with two wells is exerted on a confined particle. The energy difference between the two wells is $V>0$. At equilibrium, the probabilities that the particle is at the lower and the higher well are $p_l=[1+\exp(-V)]^{-1}$ and $p_h=1-p_l$ respectively ($p_l>1/2$). An outside controller can control the particle by reversing the profile of the potential, i.e., by raising the lower well up to $V$ and lowering the higher well down to $0$. The action of the controller is assumed to be fast enough that the particle cannot react. Without loss of generality, the temperature of the environmental bath is set to $T=1$.
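For this sudden-reversal protocol the work statistics follow in closed form: the particle is caught at a fixed position while the wells swap, so the work on the particle is $+V$ with probability $p_l$ and $-V$ with probability $p_h$, and reversing the potential leaves the partition function, hence the free energy, unchanged ($\Delta F=0$). A short numerical sketch (illustrative only, with $T=1$) checks the resulting Jarzynski equality $\langle e^{-W}\rangle=1$:

```python
import math

def quench_stats(V):
    """Instantaneous reversal of a two-well potential with gap V (k_B*T = 1).

    Before the quench the particle is equilibrated; the reversal is assumed
    fast enough that the particle does not move, so the work is the energy
    jump at fixed position: +V from the lower well, -V from the higher one.
    """
    p_l = 1.0 / (1.0 + math.exp(-V))   # equilibrium weight of the lower well
    p_h = 1.0 - p_l
    mean_W = (p_l - p_h) * V           # average work done on the particle
    # Reversing the potential leaves the partition function unchanged, so
    # dF = 0 and the Jarzynski equality reads <exp(-W)> = 1.
    jarzynski = p_l * math.exp(-V) + p_h * math.exp(V)
    return mean_W, jarzynski

results = [(V, *quench_stats(V)) for V in (0.1, 0.5, 1.0, 2.0)]
for V, mean_W, jz in results:
    print(V, mean_W, jz)   # jz = 1 for every V; mean_W >= 0
```

The equality holds exactly for every $V$ because $p_h e^{V}=p_l$ under the equilibrium initial condition, so the two work branches always sum to one.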
\begin{figure}[!ht]
\centering
\includegraphics[height=2.5cm,width=8cm]{FIG3.eps}
\caption{The confined particle works as a demon or is controlled by a demon. In (a) and (b), the particle is used as a demon and measures the system state before the control. The system state, denoted by $y$, is represented by the location of the lower well ($0$ or $1$) in the potential. In (a), $y=1$; in (b), $y=0$. The final state of the particle, denoted by $x_t$, is taken as the observation of $y$. In (c) and (d), the particle is controlled by a demon. The initial state of the particle, denoted by $x_0$, is represented by $h$ or $l$ when the particle is at the higher or the lower well. When spotting the state $x_0=h$, the demon reverses the potential and extracts a positive work of $V$ from the particle system.}
\label{FIG3}
\end{figure}
If the particle works as a demon (shown in Fig.3. (a) and (b)), the particle is supposed to measure the state of the controlled system first. The state of the particle is denoted by $x=0$ or $1$ when it is at the left or the right well respectively. The system state can be represented by the location of the lower well, with the value $y=0$ or $1$ with equal probability $p(y=0)=p(y=1)=1/2$. The particle is initially at equilibrium until the system state changes. Correspondingly, the potential is reversed by the system immediately and the particle starts measuring the current system state. When equilibrium is reached again, the final state $x_t$ of the particle is taken as an observation of $y$. The probability of a measurement error is given by the probability of the particle being at the higher well, $p_{X_t|Y}(x_t\neq y|y)=p_h$. On the other hand, the measurement precision is characterized by the probability of the particle being at the lower well, $p_{X_t|Y}(x_t= y|y)=p_l$. By noting the definitions and relationships shown in Eq.(2), the averaged measurement heat generated by the particle can be given by $\langle Q_{mea}\rangle=(1/2-p_h)V$, and the entropy change can be evaluated as $\langle\Delta s_{X|Y}\rangle=-I_t$ (see Eqs.(S.26-S.28) in \cite{40}). Here $I_t=\log 2-S\ge0$ is the final mutual information which measures the correlation between the observation $x_t$ and the state $y$ \cite{34,35}, where the Shannon entropy $S$ is given by $S=-p_l\log p_l-p_h\log p_h$. Then according to Eq.(11), the new bound on $\langle Q_{mea}\rangle$ in this case can be given by
\begin{equation}
\label{12}
\langle Q_{mea}\rangle\ge \langle\sigma_{I}\rangle+I_t \ge I_t.\tag{12}
\end{equation}
Here the dissipative information $\langle\sigma_{I}\rangle$ can be calculated by using the probabilities of the forward and backward trajectories $x(t)=\{x_0,x_t\}$ and $\widetilde{x}(t)=\{x_t,x_0\}$ respectively. By inserting these probabilities into Eq.(1), we have the expression $\langle\sigma_{I}\rangle=\log\sqrt{2p_l^2+2p_h^2}\ge0$ (see Eq.(S.29) in \cite{40}). Although the measurement precision characterized by $p_l$ increases as the potential height $V$ increases, higher precision also raises both the averaged measurement heat and the lower bound of the energy dissipation quantified by the dissipative information in this case. The numerical results can be found in Fig.4. (a) and (b).
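The closed-form expressions quoted above for the measurement step can be evaluated directly. The following sketch is a numerical illustration (not taken from \cite{40}) that checks the chain of inequalities in Eq.(12) over a range of potential heights:

```python
import math

def measurement_bounds(V):
    """Closed-form entities for the measurement step (k_B*T = 1)."""
    p_l = 1.0 / (1.0 + math.exp(-V))            # measurement precision
    p_h = 1.0 - p_l
    Q_mea = (0.5 - p_h) * V                     # averaged measurement heat
    S = -p_l * math.log(p_l) - p_h * math.log(p_h)
    I_t = math.log(2.0) - S                     # final mutual information
    sigma_I = math.log(math.sqrt(2.0 * p_l**2 + 2.0 * p_h**2))
    return Q_mea, I_t, sigma_I

rows = [(V, *measurement_bounds(V)) for V in (0.25, 0.5, 1.0, 2.0)]
for V, Q, I_t, s_I in rows:
    # Eq.(12): <Q_mea> >= <sigma_I> + I_t >= I_t
    print(V, Q, s_I + I_t, I_t)
```

For every $V$ the averaged measurement heat stays above the new bound $\langle\sigma_I\rangle+I_t$, which in turn stays above the traditional bound $I_t$; all three vanish together as $V\to 0$, where the measurement becomes uninformative.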
\begin{figure}[!ht]
\centering
\includegraphics[height=5cm,width=8cm]{FIG4.eps}
\caption{In (a), the averaged measurement heat $\langle Q_{mea}\rangle$ (solid line), the traditional lower bound $I_t$ (dotted line), and the new lower bound $I_t+\langle\sigma_I\rangle$ (dash line) in the measurement are plotted as functions of the measurement precision $p_l$. The potential height $V$ is raised from $0$ to $1$. Correspondingly, $p_l$ increases from $0.50$ to $0.73$ monotonically. The corresponding dissipative information $\langle\sigma_I\rangle$ (dash line) and the entropy production $\langle\sigma_{X|Y}\rangle$ (solid line) in the measurement are shown as functions of the precision $p_l$ in (b). In (c), the extracted work $\langle W_{ext}\rangle$ (solid line), the traditional upper bound $I_0$ (dotted line), and the new upper bound $I_0-I_c$ (dash line) are plotted as functions of the measurement precision $1-\epsilon$, where $1-\epsilon$ ranges from $0.5$ to $1$ and the potential height is $V=1$. The corresponding dissipative information $\langle\sigma_I\rangle$ (dash line) and the entropy production $\langle\sigma_{X|Y}\rangle$ (solid line) are shown in (d).}
\label{FIG4}
\end{figure}
Next, we use a demon to extract positive work from the particle system (shown in Fig.3. (c) and (d)). In this case, the state of the particle is denoted by $x=l$ or $h$ when it is at the lower or the higher well respectively. Initially, the particle is at equilibrium. The demon measures the state of the particle first and obtains the observation $y$. The demon then performs feedback control according to the observation $y$. When the particle is observed to be at the higher well, the demon reverses the potential immediately and extracts an amount of work $W_{ext}=V$. After the control, the demon does nothing until the particle reaches equilibrium again. For a practical consideration, the demon's measurement can have a random error, and this error certainly lowers the efficiency of the work extraction. Here we simply assume that the measurement error occurs with a fixed probability $p_{Y|X_0}(y\neq x_0|x_0)=\epsilon$. By using Eq.(2) and noting the thermodynamic 1st law, the extracted work is $\langle W_{ext}\rangle=(p_h-\epsilon)V$ on average, and the effective free energy difference is equal to the mutual information change during the dynamics, $\Delta F=-I_0$ (see Eqs.(S.30-S.33) in \cite{40}). Here the mutual information $I_0=S_Y-S_\epsilon\ge0$ represents the initial correlation between the demon and the particle, where the Shannon entropies are given by $S_Y=-p_y\log p_y-(1-p_y)\log(1- p_y)$ and $S_\epsilon=-\epsilon\log \epsilon-(1-\epsilon)\log (1-\epsilon)$, with $p_y=p_l(1-\epsilon)+p_h\epsilon$ representing the probability of the observation $y=l$. Then, due to Eq.(11), the new bound on $\langle W_{ext}\rangle$ can be given by
\begin{equation}
\label{13}
\langle W_{ext}\rangle\le I_0-I_c \le I_0.\tag{13}
\end{equation}
Here the mutual information $I_c=S_Y-S\ge0$ measures the correlation between the demon and the particle right after the control, where the Shannon entropy is $S=-p_l\log p_l-p_h\log p_h$. It is important to note that $I_c$ is exactly the information that is not used to extract work and merely dissipates into the bath as the dissipative information $\langle\sigma_{I}\rangle=I_c$. This result can be verified by evaluating Eq.(1) (see Eq.(S.18) in \cite{40}). For this reason, the demon can only extract work less than the mutual information difference before and after the control, quantified by $I_0-I_c$. We can see that a higher measurement precision, characterized by $1-\epsilon$, increases the averaged extracted work (at fixed potential height $V$), while the inevitable dissipative information is decreased by the increasing precision in this case. The numerical results are shown in Fig.2. (c) and (d). Also, we note that the dissipative information $\langle \sigma_{I}\rangle$ bounds the entropy production $\langle \sigma_{X|Y}\rangle$ from below in both the measurement and the feedback control (see Fig.4. (b) and (d)). This verifies the inequality in Eq.(8).
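As an illustrative numerical check of the bound chain in Eq.(13), the closed-form expressions above can be evaluated directly. The following Python sketch is not part of the original analysis; the parameter values $V=1$ and $\epsilon=0.1$ are chosen purely for illustration.

```python
import math

# Numerical check of Eq.(13): <W_ext> <= I_0 - I_c <= I_0.
# Illustrative parameters (not from the paper's figures): V = 1, eps = 0.1.
V, eps = 1.0, 0.1

def h(p):
    """Binary Shannon entropy in nats."""
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

p_l = 1.0 / (1.0 + math.exp(-V))      # equilibrium weight of the lower well
p_h = 1.0 - p_l                       # equilibrium weight of the higher well
p_y = p_l * (1 - eps) + p_h * eps     # probability of observing y = l

I_0 = h(p_y) - h(eps)                 # initial demon-particle correlation
I_c = h(p_y) - h(p_l)                 # correlation left right after the control
W_ext = (p_h - eps) * V               # average extracted work

assert W_ext <= I_0 - I_c <= I_0
```

For these parameters the extracted work stays clearly below the tightened bound $I_0-I_c$, which in turn lies below the Sagawa-Ueda bound $I_0$.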
On the other hand, we find that the tighter upper bound in Eq.(13) is equivalent to the \emph{information process 2nd law}\cite{24,25,26}, by noting $I_0-I_c=S-S_\epsilon$. Here $S$ and $S_\epsilon$ can be regarded as the Shannon entropies of a ``0,1" tape before and after the information processing respectively\cite{23}. This indicates that the proposed FTs in this letter can be applied to the area of thermodynamic computing from a general perspective. In addition, the looser bounds for the heat and work in Eqs.(12,13) are predicted by the 2nd law (Sagawa-Ueda theorem), and these bounds can only be achieved in quasistatic (or equilibrium) control protocols.
\section{Conclusion}
Traditional analyses of the Maxwell's demon focus on how the 2nd law is violated by the system and then rescued by some hidden demon-induced entities. These entities were believed to be the key characterization of the demon. In contrast, we show that the system does not disobey the 2nd law whether the demon is hidden or not, which can be seen in the new set of fluctuation theorems (Eq.(4)) for the entropy productions when they are correctly measured (Eqs.(2,3)). Intrinsically, the nonequilibrium behavior of the system led by the demon is due to the time-irreversibility of the binary relationship between them, which is quantified by the dissipative information (Eq.(1)). Besides, we prove another new fluctuation theorem for this dissipative information (Eq.(6)). This theorem (Eq.(6)), combined with the other new fluctuation theorems (Eq.(4)) for the entropy productions, gives a precise quantification of the effect of the demon. An apparent consequence of these theorems is that there exists an inevitable energy dissipation originating from the positive dissipative information, which leads to tighter bounds for the work and the heat (Eq.(11)) than those estimated by the ordinary 2nd law. We also suggest a possible experimental realization in which these work and heat bounds can be measured and tested. These results offer a general picture of a large class of models of the Maxwell's demon.
\emph{Proof of the Fluctuation Theorems--}The probabilities (densities) $p[x(t)|y]$ and $p[x(t)]$ are assumed to be nonnegative, $p[x(t)|y], p[x(t)]\ge 0$, and normalized, $\int p[x(t)|y] D x(t)=1$ and $\int p[x(t)] D x(t)=1$. Besides, we require that the differentials with respect to the time-forward and time-backward trajectories are equal to each other, i.e., $D x(t)=D \widetilde{x}(t)$. For the entropy productions and the dissipative information in Eqs.(1,2,3), we obtain the equalities,
\begin{equation*}
\begin{matrix}
&\langle \exp(-\sigma_{X|Y})\rangle=\int dy \int p(y)p[x(t)|y]\frac{p[\widetilde{x}(t)|y]}{p[x(t)|y]}D x(t)=1\\
&\langle \exp(-\sigma_{X})\rangle=\int p[x(t)]\frac{p[\widetilde{x}(t)]}{p[x(t)]}D x(t)=1\\
&\langle \exp(-\sigma_{I})\rangle=\int dy \int p[x(t)]p[y|x(t)]\frac{p[y|\widetilde{x}(t)]}{p[y|x(t)]}D x(t)=1\\
\end{matrix}
\end{equation*}
In the last equation for $\sigma_{I}$, by noting the relation in the probabilities that $p[y|x(t)]=\frac{p(y)p[x(t)|y]}{p[x(t)]}$, we have $\sigma_{I}=i-\widetilde{i}=\log \frac{p[y|x(t)]}{p[y|\widetilde{x}(t)]}$. This completes the proof on the new FTs in Eqs.(4,6).
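The mechanism of the proof — the exponential average collapsing to the normalization of the backward path probability — can be illustrated with arbitrary discrete distributions. The following Python sketch uses randomly generated distributions (a toy model, not the particle system of this letter): any normalized choice of backward probabilities yields $\langle \exp(-\sigma_{X|Y})\rangle=1$.

```python
import random

# The integral FTs above reduce to the normalization of the backward
# path probability: <exp(-sigma_{X|Y})> = sum_y p(y) sum_x p~[x|y] = 1.
random.seed(0)

def random_dist(n):
    """Random normalized probability vector of length n."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

n_traj, n_y = 5, 3
p_y = random_dist(n_y)
p_fwd = [random_dist(n_traj) for _ in range(n_y)]   # p[x(t)|y]
p_bwd = [random_dist(n_traj) for _ in range(n_y)]   # p[x~(t)|y], any normalized choice

# <exp(-sigma_{X|Y})>, averaged over p(y) p[x(t)|y]
avg = sum(p_y[y] * p_fwd[y][x] * (p_bwd[y][x] / p_fwd[y][x])
          for y in range(n_y) for x in range(n_traj))
assert abs(avg - 1.0) < 1e-12
```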
\section{Acknowledgements}
Qian Zeng thanks the support in part by National Natural Science Foundation of China (NSFC-91430217).
\section{Experimental Estimations on Thermodynamical Entities}
\subsection{Coarse-Grained Markovian Dynamics of Controlled System}
In real cases, the estimation of the entropy productions (Eqs.(2,3)) and the dissipative information $\sigma_I$ (Eq.(1)) can be difficult for non-Markovian systems, where the results may depend strongly on the system trajectories. Here we present a heuristic method for simplifying the estimation of the new bounds on the work and the heat given in Eq.(11). The Szilard's type demon is considered in the following discussion\cite{0,1,2,3,4,5}. For a general description, let us still call the controller the ``demon" in both the measurement and feedback control processes, while the controlled system is referred to as the ``system". We still assume that the controlled system is coupled to a single thermal environmental bath where the temperature is set to $T=1$.
The demon performs the control on the system in a finite time $t$. The general settings are as follows. At the beginning, the system and the demon are in the states $x_{0}$ and $y$ respectively. The probability of $x_0$ is represented by $p_{X_0}(x_0)$. Then $x_{0}$ and $y$ can be correlated at the initial time with stable conditional probability $p_{X_0|Y}(x_{0}|y)$. The Szilard's type demon can perform a control $\Gamma(y)$ that changes the state from $x_{0}$ to $x_{c}$ immediately after the initial time. Here, $x_c$ need not be equal to $x_0$. Due to this protocol, the state $x_{c}$ is then correlated with $y$, described by the probability $p_{X_c|Y}(x_c|y)$. Besides, the control time can be short enough that the system does not react during the control. In this situation, the demon permutes the probabilities of the states in the distribution $p_{X_0}$ during the control, and turns $p_{X_0|Y=y}$ into $p_{X_c|Y=y}$ right after the control. That is, the conditional probabilities satisfy $p_{X_c|Y}(x_c|y)=p_{X_0|Y}(x_0|y)$ if the state is changed from $x_0$ to $x_c$ at a given $y$. More details about the conditional probabilities $p_{X_0|Y}(x_{0}|y)$ and $p_{X_c|Y}(x_{c}|y)$ should be discussed for the measurement and feedback control processes individually.
\begin{enumerate}
\item In the measurement process where the system is supposed to measure the demon state $y$, the system does not influence the demon state. Thus, the system and the demon are uncorrelated at the initial state before the control (measurement). Based on the settings in the above, we then have
\begin{equation}
\label{S.1}
\begin{cases}\tag{S.1}
p_{X_0|Y}(x_0|y)=p_{X_0}(x_0), \\
p_{X_c|Y}(x_c|y)=p_{X_0}(x_0).
\end{cases}
\end{equation}
Besides, the a priori probability of the demon state $p(y)$ should be known for the succeeding calculations.
\item In the feedback control process, the demon state $y$ is regarded as the observation of the system state $x_0$. For this reason, $y$ is assumed to be initially correlated with $x_0$ with stable probability $p_{Y|X_0}(y|x_0)$ which should be known for the following discussions. According to the Bayes' rule for the probabilities, we have that
\begin{equation}
\label{S.2}
\begin{cases}\tag{S.2}
p(y)=\sum_{x_0} p_{Y|X_0}(y|x_0)P_{X_0}(x_0),\\
p_{X_0|Y}(x_0|y)=\frac{1}{p(y)}p_{Y|X_0}(y|x_0)p_{X_0}(x_0), \\
p_{X_c|Y}(x_c|y)=p_{X_0|Y}(x_0|y).
\end{cases}
\end{equation}
Here, we should emphasize that the conditional probability $p_{X_c|Y}(x_c|y')$ can be unequal to $p_{X_0|Y}(x_0|y')$ at another $y'\neq y$, although $p_{X_c|Y}(x_c|y)=p_{X_0|Y}(x_0|y)$. This depends on the concrete control protocol.
\end{enumerate}
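The Bayes-rule bookkeeping of Eq.(S.2) can be sketched for a binary system. In the following Python sketch, the prior $p_{X_0}$ and the error rate are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the Bayes-rule relations in Eq.(S.2) for a binary system,
# using an illustrative error model p(y != x0 | x0) = eps.
eps = 0.1
p_x0 = {'l': 0.7, 'h': 0.3}                              # illustrative prior
p_y_given_x0 = {('l', 'l'): 1 - eps, ('h', 'l'): eps,
                ('l', 'h'): eps, ('h', 'h'): 1 - eps}    # keys: (y, x0)

# p(y) = sum_x0 p(y|x0) p(x0)
p_y = {y: sum(p_y_given_x0[(y, x0)] * p_x0[x0] for x0 in p_x0)
       for y in ('l', 'h')}
# p(x0|y) = p(y|x0) p(x0) / p(y)
p_x0_given_y = {(x0, y): p_y_given_x0[(y, x0)] * p_x0[x0] / p_y[y]
                for x0 in p_x0 for y in ('l', 'h')}

assert abs(sum(p_y.values()) - 1.0) < 1e-12
for y in ('l', 'h'):
    assert abs(sum(p_x0_given_y[(x0, y)] for x0 in p_x0) - 1.0) < 1e-12
```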
After the control, the demon does not disturb the system dynamics for the rest of the time. The conditional probability of the final system state is represented by $p_{X_t|Y}(x_t|y)$. Driven by the bath, the system dynamics is conditionally Markovian right after the control, with the new initial state $x_c$. The Markovian dynamics in continuous time is given by the following master equation,
\begin{equation}
\label{S.3}
\partial_t p_{X_t|Y}=K p_{X_t|Y},\tag{S.3}
\end{equation}
where $p_{X_t|Y}$ denotes the conditional distribution of the system state $x$ conditioning on the demon state $y$ at time $t$; $K$ denotes the matrix of the transition rates of the dynamics. Then given the initial conditional distribution as $p_{X_c|Y}$, the solution of Eq.(S.3) can be given by
\begin{equation}
\label{S.4}
p_{X_t|Y}=\exp(Kt)p_{X_c|Y}.\tag{S.4}
\end{equation}
The system approaches the equilibrium exponentially fast. If the time $t$ is long enough for the stochastic environmental force to drive the system, then the final conditional distribution of the system state can be approximately taken as the equilibrium distribution $p_{X_t|Y}=p^{ss}_{X|Y}$. Here the equilibrium distribution satisfies the equation $p^{ss}_{X|Y}=Qp$ for arbitrary initial distribution $p$, where $Q=\lim_{t\to \infty}\exp(Kt)$ is recognized as the matrix of the transition probabilities in the long time limit, and each column of $Q$ is exactly equal to the equilibrium distribution $p^{ss}_{X|Y}$. This suggests that for the long time evolution, the coarse-grained dynamics can be approximately given by
\begin{equation}
\label{S.5}
p_{X_t|Y}=Qp_{X_c|Y},\tag{S.5}
\end{equation}
where the transition probability is independent of $x_c$, i.e., $Q(x_t|x_c,y)=p_{X_t|Y}(x_t|y)=p^{ss}_{X|Y}(x_t|y)$.
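The claim that each column of $Q=\lim_{t\to\infty}\exp(Kt)$ equals the equilibrium distribution can be checked numerically with the two-state rates introduced later in Eq.(S.22). The following Python sketch assumes NumPy and uses illustrative parameters $V=1$, $v_0=0.5$.

```python
import numpy as np

# For a two-state system with rates from Eq.(S.22), exp(K t) converges to a
# matrix Q whose columns both equal the equilibrium distribution (Eq.(S.5)).
V, v0 = 1.0, 0.5
r_lh = v0 * np.exp(-V / 2)          # rate lower -> higher
r_hl = v0 * np.exp(+V / 2)          # rate higher -> lower
K = np.array([[-r_lh,  r_hl],
              [ r_lh, -r_hl]])      # master-equation generator, cf. Eq.(S.24)

# exp(K t) via the eigendecomposition of K (distinct real eigenvalues here)
w, U = np.linalg.eig(K)
def expKt(t):
    return (U @ np.diag(np.exp(w * t)) @ np.linalg.inv(U)).real

p_eq = np.array([1.0, np.exp(-V)]) / (1 + np.exp(-V))   # (p_l, p_h), Eq.(S.21)
Q = expKt(50.0)                     # effectively the long-time limit
assert np.allclose(Q[:, 0], p_eq) and np.allclose(Q[:, 1], p_eq)
```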
The probability of the demon state $y$, denoted by $p(y)$, is assumed to be known in the general discussion. By using $p(y)$, the probabilities of the system states can be evaluated as $p_X(x)=\sum_y p(y)p_{X|Y}(x|y)$. The coarse-grained dynamics of the system is in general non-Markovian, and there is a discontinuity in the dynamics at the control time. To calculate the entropy productions and the dissipative information, we can evaluate the probabilities of the coarse-grained trajectory $x(t)=\{x_c, x_t\}$ (the initial state $x_0$ should be known) and of the time-reversal trajectory $\widetilde{x}(t)=\{x_t, x_c\}$ right after the control, by using the Markovian nature of the dynamics (Eq.(S.5)). We then have the following trajectory probabilities:
\begin{enumerate}
\item For the measurement process (due to Eq.(S.1)), we have
\begin{equation}
\label{S.6}
\begin{cases}\tag{S.6}
p[x(t)|y]=p_{X_0}(x_0)p^{ss}_{X|Y}(x_t|y),\\
p[\widetilde{x}(t)|y]=p^{ss}_{X|Y}(x_t|y)p^{ss}_{X|Y}(x_c|y),\\
p[x(t)]=p_{X_0}(x_0)\sum_y p(y)p^{ss}_{X|Y}(x_t|y),\\
p[\widetilde{x}(t)]=\sum_y p(y)p[\widetilde{x}(t)|y].
\end{cases}
\end{equation}
\item For the feedback control process (according to Eq.(S.2)), we have that
\begin{equation}
\label{S.7}
\begin{cases}\tag{S.7}
p[x(t)|y]=p_{X_c|Y}(x_c|y)p^{ss}_{X|Y}(x_t|y),\\
p[\widetilde{x}(t)|y]=p^{ss}_{X|Y}(x_t|y)p^{ss}_{X|Y}(x_c|y), \\
p[x(t)]=\sum_y p(y)p[x(t)|y],\\
p[\widetilde{x}(t)]=\sum_y p(y)p[\widetilde{x}(t)|y].
\end{cases}
\end{equation}
Particularly, for the cases where the final correlation between the system and the demon vanishes, we have that $p^{ss}_{X|Y}(x|y)=p^{ss}_{X}(x)$. Then Eq.(S.7) can be reduced into
\begin{equation}
\label{S.8}
\begin{cases}\tag{S.8}
p[x(t)|y]=p_{X_c|Y}(x_c|y)p^{ss}_{X}(x_t),\\
p[\widetilde{x}(t)|y]=p^{ss}_{X}(x_t)p^{ss}_{X}(x_c), \\
p[x(t)]=p^{ss}_{X}(x_t)\sum_y p(y)p_{X_c|Y}(x_c|y),\\
p[\widetilde{x}(t)]=p^{ss}_{X}(x_t)p^{ss}_{X}(x_c).
\end{cases}
\end{equation}
\end{enumerate}
Then we can insert the probabilities in Eq.(S.6) and Eq.(S.7) (or Eq.(S.8)) into Eqs.(1,2,3) to evaluate the entropy productions and the dissipative information in the measurement and the feedback control processes respectively. Furthermore, Eq.(S.6) and Eq.(S.7) indicate that the trajectory-dependent thermodynamical entities can be easily estimated in experiments, because the calculation only involves counting the probabilities (or relative frequencies) of the states.
\subsection{Heat, Work, Free Energy Change, and Entropy Production}
The initial Hamiltonian of the system can be independent of the current demon state $y$, and is denoted by $H_0(x)$. The Hamiltonian right after the control can depend on both the states of the system and the demon, and is represented by $H_c(x,y)$. During the control, the Szilard's type demon changes the Hamiltonian fast enough that the system does not react. Then we can assume that during the control the change in the total energy is entirely caused by the averaged work performed on the system, that is,
\begin{equation}
\label{S.9}
\langle W_{X|Y}\rangle=\langle H\rangle_c-\langle H\rangle_0,\tag{S.9}
\end{equation}
where $\langle H\rangle_{0}=\sum_{x}p_{X_0}(x)H_0(x)$ and $\langle H\rangle_{c}=\sum_{x,y}p(y)p_{X_c|Y}(x|y)H_c(x,y)$ are the total energies right before and after the control respectively.
The total energy change during the dynamics is recognized as,
\begin{equation}
\label{S.10}
\langle \Delta H\rangle=\langle H\rangle_t-\langle H\rangle_0,\tag{S.10}
\end{equation}
where $\langle H\rangle_{t}=\sum_{x,y}p(y)p_{X_t|Y}(x|y)H_c(x,y)$ is the total energy at the final state.
Then following the thermodynamical 1st law, the averaged heat absorbed by the system can be given by,
\begin{equation}
\label{S.11}
\langle Q_{X|Y}\rangle=\langle \Delta H\rangle-\langle W_{X|Y}\rangle=\langle H\rangle_t-\langle H\rangle_c.\tag{S.11}
\end{equation}
Thus, the true entropy production right after the control can be given by,
\begin{equation}
\label{S.12}
\langle \sigma_{X|Y}\rangle=\langle \Delta s_{X|Y}\rangle-\langle Q_{X|Y}\rangle\ge0.\tag{S.12}
\end{equation}
Here the true entropy change is quantified by
\begin{equation}
\label{S.13}
\langle \Delta s_{X|Y}\rangle=\sum_{x,y}p(y)p_{X_c|Y}(x|y)\log p_{X_c|Y}(x|y)-\sum_{x,y}p(y)p_{X_t|Y}(x|y)\log p_{X_t|Y}(x|y).\tag{S.13}
\end{equation}
By recalling the thermodynamical 1st law, we can rewrite the true entropy production in Eq.(S.12) in terms of the work instead of the heat as follows,
\begin{equation}
\label{S.14}
\langle \sigma_{X|Y}\rangle=\langle W_{X|Y}\rangle- \Delta F\ge0,\tag{S.14}
\end{equation}
In Eq.(S.14), the term $\Delta F$ is known as the effective free energy change during the dynamics, and the 2nd law $\langle \sigma_{X|Y}\rangle\ge 0$ still holds for the work. Then according to Eq.(S.12) and Eq.(S.14), $\Delta F$ can be given as follows,
\begin{equation}
\label{S.15}
\Delta F=\langle \Delta H\rangle- \langle \Delta s_{X|Y}\rangle,\tag{S.15}
\end{equation}
where the total energy change $\langle \Delta H\rangle$ and the true entropy change $\langle \Delta s_{X|Y}\rangle$ have been given in Eq.(S.10) and Eq.(S.13) respectively.
We should note that in the measurement process the averaged measurement heat can be given as $\langle Q_{mea}\rangle=-\langle Q_{X|Y}\rangle$, and in the feedback control the extracted work can be taken as $\langle W_{ext}\rangle=-\langle W_{X|Y}\rangle$.
\subsection{Dissipative Information}
The dissipative information for the general case has been given by Eq.(1). We take the average of the dissipative information as follows,
\begin{equation}
\label{S.16}
\langle \sigma_{I}\rangle=\sum_{y,x(t)}p(y)p[x(t)|y]\left\{\log\frac{p[x(t)|y]}{p[x(t)]}- \log\frac{p[\widetilde{x}(t)|y]}{p[\widetilde{x}(t)]}\right\}.\tag{S.16}
\end{equation}
Here, for the Szilard's type demon, we can give the explicit expressions of $\langle \sigma_{I}\rangle$ in the measurement and the feedback control processes as follows,
\begin{enumerate}
\item For the measurement process (Eq.(S.6)), we have
\begin{equation}
\label{S.17}
\langle \sigma_{I}\rangle=I_t-\langle \widetilde{i}\rangle,\tag{S.17}
\end{equation}
where
\begin{equation*}
\begin{cases}
I_t=\sum_{y,x_t}p(y)p^{ss}_{X|Y}(x_t|y)\log\frac{p^{ss}_{X|Y}(x_t|y)}{\sum_y p(y)p^{ss}_{X|Y}(x_t|y)},\\
\langle \widetilde{i}\rangle=\sum_{y,x_c,x_t}p(y)p_{X_c|Y}(x_c|y)p^{ss}_{X|Y}(x_t|y) \log\frac{p^{ss}_{X|Y}(x_t|y)p^{ss}_{X|Y}(x_c|y)}{\sum_yp(y)p^{ss}_{X|Y}(x_t|y)p^{ss}_{X|Y}(x_c|y)}.
\end{cases}
\end{equation*}
Here $I_t$ is the mutual information which quantifies the final correlation between the system and the demon; and $\langle \widetilde{i}\rangle$ is the averaged dynamical mutual information along the time-reversed trajectories.
\item For the feedback control process, considering the cases where the final correlation vanishes (Eq.(S.8)), we have that
\begin{equation}
\label{S.18}
\langle \sigma_{I}\rangle=I_c=\sum_{y,x_c}p(y)p_{X_c|Y}(x_c|y)\log\frac{p_{X_c|Y}(x_c|y)}{\sum_y p(y)p_{X_c|Y}(x_c|y)}.\tag{S.18}
\end{equation}
Here $I_c$ is the mutual information which quantifies the remaining correlation between the system and the demon right after the control.
\end{enumerate}
Although the system trajectories start with the state $x_c$ right after the control rather than with the initial state $x_0$, the new bounds in Eq.(10) or Eq.(11) still hold for the heat and the work in the case of the Szilard's type demon. This is because the heat $Q_{X|Y}$ is generated along the trajectories $x(t)=\{x_c, x_t\}$, and the dynamics shown in Eq.(S.3) is consistent in time. Consequently, the dissipative information $\langle \sigma_{I}\rangle$, which is measured along the trajectories $x(t)$, can be used to bound the averaged heat as shown in Eq.(10) or Eq.(11). On the other hand, the effective free energy change $\Delta F$ guarantees that the averaged work $\langle W_{X|Y}\rangle$ can be bounded by using the entropy production $\langle \sigma_{X|Y}\rangle$ quantified along the trajectories $x(t)$. Then $\langle W_{X|Y}\rangle$ can also be bounded by using $\langle \sigma_{I}\rangle$, since $\langle W_{X|Y}\rangle- \Delta F=\langle\sigma_{X|Y}\rangle\ge \langle \sigma_{I}\rangle$ (Eq.(8)). This yields the new bounds for the work in Eq.(10) or Eq.(11).
\section{Detailed Settings and Calculations on the Cases of the Information Ratchet}
\subsection{Settings of the Particle System}
A particle is confined in a one-dimensional finite zone ranging from $s=-1$ to $1$. Here $s$ is the position of the particle in the zone. The particle feels a potential $U(s)$ with two ``flat" wells. The potential has two possible profiles as shown in Fig. 1. The functions of the two profiles are given by
\begin{equation}
\label{S.19a}
U_0(s)=\begin{cases}\tag{S.19a}
0, &s\in[-1,0),\\
V, &s\in[0,1),
\end{cases}
\end{equation}
and
\begin{equation}
\label{S.19b}
U_1(s)=\begin{cases}\tag{S.19b}
V, &s\in[-1,0),\\
0, &s\in[0,1),
\end{cases}
\end{equation}
where the subscript of the potential profile, $0$ or $1$, represents the location of the lower well as on the left or the right half of the zone. Here $V>0$ is referred to as the potential height.
\begin{figure}[!hbp]
\centering
\includegraphics[height=10cm,width=16cm]{FIG1supp.eps}
\caption{The potential profiles given in Eq.(S.19).}
\label{FIG1}
\end{figure}
The particle is driven by a random force $\xi(t)$ resulting from the environmental thermal bath with temperature $T=1$. Consequently, the particle obeys a Langevin dynamics in which the inertia effect can be neglected,
\begin{equation}
\label{S.20}
0=-\gamma\frac{ds}{dt}-\frac{dU(s)}{ds}+\xi(t),\tag{S.20}
\end{equation}
where $-\gamma\frac{ds}{dt}$ is the viscous friction force with the (friction) coefficient $\gamma>0$; $\xi(t)$ is the random force (Gaussian white noise) characterized by the following relations, $\langle\xi(t)\rangle=0$, and $\langle\xi(t)\xi(t')\rangle=2\gamma\beta^{-1}\delta(t-t')$.
If the potential height $V$ satisfies the inequality $V\le1$, then the particle can ``jump" across the barrier and move to either of the wells. Thus, starting from an arbitrary position in the zone, the system can reach the equilibrium steady state with the stationary distribution of the position $\pi(s)= Z^{-1}\exp[-U(s)]$, where $Z=\int_{-1}^{1}\exp[-U(s)]ds=1+\exp(-V)$ is the partition function. Then the probabilities that the particle is found at either of the two wells in the equilibrium steady state are given by:
\begin{equation}
\label{S.21}
\begin{cases}\tag{S.21}
\text{probability at the lower well, }p_l=\int_{\text{location of lower well}}\pi(s)ds=\frac{1}{1+\exp(-V)},\\
\text{probability at the higher well, }p_h=\int_{\text{location of higher well}}\pi(s)ds=\frac{\exp(-V)}{1+\exp(-V)}.
\end{cases}
\end{equation}
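Eq.(S.21) can be checked by integrating the Boltzmann weight numerically. The following Python sketch uses a midpoint rule on the profile $U_0(s)$; the value $V=1$ is an illustrative choice.

```python
import math

# Numerical check of Eq.(S.21): integrate exp(-U(s)) over each half of the
# zone and compare with the closed-form p_l.
V, N = 1.0, 20000
ds = 2.0 / N
Z = p_left = 0.0
for i in range(N):
    s = -1.0 + (i + 0.5) * ds               # midpoint rule on [-1, 1]
    w = math.exp(-(0.0 if s < 0 else V))    # profile U_0(s) of Eq.(S.19a)
    Z += w * ds
    if s < 0:
        p_left += w * ds
p_l_num = p_left / Z
p_l_exact = 1.0 / (1.0 + math.exp(-V))
assert abs(p_l_num - p_l_exact) < 1e-6
```

Because $U_0$ is piecewise constant, the midpoint rule here is exact up to floating-point rounding.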
When the particle feels no potential, the relaxation time to reach the equilibrium is approximately the diffusion time $\tau_{0}=2/D$, where $D=\gamma^{-1}$ is the diffusion coefficient. Then the transition rates of the particle between the two wells are roughly proportional to the inverse of $\tau_{0}$:
\begin{equation}
\label{S.22}
\begin{cases}\tag{S.22}
\text{rate from the lower to higher well, }r_{l\to h}=v_{0}\exp(-\frac{1}{2} V),\\
\text{rate from the higher to lower well, }r_{h\to l}=v_{0}\exp(\frac{1}{2} V),
\end{cases}
\end{equation}
where $v_{0}=\tau^{-1}_{0}=D/2$. Then the average dwelling time of the particle at the lower well or the higher well is given by the inverse of the corresponding escape rate as follows,
\begin{equation}
\label{S.23}
\begin{cases}\tag{S.23}
\text{dwelling time at the lower well, }\tau_{l}=\frac{1}{r_{l\to h}},\\
\text{dwelling time at the higher well, }\tau_{h}=\frac{1}{r_{h\to l}},
\end{cases}
\end{equation}
By using the transition rates in Eq.(S.22), we can formulate a series of master equations in the form of Eq.(S.3). These equations can be constructed based on an essential master equation which characterizes the dynamics of the particle as follows,
\begin{equation}
\label{S.24}
\begin{cases}\tag{S.24}
\frac{du_l(t)}{dt}=r_{h\to l}u_h(t)-r_{l\to h}u_l(t),\\
\frac{du_h(t)}{dt}=r_{l\to h}u_l(t)-r_{h\to l}u_h(t),
\end{cases}
\end{equation}
where $u_l(t)$ and $u_h(t)$ ($u_l(t)+u_h(t)=1$) represent the probabilities that the particle is at the lower and the higher wells at time $t$ respectively. In general, given an arbitrary initial distribution, the particle finally relaxes to the equilibrium distribution, $u_l\to p_l$ and $u_h\to p_h$. The corresponding relaxation time can be estimated by,
\begin{equation}
\label{S.25}
\tau_{sys}=\frac{\tau_0}{2\cosh(\frac{1}{2} V)}=\frac{1}{r_{l\to h}+r_{h\to l}}<\tau_{h}<\tau_{l}.\tag{S.25}
\end{equation}
We can see from Eq.(S.25) that the relaxation time $\tau_{sys}$ is less than the average dwelling time at each well. Thus, it is possible to control the particle with a fast operation rate by switching the potential profiles before the particle jumps between the wells, and then the particle can go to the equilibrium steady state in a finite time after the control\cite{6}.
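The timescale ordering $\tau_{sys}<\tau_h<\tau_l$ of Eq.(S.25) can be verified directly from the rates in Eq.(S.22), taking each dwelling time as the inverse of the escape rate from that well. The Python sketch below uses illustrative parameters $V=1$, $v_0=0.5$.

```python
import math

# Check of the timescale ordering in Eq.(S.25): tau_sys < tau_h < tau_l.
V, v0 = 1.0, 0.5
r_lh = v0 * math.exp(-V / 2)   # escape rate from the lower well
r_hl = v0 * math.exp(+V / 2)   # escape rate from the higher well

tau_l = 1.0 / r_lh             # dwelling time at the lower well
tau_h = 1.0 / r_hl             # dwelling time at the higher well
tau_sys = 1.0 / (r_lh + r_hl)  # relaxation time

assert tau_sys < tau_h < tau_l
# tau_sys equals tau_0 / (2 cosh(V/2)) with tau_0 = 1 / v0
assert abs(tau_sys - (1.0 / v0) / (2.0 * math.cosh(V / 2))) < 1e-12
```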
\begin{figure}[!hbp]
\centering
\includegraphics[height=10cm,width=16cm]{FIG2supp.eps}
\caption{The confined particle works as a demon or is controlled by a demon. In (a) and (b), the particle is used as a demon and measures the system state before the control. The system state, denoted by $y$, is represented by the location of the lower well ($0$ or $1$) in the potential. In (a), $y=1$; and in (b), $y=0$. The final state of the particle, denoted by $x_t$, is taken as the observation of $y$. In (c) and (d), the particle is controlled by a demon. The initial state of the particle, denoted by $x_0$, is represented by $h$ or $l$ when the particle is at the higher or the lower well. When spotting the state $x_0=h$, the demon reverses the potential and extracts a positive amount of work from the particle system.}
\label{FIG4}
\end{figure}
\subsection{Information Measurement}
If the particle works as a demon (shown in Fig.2. (a) and (b)), the particle is supposed to measure the state of the controlled system first. The state of the particle is denoted by $x=0$ or $1$ when it is at the left or the right well respectively. The system state is represented by the location of the lower well, with the value $y=0$ or $1$ with equal probability $p(y=0)=p(y=1)=1/2$. The particle is initially in equilibrium until the system state changes. Correspondingly, the potential profile is reversed by the system immediately and the particle starts measuring the current system state. When the equilibrium is reached, the final state $x_t$ of the particle is taken as an observation of $y$. The probability of a measurement error is given by the probability of finding the particle at the higher well, $p_{X_t|Y}(x_t\neq y|y)=p_h$ (given by Eq.(S.21)). On the other hand, the measurement precision is characterized by the probability of finding the particle at the lower well, $p_{X_t|Y}(x_t= y|y)=p_l$.
Following the description of the information measurement, we rewrite the potential profiles in Eq.(S.19) as follows,
\begin{equation}
\label{S.26a}
U(x,0)=\begin{cases}\tag{S.26a}
0, &x=0,\\
V, &x=1,
\end{cases}
\end{equation}
and
\begin{equation}
\label{S.26b}
U(x,1)=\begin{cases}\tag{S.26b}
V, &x=0,\\
0, &x=1,
\end{cases}
\end{equation}
According to the Markovian dynamics given in Eq.(S.24), we can evaluate the probabilities of the particle trajectories by using Eq.(S.6), given the initial potential profile as $U(x,y')$ ($y'=0,1$) in Eq.(S.26). Since the potential profile change does not alter the position of the particle in this case, the particle state $x_c$ right after the control is equal to the initial state $x_0$, i.e., $x_c=x_0$. The probabilities of the trajectories $x(t)=\{x_c,x_t\}$ and the time-reversal trajectories $\widetilde{x}(t)=\{x_t,x_c\}$ are given in Table I and Table II respectively.
On the other hand, the value of $y'$ (in the expression of the initial profile $U(x,y')$) can be taken as the system state in the previous control (measurement) cycle when the control is performed cyclically, where the probabilities of $y'$ are given by $p(y'=0)=p(y'=1)=1/2$. Thus, the final (equilibrium) distribution in the previous cycle, represented by $p_{X_t|Y'}$, is exactly the initial distribution in the current cycle. Then we can rewrite the probability of the initial state $p_{X_0}(x_0)$ as $p_{X_0|Y'}(x_0|y')$. Consequently, the probability of the state $x_c=x_0$ right after the control is given by $p_{X_0|Y'}(x_0|y')$. With this in mind, the averaged measurement heat (see Eq.(S.11)) in this case is given by
\begin{equation}
\label{S.27}
\langle Q_{mea}\rangle=\langle H\rangle_c-\langle H\rangle_t=(1/2-p_h)V.\tag{S.27}
\end{equation}
Here, the total energy right after the control is given by $\langle H\rangle_c=\sum_{y',y,x}p(y')p(y)p_{X_0|Y'}(x|y')U(x,y)=V/2$; and the final total energy is given by $\langle H\rangle_t=\sum_{y,x}p(y)p_{X_t|Y}(x|y)U(x,y)=p_hV$.
The entropy change (see Eq.(S.13)) can be given by
\begin{equation}
\label{S.28}
\langle \Delta s_{X|Y}\rangle=S_{X_t|Y}-S_{X_c|Y}=-I_t.\tag{S.28}
\end{equation}
Here the (conditional) entropy right after the control is given by $S_{X_c|Y}=S=-p_l\log p_l-p_h\log p_h$; and the final entropy is given by $S_{X_t|Y}=S-I_t$, where $I_t=\log 2-S\ge0$ is the final mutual information which measures the correlation between the observation $x_t$ and the state $y$.
The dissipative information can be evaluated by inserting the probabilities shown in Table I and Table II into Eq.(S.17). Due to the symmetry of the potential profile in Eq.(S.26), we have the dissipative information as follows,
\begin{equation}
\label{S.29}
\langle\sigma_{I}\rangle=\log\sqrt{2p_l^2+2p_h^2}. \tag{S.29}
\end{equation}
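Eqs.(S.27-S.29) give closed forms that can be checked against the tightened lower bound on the measurement heat shown in Fig. 4(a), $\langle Q_{mea}\rangle\ge I_t+\langle\sigma_I\rangle$. The Python sketch below evaluates these expressions at the illustrative value $V=1$.

```python
import math

# Check that the measurement heat of Eq.(S.27) satisfies
# <Q_mea> >= I_t + <sigma_I> >= I_t (cf. Fig. 4(a)), for V = 1.
V = 1.0
p_l = 1.0 / (1.0 + math.exp(-V))
p_h = 1.0 - p_l

Q_mea = (0.5 - p_h) * V                                  # Eq.(S.27)
S = -p_l * math.log(p_l) - p_h * math.log(p_h)
I_t = math.log(2.0) - S                                  # final mutual information
sigma_I = math.log(math.sqrt(2 * p_l**2 + 2 * p_h**2))   # Eq.(S.29)

assert Q_mea >= I_t + sigma_I >= I_t
```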
\begin{table}[!ht]
\caption{The probabilities of the particle trajectories in the information measurement, given the initial potential profile as $U(x,0)$}
\begin{ruledtabular}
\begin{tabular}{lllllll}
$x(t)$ & $p[x(t)|y=0]$ & $p[x(t)|y=1]$ & $p[x(t)]$& $p[\widetilde{x}(t)|y=0]$ & $p[\widetilde{x}(t)|y=1]$ &$p[\widetilde{x}(t)]$\\
\hline
$\{0,0\}$ & $p_l^2$ & $p_lp_h$ & $\frac{1}{2}p_l$ & $p_l^2$ &$p_h^2$ & $\frac{1}{2}(p_l^2+p_h^2)$ \\
\hline
$\{0,1\}$ & $p_lp_h$ & $p_l^2$ & $\frac{1}{2}p_l$ & $p_lp_h$ &$p_lp_h$ & $p_lp_h$ \\
\hline
$\{1,0\}$ & $p_lp_h$ & $p_h^2$ & $\frac{1}{2}p_h$ & $p_lp_h$ &$p_lp_h$ & $p_lp_h$ \\
\hline
$\{1,1\}$ & $p_h^2$ & $p_lp_h$ & $\frac{1}{2}p_h$ & $p_h^2$ &$p_l^2$ & $\frac{1}{2}(p_l^2+p_h^2)$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[!ht]
\caption{The probabilities of the particle trajectories in the information measurement, given the initial potential profile as $U(x,1)$}
\begin{ruledtabular}
\begin{tabular}{lllllll}
$x(t)$ & $p[x(t)|y=0]$ & $p[x(t)|y=1]$ & $p[x(t)]$& $p[\widetilde{x}(t)|y=0]$ & $p[\widetilde{x}(t)|y=1]$ &$p[\widetilde{x}(t)]$\\
\hline
$\{0,0\}$ & $p_lp_h$ & $p_h^2$ & $\frac{1}{2}p_h$ & $p_l^2$ &$p_h^2$ & $\frac{1}{2}(p_l^2+p_h^2)$ \\
\hline
$\{0,1\}$ & $p_h^2$ & $p_lp_h$ & $\frac{1}{2}p_h$ & $p_lp_h$ &$p_lp_h$ & $p_lp_h$ \\
\hline
$\{1,0\}$ & $p_l^2$ & $p_lp_h$ & $\frac{1}{2}p_l$ & $p_lp_h$ &$p_lp_h$ & $p_lp_h$ \\
\hline
$\{1,1\}$ & $p_lp_h$ & $p_l^2$ & $\frac{1}{2}p_l$ & $p_h^2$ &$p_l^2$ & $\frac{1}{2}(p_l^2+p_h^2)$
\end{tabular}
\end{ruledtabular}
\end{table}
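The entries of Table I can be used to verify both the fluctuation theorem $\langle e^{-\sigma_I}\rangle=1$ and the average in Eq.(S.29) by direct enumeration. The following Python sketch performs this check at the illustrative value $V=1$, with $p(y=0)=p(y=1)=1/2$.

```python
import math

# Enumerate the trajectory probabilities of Table I (initial profile U(x,0))
# and verify <exp(-sigma_I)> = 1 and <sigma_I> = log sqrt(2 p_l^2 + 2 p_h^2).
V = 1.0
a = 1.0 / (1.0 + math.exp(-V))   # p_l
b = 1.0 - a                      # p_h

# rows: trajectory -> (p[x|y=0], p[x|y=1], p[x], p~[x|y=0], p~[x|y=1], p~[x])
table = {
    (0, 0): (a * a, a * b, a / 2, a * a, b * b, (a * a + b * b) / 2),
    (0, 1): (a * b, a * a, a / 2, a * b, a * b, a * b),
    (1, 0): (a * b, b * b, b / 2, a * b, a * b, a * b),
    (1, 1): (b * b, a * b, b / 2, b * b, a * a, (a * a + b * b) / 2),
}

avg_exp, avg_sigma = 0.0, 0.0
for pf0, pf1, pf, pb0, pb1, pb in table.values():
    for p_y, pf_y, pb_y in ((0.5, pf0, pb0), (0.5, pf1, pb1)):
        sigma = math.log(pf_y / pf) - math.log(pb_y / pb)   # Eq.(1)
        avg_exp += p_y * pf_y * math.exp(-sigma)
        avg_sigma += p_y * pf_y * sigma

assert abs(avg_exp - 1.0) < 1e-12
assert abs(avg_sigma - math.log(math.sqrt(2 * a * a + 2 * b * b))) < 1e-12
```

By the symmetry of the potential profiles, Table II gives the same averages.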
\subsection{Work Extraction}
We use a demon to extract positive work from the particle system (shown in Fig.2. (c) and (d)). In this case, the state of the particle is denoted by $x=l$ or $h$ when the particle is at the lower or the higher well respectively. Initially, the particle is in equilibrium. The demon first measures the state of the particle and obtains the observation $y$, and then performs feedback control according to $y$. When the particle is observed to be at the higher well, the demon reverses the potential immediately and extracts an amount of work $W_{ext}=V$. After the control, the demon does nothing until the particle reaches equilibrium again. From a practical standpoint, the demon's measurement can have a random error, and this error certainly lowers the efficiency of the work extraction. Here we simply assume that the measurement error occurs with stable probability $p_{Y|X_0}(y\neq x_0|x_0)=\epsilon$.
Following the description of the work extraction, we rewrite the potential profiles in Eq.(S.19) as follows,
\begin{equation}
\label{S.30}
U(x)=\begin{cases}\tag{S.30}
0, &x=l,\\
V, &x=h.
\end{cases}
\end{equation}
We can see from Eq.(S.30) that the potential profile in the work extraction no longer depends on the demon state $y$. Thus, the demon does not change the Hamiltonian in this case. The particle state $x_c$ right after the control can be written as a function of the initial state $x_0$ and the demon's observation $y$ as follows,
\begin{equation}
\label{S.31}
x_c=\begin{cases}\tag{S.31}
l, &x_0=l,y=l,\\
l, &x_0=h,y=h,\\
h, &x_0=l,y=h,\\
h, &x_0=h,y=l.
\end{cases}
\end{equation}
We can see from Eq.(S.31) that the state $x_c=h$ occurs only if a measurement error $y\neq x_0$ occurs. Thus, the probability of $x_c=h$ is equal to the error probability, $p_{X_c}(x_c=h)=\epsilon$; and the probability of $x_c=l$ is equal to the precision probability, $p_{X_c}(x_c=l)=1-\epsilon$. The final (equilibrium) distribution of the state only depends on the potential profile in Eq.(S.30) and is independent of the demon's observation $y$.
According to Eq.(S.31) and the relationships between the probabilities given in Eq.(S.2), we can evaluate the averaged work by using Eq.(S.9) as follows,
\begin{equation}
\label{S.32}
\langle W_{ext}\rangle=\langle H\rangle_0-\langle H\rangle_c=(p_h-\epsilon)V.\tag{S.32}
\end{equation}
Here, the initial total energy is given by $\langle H\rangle_0=\sum_xp_{X_0}(x)U(x)=p_hV$; and the total energy right after the control is given by $\langle H\rangle_c=\sum_{x,y}p(y)p_{X_c}(x|y)U(x)=\sum_{x}p_{X_c}(x)U(x)=\epsilon V$.
The effective free energy difference can be given by Eq.(S.15)
\begin{equation}
\label{S.33}
\Delta F=\langle \Delta H\rangle- \langle \Delta s_{X|Y}\rangle=-I_0.\tag{S.33}
\end{equation}
Here the total energy change $\langle \Delta H\rangle=0$ because the demon does not change the Hamiltonian; the entropy change $\langle \Delta s_{X|Y}\rangle=-I_0$. Here the mutual information $I_0=S_Y-S_\epsilon\ge0$ represents the initial correlation between the demon and the particle, where the Shannon entropies $S_Y=-p_y\log p_y-(1-p_y)\log(1- p_y)$ and $S_\epsilon=-\epsilon\log \epsilon-(1-\epsilon)\log (1-\epsilon)$, with $p_y=p_l(1-\epsilon)+p_h\epsilon$ representing the probability of the observation $y=l$.
The dissipative information in this case can be given by Eq.(S.18) as $\langle \sigma_{I}\rangle=I_c$, where the mutual information $I_c$ measures the remaining correlation between the system and the demon right after the control.
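As a quick numerical illustration of Eqs.(S.32) and (S.33) (a sketch, not part of the original derivation; the parameter values are arbitrary):

```python
import math


def binary_entropy(p):
    """Shannon entropy -p*log(p) - (1-p)*log(1-p) in nats."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)


def demon_quantities(p_h, eps, V=1.0):
    """Average extracted work <W_ext> = (p_h - eps) V, Eq.(S.32), and the
    initial demon-particle mutual information I_0 = S_Y - S_eps, Eq.(S.33)."""
    p_l = 1.0 - p_h
    w_ext = (p_h - eps) * V
    p_y = p_l * (1.0 - eps) + p_h * eps   # probability of observation y = l
    i_0 = binary_entropy(p_y) - binary_entropy(eps)
    return w_ext, i_0


w_ext, i_0 = demon_quantities(p_h=0.3, eps=0.1)
```

For $p_h=0.3$ and $\epsilon=0.1$ this gives $\langle W_{ext}\rangle = 0.2\,V$ with $I_0>0$, while a completely noisy measurement ($\epsilon=0.5$) carries no information, $I_0=0$.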
A novel spacecraft trajectory design method using a hybrid low-thrust system is proposed in this paper. The hybrid system consists of a solar sail propulsion thruster and a solar electric propulsion (SEP) thruster. In the proposed method, the former provides radial and circumferential thrust to form a virtual gravity, while the latter provides a tangential thrust. In this way, the spacecraft effectively moves under a constant tangential thrust in a virtual gravity field. Using the proposed method, the thrusting trajectory can be parameterized, and a large number of feasible trajectories for the circle-to-circle rendezvous problem can be obtained. Finally, the steering law that minimizes the fuel cost is found using the MATLAB optimization function fmincon, and the result is compared with the traditional pure solar electric propulsion method in terms of payload mass fraction. The simulation results show that the proposed method can reduce propellant consumption significantly compared with the pure SEP system.
Prayer, like breathing, is not only natural but also necessary for our new life in Christ. I think that we can all agree that it's a bad thing to stop breathing. Breathing is so important that it is an automatic function. We don't have to remind ourselves to breathe. About the only time we even think about it is when we've become short of breath through exertion.
Recently Betty and I were hiking up the side of Mt. Rainier in Washington with some family. The elevation was high and the going difficult. Not only was the trail steep but it was covered with a thick layer of packed snow. During that hike, John, an experienced mountain hiker, showed me how to change my breathing to adapt to the higher elevation. The same can be said of prayer: we may need to adjust our prayer to changing elevations.
Here's my suggestion. Often our prayer life is driven by what is wrong, troubling, confusing, or hurting in our lives. Like pain that drives us to the medicine cabinet or the doctor, the struggles of life and the pain of our souls drive us to prayer. And there is nothing wrong with that. Jesus invites us to throw our cares and anxieties upon Him (1 Peter 5:7). However, if problems and pains are the only reason we pray then we will never walk the higher elevations with Jesus. Paul instructed the Thessalonian church to "Rejoice always; pray without ceasing; in everything give thanks; for this is God's will for you in Christ Jesus." (1 Thessalonians 5:16–18, NASB95) Our prayers are to be continuous and wrapped in joy and thanksgiving. To walk the higher places we need to look up, rejoicing with thanksgiving instead of looking down at our problems and pains.
Rejoice! It's not about feeling happy. To rejoice means to look beyond to something better. Jesus preached, "Blessed are you when people insult you and persecute you, and falsely say all kinds of evil against you because of Me. "Rejoice and be glad, for your reward in heaven is great; for in the same way they persecuted the prophets who were before you." (Matthew 5:11–12, NASB95) I don't think that any of us would enjoy being bullied, mocked, beaten or worse because of our faith but Jesus said to rejoice, look farther ahead. We are to rejoice that our names are written in heaven (Luke 10:20). Rejoice because of the harvest no matter your role in it (John 4:36). Rejoice in hope (Romans 12:12). Rejoice in truth (1 Corinthians 13:6). Paul said it best, "Rejoice in the Lord always; again I will say, rejoice!" (Philippians 4:4, NASB95) No matter the pains or the victories – Rejoice!
Ever consider the word "thanksgiving"? Giving thanks is not an act of obligation, it's one of grace. Thanks should be given freely, not out of compulsion or habit. The giving of thanks is vitally important because it recognizes relationship and the freely granted service of another. It changes our view from something owed to something given. If we ask "please pass the salt" and the other person does pass the salt, it is not because they owed us something. The salt passer did so freely and not under compulsion or debt. It is for that reason that we say "thank you". Keeping our prayers wrapped in thanksgiving recognizes God's unwarranted grace towards us. Thanking Jesus in advance, even before we see the results of our prayer, keeps us from treating God like a bubble-gum machine. (As in, I dropped the quarter in, I prayed, so you owe me something).
As I panted for breath on the side of Mt. Rainier all I could focus on was the next step and the next breath. Changing my breathing as John suggested allowed me to look up and see the beauty and majesty of God that surrounded us in that place. It did not change the difficulty of the trail but it did allow me to enjoy the journey instead of just suffering through it. By wrapping our prayers in joy and thanksgiving our focus changes. We begin to see God's hands in places that we never expected. Our eyes are opened to the majesty and beauty of God's creation. Instead of focusing on the faults of others or ourselves we begin to see our world through the eyes of hope. Offering thanks to God through Jesus allows us to lift our heads and focus on Him and not on the problems and pains of life.
Q: Oracle PL/SQL speed of NVL/LENGTH/TRIM calls versus IS NOT NULL AND != ' '

I'm trying to find the best way to check whether a CHAR/VARCHAR2 variable contains characters (NULL or spaces should be considered the same, as "no-value"):
I know there are several solutions, but it appears that (NVL(LENGTH(TRIM(v)),0) > 0) is faster than (v IS NOT NULL AND v != ' ')
Any idea why? Or did I do something wrong in my test code?
Tested with Oracle 18c on Linux, UTF-8 db charset ...
I get the following results:
time:+000000000 00:00:03.582731000
time:+000000000 00:00:02.494980000
set serveroutput on;
create or replace procedure test1
is
ts timestamp(3);
x integer;
y integer;
v char(500);
--v varchar2(500);
begin
ts := systimestamp;
--v := null;
v := 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
for x in 1..50000000
loop
if v is not null and v != ' ' then
y := x;
end if;
end loop;
dbms_output.put_line('time:' || (systimestamp - ts) ) ;
end;
/
create or replace procedure test2
is
ts timestamp(3);
x integer;
y integer;
v char(500);
--v varchar2(500);
begin
ts := systimestamp;
--v := null;
v := 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';
for x in 1..50000000
loop
if nvl(length(trim(v)),0) > 0 then
y := x;
end if;
end loop;
dbms_output.put_line('time:' || (systimestamp - ts) ) ;
end;
/
begin
test1();
test2();
end;
/
drop procedure test1;
drop procedure test2;
quit;
A: The best practice is to ignore the speed difference between small functions and use whatever is easiest.
In realistic database programming, the time to run functions like NVL or IS NOT NULL is completely irrelevant compared to the time needed to read data from disk or the time needed to join data. If one function saves 1 second per 50 million rows, nobody will notice, whereas if a SQL statement reads 50 million rows with a full table scan instead of using an index, or vice versa, that could completely break an application.
It's unusual to care about these kinds of problems in a database. (But not impossible - if you have a specific use case, then please add it to the question.) If you really need optimal procedural code you may want to look into writing an external procedure in Java or C.
Free and open-source automated 3-D microscope

Project data: Authors Wijnen B., Petersen E. E., Hunt E. J., Joshua M. Pearce (MOST; Michigan, USA). Source: https://onlinelibrary.wiley.com/doi/abs/10.1111/jmi.12433. Design files: https://osf.io/v2pwa/. License: CERN-OHL-S.

Open-source technology not only has facilitated the expansion of the greater research community, but by lowering costs it has encouraged innovation and customizable design. The field of automated microscopy has continued to be a challenge in accessibility due to the expense and inflexible, noninterchangeable stages. This paper presents a low-cost, open-source microscope 3-D stage. A RepRap 3-D printer was converted to an optical microscope equipped with a customized, 3-D printed holder for a USB microscope. Precision measurements were determined to have an average error of 10 μm at the maximum speed and 27 μm at the minimum recorded speed. Accuracy tests yielded an error of 0.15%. The machine is a true 3-D stage and thus able to operate with USB microscopes or conventional desktop microscopes. It is larger than all commercial alternatives, and is thus capable of high-depth images over unprecedented areas and complex geometries. The repeatability is below that of 2-D microscope stages, but testing shows that it is adequate for the majority of scientific applications. The open-source microscope stage costs less than 3–9% of the closest proprietary commercial stages. This extreme affordability vastly improves accessibility for 3-D microscopy throughout the world.

## Code for quick picture using Franklin

By Bas

    #!/usr/bin/python3
    import websocketd
    p = websocketd.RPC('host.with.franklin:8000', tls = False)
    for y in range(10):
        for x in range(10):
            p.line_cb((x, y))
            # Insert code for taking a picture here.
\section{Proof Details}
\begin{lemma}[Azuma-Hoeffding]\label{lemma:Azuma-Hoeffdings}
Let $M_t$ be a martingale on a filtration $\fF_t$ with almost surely bounded increments $|M_t - M_{t-1}| < B$. Then
\begin{align*}
\PP[M_T - M_0 > s] \leq \exp\left(-\frac{s^2}{2TB^2}\right)
\end{align*}
\end{lemma}
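As a sanity check of the lemma (illustrative only; the walk and parameters are invented for the test), a symmetric $\pm B$ random walk is a martingale with increments bounded by $B$, so its upper-tail probability must lie below the Azuma-Hoeffding bound:

```python
import math
import random


def azuma_violation_rate(T=100, B=1.0, s=15.0, trials=20000, seed=0):
    """Empirical estimate of P[M_T - M_0 > s] for a symmetric random walk
    with increments in {-B, +B}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m_T = sum(rng.choice((-B, B)) for _ in range(T))
        if m_T > s:
            hits += 1
    return hits / trials


empirical = azuma_violation_rate()
bound = math.exp(-15.0 ** 2 / (2 * 100 * 1.0 ** 2))  # exp(-s^2 / (2 T B^2))
```

For these parameters the empirical rate (about 0.06) sits well below the bound (about 0.32), as expected since the bound is not tight for this walk.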
\subsection{Proof of Lemma \ref{lemma: regret in terms of ucb}} \label{app: proof lemma}
To bound the regret of Algorithm \ref{alg: distribution ucb linear} with sampled features $\tilde{\psi}_{x,\mu_t}$, we add and subtract $(\tilde{\psi}_{x_t^*,\mu_t} + \bar{\psi}_{x_t,\mu_t} + \tilde{\psi}_{x_t,\mu_t})^\T \theta$ to the regret to find
\begin{align*}
\rR_T &= \rR_T^{UCB} + \sum_{t=1}^T(\phi_{x_t^*,c_t} - \tilde{\psi}_{x_t^*,\mu_t} + \bar{\psi}_{x_t,\mu_t} - \phi_{x_t,c_t})^\T \theta + \sum_{t=1}^T (\tilde{\psi}_{x_t,\mu_t} - \bar{\psi}_{x_t,\mu_t})^\T \theta\\
&\leq \rR_T^{UCB} + 4\sqrt{ 2 T \log \frac{1}{\delta}} + \sum_{t=1}^T (\tilde{\psi}_{x_t,\mu_t} - \bar{\psi}_{x_t,\mu_t})^\T \theta \text{ .}
\end{align*}
By the same reasoning as for the expected features, we bound
\begin{align} \label{eq: concentrated features 2}
\sum_{t=1}^T(\phi_{x_t^*,c_t} - \tilde{\psi}_{x_t^*,\mu_t} + \bar{\psi}_{x_t,\mu_t} - \phi_{x_t,c_t})^\T \theta \leq 4\sqrt{ 2 T \log \frac{1}{\delta}}
\end{align}
using Azuma-Hoeffding's inequality. We are left with a sum $\sum_{t=1}^T (\tilde{\psi}_{x_t,\mu_t} - \bar{\psi}_{x_t,\mu_t})^\T \theta$ that is more intricate to bound because $x_t$ depends on the samples $\tilde{c}_{t,l}$ that define $\tilde{\psi}_{x_t,\mu_t}$. We exploit that $\tilde{\psi}_{x_t,\mu_t} - \bar{\psi}_{x_t,\mu_t} \rightarrow 0$ as $L \rightarrow \infty$. First consider a fixed $x \in \xX$. Then, by $(\tilde{\psi}_{x,\mu_t} - \bar{\psi}_{x,\mu_t})^\T \theta \leq 2$ and Azuma-Hoeffding's inequality, with probability at least $1-\delta$,
\begin{align}\label{eq: concentrated features}
(\tilde{\psi}_{x,\mu_t} - \bar{\psi}_{x,\mu_t})^\T \theta \leq \sqrt{\frac{8}{L} \log \frac{1}{\delta}} \text{ .}
\end{align}
We enforce this to hold for any $x \in \xX$ and any time $t \in \NN$ by replacing $\delta$ by $\frac{6 \delta}{|\xX|\pi^2 t^2}$ and taking the union bound over the event where \eqref{eq: concentrated features} holds. By our choice $L=t$ and $\sum_{t=1}^T \frac{1}{\sqrt{t}} \leq 2 \sqrt{T}$, we have with probability at least $1-\delta$, for any $T \in \NN$,
\begin{align} \label{eq: concentrated features 3}
\sum_{t=1}^T (\tilde{\psi}_{x_t,\mu_t} - \bar{\psi}_{x_t,\mu_t})^\T \theta \leq 4\sqrt{T \log \frac{|\xX|\pi^2 T^2}{6 \delta}} \text{ .}
\end{align}
A final application of the union bound over the events such that \eqref{eq: concentrated features 2} and \eqref{eq: concentrated features 3} simultaneously hold, gives
\begin{align*}
\rR_T \leq \rR_T^{UCB} + 4\sqrt { 2 T \log \frac{|\xX|\pi T}{3 \delta}} \text{ .}
\end{align*}
\subsection{Proof of Theorem \ref{thm: main regret bound}} \label{app: proof theorem}
To bound the regret term $\rR_T^{UCB}$ for the case where we use sample-based feature vectors, the main task is to show a high-probability bound on $\|\theta - \hat{\theta}_t\|_{V_t}$ with observations $y_t = \tilde{\psi}_{x_t,\mu_t}^\T\theta + \xi_t + \epsilon_t$. Recall that $\xi_t = (\phi_{x_t,c_t}- \tilde{\psi}_{x_t, \mu_t}) ^\T \theta$, but now we have in general $\EE[\xi_t|\fF_{t-1},\mu_t,x_t] \neq 0$, because $x_t$ depends on the sampled features $\tilde{ \psi}_{x,\mu_t}$. The following lemma bounds the estimation error of the least-squares estimator in the case where the noise term contains an (uncontrolled) bias term.
\begin{lemma}
Let $\hat{\theta}_t$ be the least-squares estimator defined for any sequence $\{(\phi_t, y_t)\}_{t}$ with observations $y_t = \phi_t^\T\theta + b_t + \epsilon_t$, where $\epsilon_t$ is $\rho$-subgaussian noise and $b_t$ is an arbitrary bias. The following bound holds with probability at least $1-\delta$, at any time $t \in \NN$,
\begin{align*}
\quad \|\theta - \hat{\theta}_t\|_{V_t} \leq \beta_t + \sqrt{\textstyle \sum_{t=1}^T b_t^2}\qquad \text{where } \beta_t = \beta_t(\rho, \delta) = \rho \sqrt{2\log \left(\frac{\det (V_t)^{1/2}}{\delta\det (V_0)^{1/2}} \right)} + \lambda^{1/2} \|\theta\|_2\text{ .}
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma]
Basic linear algebra and the triangle inequality show that
\begin{align}\label{eq: least-squares estimator split}
\|\theta - \hat{\theta}_t\|_{V_t} \leq \|\sum_{s=1}^t \phi_s \epsilon_s\|_{V_t^{-1}} + \|\sum_{s=1}^t \phi_s b_s \|_{V_t^{-1}} + \lambda^{1/2} \|\theta\|_2
\end{align}
Recall that we assume $\|\theta\|_2 \leq 1$ in order to bound the last term. The noise process can be controlled with standard results \cite[Theorem 1]{abbasi2011improved}, specifically $ \|\sum_{s=1}^t \phi_s \epsilon_s \|_{V_t^{-1}} \leq \beta_t$. Finally, to bound the sum over the biases, set $b = [b_1, \dots, b_t]$ and $A = [\phi_1, \dots, \phi_t] \in \RR^{d\times t}$. The matrix inequality
\begin{align}
A^\T(AA^\T + \lambda \mathbf{I}_d)^{-1}A \leq \mathbf{I}_t
\end{align}
follows from the singular value decomposition of $A$. This implies
\begin{align*}
\|\sum_{s=1}^t \phi_s b_s \|_{V_t^{-1}}^2 = \|Ab\|_{(AA^\T + \lambda \mathbf{I}_d)^{-1}}^2 \leq \|b\|_2^2
\end{align*}
Applying the individual bounds to \eqref{eq: least-squares estimator split} completes the proof of the lemma.
\end{proof}
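The inequality $\|Ab\|^2_{(AA^\T+\lambda \mathbf{I})^{-1}} \leq \|b\|_2^2$ used in the last step can be verified numerically on random data (an illustrative sketch; dimensions are arbitrary):

```python
import numpy as np


def bias_norm_bound(d=4, t=30, lam=0.5, seed=0):
    """Return ||A b||^2 in the (A A^T + lam I)^{-1} norm and ||b||^2 for a
    random feature matrix A (columns phi_s) and bias vector b."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, t))
    b = rng.normal(size=t)
    V = A @ A.T + lam * np.eye(d)
    lhs = (A @ b) @ np.linalg.solve(V, A @ b)
    return float(lhs), float(b @ b)


lhs, rhs = bias_norm_bound()
```

For any $\lambda > 0$ the left-hand side is strictly below $\|b\|_2^2$, matching the SVD argument above.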
We continue the proof of the theorem with the intuition to use the lemma with $b_t = (\bar{\psi}_{x,\mu_t}-\tilde{ \psi}_{x_t, \mu_t})^\T \theta$. To control the sum over bias terms $b_t$, note that by our choice $L=t$, similar to \eqref{eq: concentrated features 3}, with probability at least $1-\delta$, for all $t\in \NN$ and $x \in \xX$,
\begin{align*}
|b_t| \leq \sqrt{\frac{8}{t} \log \frac{\pi^2 t^2 |\xX|}{6 \delta}} \text{ .}
\end{align*}
Hence with $\sum_{s=1}^t \frac{1}{s}\leq \log(t)$, we get
\begin{align*}
\sum_{t=1}^T b_t^2 \leq 8\log(T)\log \left( \frac{\pi^2 T^2|\xX|}{6 \delta}\right) \text{ .} \label{eq:noise process 2}
\end{align*}
As before, the remaining terms are zero-mean, $\EE[(\phi_{x_t,c_t}- \bar{\psi}_{x_t, \mu_t})^\T \theta + \epsilon_t|\fF_{t-1},\mu_t, x_t] = 0$, and $\sqrt{4+ \sigma^2}$-subgaussian. Hence, with the previous lemma and another application of the union bound we get with probability at least $1-\delta$,
\begin{align}
\|\theta - \hat{\theta}_t\|_{V_t} &\leq \beta_t + \sqrt{\textstyle \sum_{t=1}^T b_t^2 } \nonumber\\
&\leq \sqrt{ 2 (4 + \sigma^2) \log \left(\frac{2\det (V_t)^{1/2}}{\delta\det (V_0)^{1/2}} \right)}+ \sqrt{ 8\log(T)\log \left( \frac{\pi^2 T^2 |\xX|}{3 \delta}\right)} + \lambda^{1/2} \text{ ,}
\end{align}
where we used $\|\theta\|_2\leq 1$ in the last step. Finally, we invoke Lemma \ref{thm:regret ucb standard} with $\beta_t = \tilde{\beta}_t$, where
\begin{align}\label{eq:tilde beta}
\tilde{\beta}_t := \sqrt{ 2 (4 + \sigma^2) \log \left(\frac{2 \det (V_t)^{1/2}}{\delta\det (V_0)^{1/2}} \right)} + \sqrt{ 8\log(T)\log \left( \frac{\pi^2 T^2 |\xX|}{3 \delta}\right)} + \lambda^{1/2}
\end{align}
to obtain a bound on $\rR_T^{UCB}$,
\begin{align}
\rR_T^{UCB} \leq \tilde{\beta}_T \sqrt{8T \log \left(\frac{\det V_T}{\det V_0}\right)} \text{ .}
\end{align}
This concludes the proof.
\newpage
\section{UCB with Context Distributions and Observed Context}\label{app: algorithm observed context}
\begin{algorithm}[h]
\begin{minipage}{\textwidth}
Initialize $\hat{\theta} = 0 \in \RR^d$, $V_0 = \lambda \mathbf{I} \in \RR^{d\times d}$\\
\textbf{For} step $t=1,2,\dots, T$:
\begin{addmargin}[2em]{0pt}
\textit{Environment} chooses $\mu_t \in \pP(\cC)$ \hfill \textit{// context distribution}\\
\textit{Learner} observes $\mu_t$\\
Set $\Psi_t = \{\psi_{x,\mu_t} : x \in \xX \}$ with $\bar{\psi}_{x,\mu_t}= \EE_{\mu_t}[\phi_{x,c}]$ \hfill \textit{// expected version}\vspace{2pt}\\
Alternatively, sample $c_{t,1}, \dots, c_{t,L}$ for $L=t$, \hfill \textit{// or sampled version}\\
Set $\Psi_t =\{\tilde{\psi}_{x,\mu_t} : x \in \xX \}$ with $\tilde{\psi}_{x,\mu_t} := \frac{1}{L} \sum_{i=1}^L \phi_{x,\tilde{c}_i}$\\
Run UCB step with $\Psi_t$ as context set \hfill \textit{// reduction}\\
Choose action $x_t = \argmax_{x \in \xX} \psi_{x,\mu_t}^\T\hat{\theta}_{t-1} + \beta_t(\sigma)\|\psi_{x,\mu_t}\|_{V_{t-1}^{-1}} $ \hfill \textit{// UCB action}\\
\textit{Environment} samples $c_t \sim \mu_t$\\
\textit{Learner} observes $y_t = \phi_{x_t, c_t}^\T \theta + \epsilon_t$ and $c_t$ \hfill \textit{// reward and context observation}\\
Update $V_{t} = V_{t-1} + \phi_{x_t, c_t} \phi_{x_t, c_t}^\T$, $\hat{\theta}_{t} = V_{t}^{-1}\sum_{s=1}^{t} \phi_{x_s, c_s} y_s$ \hfill \textit{// least-squares update}\\
\end{addmargin}
\end{minipage}
\caption{UCB for linear stochastic bandits with context distributions and observed context}\label{alg: distribution ucb linear with context observation}
\end{algorithm}
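For concreteness, the algorithm above can be sketched on a toy problem (expected-features variant; the features, context distributions, and the fixed $\beta$ are invented for illustration rather than set from the confidence bounds):

```python
import numpy as np


def run_ucb_context_dist(T=300, d=3, n_actions=5, n_ctx=4, sigma=0.1,
                         lam=1.0, beta=2.0, seed=0):
    """Toy run of UCB with context distributions (observed-context variant):
    play the UCB action on expected features psi_{x,mu} = E_{c~mu}[phi_{x,c}],
    then update the least-squares estimate with the realised feature."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=d)
    theta /= np.linalg.norm(theta)
    phi = rng.normal(size=(n_actions, n_ctx, d))
    phi /= np.linalg.norm(phi, axis=2, keepdims=True)      # ||phi_{x,c}|| = 1
    V = lam * np.eye(d)
    S = np.zeros(d)
    regret = 0.0
    for _ in range(T):
        mu = rng.dirichlet(np.ones(n_ctx))                 # context distribution
        psi = np.einsum('c,xcd->xd', mu, phi)              # expected features
        theta_hat = np.linalg.solve(V, S)
        V_inv = np.linalg.inv(V)
        widths = np.sqrt(np.einsum('xd,de,xe->x', psi, V_inv, psi))
        x = int(np.argmax(psi @ theta_hat + beta * widths))  # UCB action
        c = rng.choice(n_ctx, p=mu)                        # realised context
        y = phi[x, c] @ theta + sigma * rng.normal()       # reward observation
        V += np.outer(phi[x, c], phi[x, c])                # least-squares update
        S += y * phi[x, c]
        regret += np.max(psi @ theta) - psi[x] @ theta
    return regret


total_regret = run_ucb_context_dist()
```

On such instances the cumulative regret should grow sublinearly; setting $\beta_t$ from the confidence bounds instead of a constant recovers the analysed algorithm.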
\subsection{Proof of Theorem \ref{thm: regret bound observed}} \label{app: proof regret bound observed}
From Lemma \ref{lemma: regret in terms of ucb} we obtain with probability at least $1-\delta$,
\begin{align}
\rR_T \leq \sum_{t=1}^T \psi_t^{*\T}\theta - \bar{\psi}_{x_t,\mu_t}^\T\theta + 4\sqrt{2T\log \frac{1}{\delta}}
\end{align}
What is different now is that the sum $\sum_{t=1}^T \psi_t^{*\T}\theta - \bar{\psi}_{x_t,\mu_t}^\T\theta$ contains actions $x_t$ that are computed from a more precise estimator $\hat{\theta}_t$, so we expect the regret to be smaller. This does not follow directly from the UCB analysis, as there the estimator is computed with the features $\bar{\psi}_{x_t,\mu_t}$.
We start with the usual regret analysis, making use of the confidence bounds (Lemma \ref{thm: confidence bounds}) and the definition of the UCB action. Denote $\psi_t := \bar{\psi}_{x_t,\mu_t}$ in the following.
\begin{align}
\sum_{t=1}^T \psi_t^{*\T}\theta - \psi_t^\T \theta \leq \sum_{t=1}^T \psi_t^{*\T}\hat{\theta}_t + \beta_t \|\psi_t^*\|_{V_t^{-1}} - (\psi_t^\T \hat{\theta}_t - \beta_t\|\psi_t\|_{V_t^{-1}})\leq 2 \beta_T \sum_{t=1}^T \|\psi_t\|_{V_t^{-1}} \label{eq: analysis -2 start}
\end{align}
From here, the standard analysis would use Cauchy-Schwarz and then simplify the sum $\sum_{t=1}^T \|\psi_t\|_{V_t^{-1}}$. The simplification does not work here because $V_t$ is defined on the realized features $\phi_{x_t, c_t}$ and not on $\psi_t$. Instead we require the following intermezzo.
\begin{align}
\sum_{t=1}^T \|\psi_t\|_{V_t^{-1}} = \sum_{t=1}^T \|\phi_{x_t,c_t}\|_{V_t^{-1}} + \|\psi_t\|_{V_t^{-1}} - \|\phi_{x_t,c_t}\|_{V_t^{-1}} \leq \sum_{t=1}^T \|\phi_{x_t,c_t}\|_{V_t^{-1}} + \sum_{t=1}^T S_t \text{ ,}
\end{align}
where we defined $S_t = \|\psi_t\|_{V_t^{-1}} - \|\phi_{x_t,c_t}\|_{V_t^{-1}}$. We show that the partial sums of $S_t$ form a supermartingale. For the expected features $\bar{\psi}_{x,\mu_t} = \EE_{c \sim \mu_t}[\phi_{x,c}]$, note that Jensen's inequality yields $\|\EE_{c \sim \mu_t}[\phi_{x,c}]\|_{V_t^{-1}} \leq \EE_{c \sim \mu_t}[\|\phi_{x,c}\|_{V_t^{-1}}]$ for all $x\in\xX$. From this we obtain $\EE[S_t|\fF_{t-1},\mu_t] \leq 0$. Finally, note that $\|\phi_{x_t,c_t}\|_{V_t^{-1}} \leq \lambda^{-1/2}\|\phi_{x_t,c_t}\|_2 \leq \lambda^{-1/2}$ and $|S_t| \leq 2 \lambda^{-1/2}$, hence by Azuma-Hoeffding's inequality, with probability at least $1-\delta$, $\sum_{t=1}^T S_t \leq 2 \lambda^{-1/2} \sqrt{2T\log \frac{1}{\delta}}$.
From here we complete the regret analysis by bounding $\sum_{t=1}^T \|\phi_{x_t,c_t}\|_{V_t^{-1}}$ with the standard argument. Write $\phi_t := \phi_{x_t,c_t}$. First using Cauchy-Schwarz and then $\|\phi_t\|_2 \leq 1$ as well as $u \leq 2\log(1+u)$ for $u\leq 1$, it follows that
\begin{align}
\sum_{t=1}^T \|\phi_{t}\|_{V_t^{-1}} \leq \sqrt{T\sum_{t=1}^T \|\phi_t\|_{V_t^{-1}}^2} \leq \sqrt{2 T \sum_{t=1}^T \log(1 +\|\phi_t\|_{V_t^{-1}}^2)}= \sqrt{2 T \log \left(\frac{\det V_T}{\det V_0}\right)} \label{eq: analysis -2 end}
\end{align}
The last equality essentially follows from an application of the Sherman-Morrison formula on the matrix $V_t = \sum_{s=1}^t \phi_{x_t,c_t}\phi_{x_t,c_t}^\T + \lambda \mathbf{I}_d$, compare e.g.\ \citep[Lemma 11]{abbasi2011improved}. It remains to assemble the results from equations \eqref{eq: analysis -2 start}-\eqref{eq: analysis -2 end}, and a final application of the union bound completes the proof.
With a bit of extra work, one can obtain a similar bound for the sample-based features. Using Jensen's inequality we get $\| \frac{1}{L}\sum_{i=1}^L \phi_{x,\tilde{c}_i}\|_{V_t^{-1}} \leq \frac{1}{L}\sum_{i=1}^L\|\phi_{x,\tilde{c}_i}\|_{V_t^{-1}}$, but now the action $x_t$ depends on the samples $\tilde{c}_{t,i}$ that define the features $\tilde{\psi}_{x_t,\mu_t}$, and the previous direct argument does not work. The way around this is the same as in the proof of Lemma \ref{lemma: regret in terms of ucb}. For fixed $x \in \xX$, $\frac{1}{L}\sum_{i=1}^L\|\phi_{x,\tilde{c}_i}\|_{V_t^{-1}}$ concentrates around $\EE_{c \sim \mu_t}[\|\phi_{x,c}\|_{V_{t}^{-1}}]$ at a rate $1/\sqrt{L}$, fast enough to get $\oO(\sqrt{T})$ regret if we set $L=t$ in iteration $t$. A careful application of the union bound (again over all $x \in \xX$) completes the proof.
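The Jensen step $\|\EE_{c\sim\mu}[\phi_{x,c}]\|_{V^{-1}} \leq \EE_{c\sim\mu}[\|\phi_{x,c}\|_{V^{-1}}]$ used in both arguments above can be checked numerically on random data (an illustrative sketch):

```python
import numpy as np


def jensen_vnorm(d=3, n_ctx=6, lam=1.0, seed=0):
    """Compare ||E_mu[phi]||_{V^{-1}} with E_mu[||phi||_{V^{-1}}] for a
    random finite context distribution mu and a random PD matrix V."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(n_ctx, d))                 # phi_{x,c} for fixed x
    mu = rng.dirichlet(np.ones(n_ctx))
    M = rng.normal(size=(d, d))
    V_inv = np.linalg.inv(M @ M.T + lam * np.eye(d))  # V^{-1}, positive definite
    vnorm = lambda v: float(np.sqrt(v @ V_inv @ v))
    lhs = vnorm(mu @ phi)                             # norm of the expectation
    rhs = float(sum(m * vnorm(p) for m, p in zip(mu, phi)))
    return lhs, rhs


lhs, rhs = jensen_vnorm()
```

The inequality holds by convexity of the $V^{-1}$-norm, which is exactly why the partial sums of $S_t$ form a supermartingale.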
\section{Kernelized UCB with Context Distributions} \label{app: kernelized algorithm}
\subsection{Proof of Theorem \ref{thm: regret bound kernelized}}\label{app: details kernelized algorithm}
To bound the regret we proceed like in the linear case. First, define $\bar{f}_t(x) = \EE_{\mu_t}[f(x,c)|\fF_{t-1}]$. Analogously to Lemma \ref{lemma: regret in terms of ucb}, an application of Azuma-Hoeffding's inequality yields, with probability at least $1-\delta$,
\begin{align}
\rR_T \leq \sum_{t=1}^T \bar{f}_t(x_t^*) - \bar{f}_t(x_t) + 4\sqrt{2T\log \frac{1}{\delta}}
\end{align}
To bound the sum, we need to understand the concentration behavior of $|\hat{f}_t(x) - \bar{f}_t(x)|$, where we denote $\hat{f}_t(x) = \EE_{\mu_t}[\hat{f}_t(x,c)|\fF_{t-1}, \mu_t]$. Recall that
\begin{align}
\hat{f}_t(x,c) = k_t(x,c)^\T(K_t + \lambda \mathbf{I})^{-1} y_t
\end{align}
where $k_t(x,c) = [\bar{k}_{x_1,\mu_1}(x,c), \dots, \bar{k}_{x_t,\mu_t}(x,c)]^\T$, $(K_t)_{i,j} = \EE_{c' \sim \mu_j}[\bar{k}_{x_i,\mu_i}(x_j,c')]$ for $1 \leq i,j, \leq t$ is the kernel matrix and $y_t = [y_1, \dots, y_t]^T$ denotes the vector of observations. Define further $\bar{k}_t(x) = \EE_{c \sim \mu_t}[k_t(x,c)]$.
Note that we can compute $\hat{f}_t(x) = \<\hat{f}, k_{x,\mu_t}\> = \bar{k}_t(x)^\T(K_t + \lambda \mathbf{I})^{-1} y_t$ according to the inner product $\<\cdot, \cdot\>$ on $\hH$. Concentration bounds for the kernel least squares estimator, that hold for adaptively collected data, are well understood by now, see \cite{srinivas2010gaussian, abbasi2012online, chowdhury2017kernelized, durand2018streaming}. For instance, as a direct corollary of \cite[Theorem 3.11]{abbasi2012online}, we obtain
\begin{lemma}\label{lemma: kernel mean concentration}
For any stochastic sequence $\{(x_t,\mu_t,y_t)\}_{t \in \NN}$, where $y_t = f(x_t,c_t) + \epsilon_t$ with $\sigma$-subgaussian noise $\epsilon_t$ and $c_t \sim \mu_t$, the kernel-least squares estimate \eqref{eq: distributional risk minimization} satisfies with probability at least $1-\delta$, at any time $t$ and for any $x \in \xX$,
\begin{align*}
|\hat{f}_t(x) - \bar{f}_t(x)| \leq \beta_t \sigma_t(x) \text{ .}
\end{align*}
Here we denote,
\begin{align*}
\beta_t &= \rho\left(\sqrt{2\log \left(\frac{\det(\mathbf{I} + (\lambda\rho)^{-1}K_t)^{1/2}}{\delta}\right)} + \lambda^{1/2}\|f\|_\hH\right)\text{ ,}\\
\sigma_{t}^2(x) &= \frac{1}{\lambda}\big(\<k_{x,\mu_t},k_{x,\mu_t}\> - k_t(x)^\T (K_t + \lambda \mathbf{I})^{-1}k_t(x) \big) \text{ ,}
\end{align*}
and $\rho = \sqrt{4+\sigma^2}$ is the subgaussian variance proxy of the observation noise $y_t-\bar{f}_t(x_t)$.
\end{lemma}
Using the confidence bounds, we define the UCB action,
\begin{align}
x_t = \argmax_{x \in \xX} \hat{f}_t(x) + \beta_t \sigma_{t}(x) \text{ .}
\end{align}
It remains to bound the regret of the UCB algorithm. The proof is standard except that we use Lemma \ref{lemma: kernel mean concentration} to show concentration of the estimator. For details see \cite[Theorem 4.1]{abbasi2012online} or \cite[Theorem 3]{chowdhury2017kernelized}.
\begin{algorithm}[h]
\begin{minipage}{\textwidth}
Initialize $\hat{f}_0$, $\sigma_0$ \\
\textbf{For} step $t=1,2,\dots, T$:
\begin{addmargin}[2em]{0pt}
\textit{Environment} chooses $\mu_t \in \pP(\cC)$ \hfill \textit{// context distribution}\\
\textit{Learner} observes $\mu_t$\\
\textit{// definitions for expected version}\\
$[k_{t}(x)]_s := \EE_{c \sim \mu_t}[k_{x_s,\mu_s}(x,c)]$ \quad for $s=1,\dots, t-1$ \hfill\\
$s_t(x) := \EE_{c\sim\mu_t, c'\sim \mu_t}[k(x, c, x, c')]$ \\
\textit{// definitions for sampled version}\\
Sample $\tilde{c}_{t,1}, \dots, \tilde{c}_{t,L} \sim \mu_t$ \hfill \textit{// sample context distribution}\\
$[k_{t}(x)]_s := \frac{1}{L} \sum_{i=1}^L k_{x_s,\mu_s}(x, \tilde{c}_i)$ \quad for $s=1,\dots, t-1$ \hfill \\
$s_t(x) := \frac{1}{L^2} \sum_{i,j=1}^L k(x, \tilde{c}_{t,i}, x, \tilde{c}_{t,j})$ \\
$\hat{f}_t(x) := k_t(x)^\T (K_t + \lambda \mathbf{I})^{-1} y_t$ \hfill \textit{// compute estimate}\\
$\sigma_{t}^2(x) := \frac{1}{\lambda}\big(s_t(x) - k_t(x)^\T (K_t + \lambda \mathbf{I})^{-1}k_t(x) \big)$\hfill \textit{// confidence width} \\
Set $\beta_t$ as in Lemma \ref{lemma: kernel mean concentration}\\
Choose action $x_t \in \argmax_{x \in \xX} \hat{f}_{t-1}(x) + \beta_t\sigma_{t-1}(x) $ \hfill \textit{// UCB action}\\
\textit{Environment} provides $y_t = f(x_t,c_t) + \epsilon$ where $c_t \sim \mu_t$ \hfill \textit{// reward observation}\\
Store $y_{t+1} := [y_1, \dots, y_t]$ \hfill \textit{// observation vector}\\
\textit{// Kernel matrix update, expected version}\\
$K_{t+1} := [\<k_{x_a,\mu_a},k_{x_b,\mu_b} \>]_{1 \leq a,b \leq t}$ with $\<k_{x_a,\mu_a},k_{x_b,\mu_b} \>= \EE_{\mu_a, \mu_b}[k(x_a, c_a, x_b, c_b)]$\\
$k_{x_t,\mu_t} := \EE_{c \sim \mu_t}[k_{x_t,c}]$ \hfill \textit{// kernel mean embeddings}\\
\textit{// Kernel matrix update, sampled version}\\
$K_{t+1} := [\<k_{x_a,\mu_a},k_{x_b,\mu_b} \>]_{1 \leq a,b \leq t}$ with $\<k_{x_a,\mu_a},k_{x_b,\mu_b} \> = \frac{1}{L^2} \sum_{i,j=1}^L k(x_a, \tilde{c}_{a,i}, x_b, \tilde{c}_{b,j})$\\
$k_{x_t,\mu_t} := \frac{1}{L}\sum_{i=1}^L k_{x_t,\tilde{c}_{t,i}} = \frac{1}{L}\sum_{i=1}^L k(x_t,\tilde{c}_{t,i}, \cdot, \cdot)$ \hfill \textit{// sample kernel mean embeddings}\\
\end{addmargin}
\end{minipage}
\caption{UCB for RKHS bandits with context distributions}\label{alg: distribution ucb kernel}
\end{algorithm}
\section{Details on the Experiments}
We tune $\beta_t$ over the values $\{0.5,1,2,5,10\}$. In the synthetic experiment we set $\beta_t=2$ for the \emph{exact} and \emph{observed} variant and $\beta_t=10$ for the \emph{hidden} experiment. In the both experiments based on real-world data, we set $\beta_t=1$ for all variants.
\subsection{Movielens}\label{app: details movielens}
We use matrix factorization based on singular value decomposition (SVD) \citep{koren2009matrix} which is implemented in the Surprise library \citep{hug2017suprise} to learn 6 dimensional features $v_u$, $w_m$ for users $u$ and movies $m$ (this model obtains a RMSE $\approx 0.88$ over a 5-fold cross-validation). In the linear parameterization this corresponds to 36-dimensional features $\phi_{m,u} = v_uw_m^T$. We use the demographic data (gender, age and occupation) to group the users. The realized context is set to a random user in the data set, and the context distribution is defined as the empirical distribution over users within the same group as the chosen user.
\subsection{Crops Yield Dataset}\label{app: details agroscope}
Recall that the data set $\dD=\{(x_i, w_i, y_i) : i=1,\dots,8849\}$ consists of crop identifiers $x_i$, normalized yield measurements $y_i$ and site-year features $w_i \in \RR^{16+10}$ that are based on 16 suitability factors computed from weather measurements and a 1-hot encoding for each site (out of 10). The suitability factors are based on the work of \cite{holzkamper2013identifying}. To obtain a model for the crop yield responses, we train a bilinear model on the following loss \citep{koren2009matrix},
\begin{align}
\lL(W,V) = \sum_{i=1}^n (y_i - w_i^\T W V_{x_{i}})^2 + \|v_{j_i}\|^2 + \|w_{x_i}^\T W\|_2^2 \text{ .}
\end{align}
where $V = (V_{x_j})_{j=1}^k$ are the crop features $V_{x_j} \in \RR^5$ and $W \in \RR^{26\times6}$ is used to compute site features $w_i^\T W$ given the suitability features $w_i$ from the dataset.
We also use an empirical noise function created from the data. Since the data set contains up to three measurements of the same crop on a specific site and a year, we randomly pick a measurement and use its residual w.r.t.\ the mean of all measurements of the same crop under exactly the same conditions as noise. This way we ensure that the observation noise of our simulated environment is of the same magnitude as the noise on the actual measurements.
\section{Introduction}
In the contextual bandit model a learner interacts with an environment in several rounds. At the beginning of each round, the environment provides a context, and in turn, the learner chooses an action which leads to an \mbox{a priori} unknown reward. The learner's goal is to choose actions that maximize the cumulative reward, and eventually compete with the best mapping from context observations to actions. This model creates a dilemma of \emph{exploration and exploitation}, as the learner needs to balance exploratory actions to estimate the environment's reward function, and exploitative actions that maximize the total return. Contextual bandit algorithms have been successfully used in many applications, including online advertisement, recommender systems and experimental design.
The contextual bandit model, as usually studied in the literature, assumes that the context is observed \emph{exactly}. This is not always the case in applications, for instance, when the context is itself a noisy measurement or a forecasting mechanism. An example of such a context could be a weather or stock market prediction. In other cases such as recommender systems, privacy constraints can restrict access to certain user features, but instead we might be able to infer a distribution over those. To allow for uncertainty in the context, we consider a setting where the environment provides a \emph{distribution over the context set}. The exact context is assumed to be a sample from this distribution, but remains hidden from the learner. Such a model, to the best of our knowledge, has not been discussed in the literature before. Not knowing the context realization makes the learning problem more difficult, because the learner needs to estimate the reward function from noisy observations and without knowing the exact context that generated the reward. Our setting recovers the classical contextual bandit setting when the context distribution is a Dirac delta distribution. We also analyze a natural variant of the problem, where the exact context is observed \emph{after} the player has chosen the action. This allows for different applications, where at the time of decision the context needs to be predicted (e.g. weather conditions), but when the reward is obtained, the exact context can be measured.
We focus on the setting where the reward function is linear in terms of action-context feature vectors. For this case, we leverage the UCB algorithm on a specifically designed bandit instance \emph{without} feature uncertainty to recover an $\oO(d\sqrt{T})$ high-probability bound on the cumulative regret. Our analysis includes a practical variant of the algorithm that requires only sampling access to the context distributions provided by the environment. We also extend our results to the kernelized setting, where the reward function is contained in a known reproducing kernel Hilbert space (RKHS). For this case, we highlight an interesting connection to distributional risk minimization and we show that the natural estimator for the reward function is based on so-called kernel mean embeddings. We discuss related work in Section \ref{sec: related_work}.
\section{Stochastic Bandits with Context Distributions}
We formally define the setting of \emph{stochastic bandits with context distributions} as outlined in the introduction. Let $\xX$ be a set of actions and $\cC$ a context set. The environment is defined by a fixed, but unknown reward function $f : \xX \times \cC \rightarrow \RR$. At iteration $t \in \NN$, the environment chooses a distribution $\mu_t \in \pP(\cC)$ over the context set and samples a context realization $c_t \sim \mu_t$. The learner observes only $\mu_t$ but not $c_t$, and then chooses an action $x_t \in \xX$. We allow that an \emph{adaptive adversary} chooses the context distribution, that is $\mu_t$ may in an arbitrary way depend on previous choices of the learner up to time $t$. Given the learner's choice $x_t$, the environment provides a reward $y_t = f(x_t, c_t) + \epsilon_t$, where $\epsilon_t$ is $\sigma$-subgaussian, additive noise.
The learner's goal is to maximize the cumulative reward $\sum_{t=1}^T f(x_t,c_t)$, or equivalently, minimize the cumulative regret
\begin{align}\label{eq:cumulative reg}
\rR_T = \sum_{t=1}^{T} f(x_t^*,c_t) - f(x_t,c_t)
\end{align}
where $x_t^* = \argmax_{x \in \xX} \EE_{c \sim \mu_t}[f(x,c)]$ is the best action provided that we know $f$ and $\mu_t$, but not $c_t$. Note that this way, we compete with the best possible mapping $\pi^* : \pP(\cC) \rightarrow \xX$ from the observed context distribution to actions, that maximizes the expected reward $\sum_{t=1}^T \EE_{c_t \sim \mu_t}[f(\pi^*(\mu_t),c_t)|\fF_{t-1}, \mu_t]$ where $\fF_t = \{(x_s,\mu_s,y_s)\}_{s=1}^t$ is the filtration that contains all information available at the end of round $t$. It is natural to ask if it is possible to compete with the stronger baseline that chooses actions given the context realization $c_t$, i.e.\ $\tilde{x}_t^* = \argmax_{x\in \xX} f(x,c_t)$. While this can be possible in special cases, a simple example shows, that in general the learner would suffer $\Omega(T)$ regret. In particular, assume that $c_t \sim \text{Bernoulli}(0.6)$ , and $\xX= \{0,1\}$. Let $f(0,c) = c$ and $f(1,c) = 1 - c$. Clearly, any policy that does not know the realizations $c_t$, must have $\Omega(T)$ regret when competing against $\tilde{x}_t^*$.
From now on, we focus on linearly parameterized reward functions $f(x,c) = \phi_{x,c}^\T \theta$ with given feature vectors $\phi_{x,c} \in \RR^d$ for $x\in\xX$ and $c \in \cC$, and unknown parameter $\theta \in \RR^d$. This setup is commonly referred to as the \textit{linear bandit} setting. For the analysis we require standard boundedness assumptions $\|\phi_{x,c}\|_2 \leq 1$ and $\|\theta\|_2 \leq 1$ that we set to 1 for the sake of simplicity. In Section \ref{subsection:context observed}, we further consider a variant of the problem, where the learner observes $c_t$ \emph{after} taking the decision $x_t$. This simplifies the estimation problem, because we have data $\{(x_t,c_t, y_t)\}$ with exact context $c_t$ available, just like in the standard setting. The exploration problem however remains subtle as at the time of decision the learner still knows only $\mu_t$ and not $c_t$. In Section \ref{subsection: rkhs} we extend our algorithm and analysis to \emph{kernelized bandits} where $f\in \hH$ is contained in a reproducing kernel Hilbert space $\hH$.
\section{Background}
We briefly review standard results from the linear contextual bandit literature and the upper confidence bound (UCB) algorithm that we built on later \citep{abbasi2011improved}. The \emph{linear contextual bandit} setting can be defined as a special case of our setup, where the choice of $\mu_t$ is restricted to Dirac delta distributions $\mu_t = \delta_{c_t}$, and therefore the learner knows beforehand the exact context which is used to generate the reward. In an equivalent formulation, the environment provides at time $t$ a set of action-context feature vectors $\Psi_t = \{ \phi_{x,c_t} : x\in \xX \} \subset \RR^d$ and the algorithm chooses an action $x_t$ with corresponding features $\phi_t := \phi_{x_t, c_t} \in \Psi_t$. We emphasize that in this formulation the context $c_t$ is extraneous to the algorithm, and everything can be defined in terms of the time-varying action-feature sets $\Psi_t$. As before, the learner obtains a noisy reward observation $y_t =\phi_t^\T \theta + \epsilon_t$ where $\epsilon_t$ is conditionally $\rho$-subgaussian with \emph{variance proxy} $\rho$, i.e.\
\begin{align*}
\forall \lambda \in \RR,\qquad \EE[e^{\lambda \epsilon_t}|\fF_{t-1}, \phi_t] \leq \exp(\lambda^2\rho^2/2) \text{ .}
\end{align*}
Also here, the standard objective is to minimize the cumulative regret $\rR_T = \sum_{t=1}^T \phi_t^{*\T} \theta - \phi_t^\T \theta$ where $\phi_t^* = \argmax_{\phi \in \Psi_t} \phi^\T \theta$ is the feature vector of the best action at time $t$.
To define the UCB algorithm, we make use of the confidence sets derived by \cite{abbasi2011improved} for online least square regression. At the end of round $t$, the algorithm has adaptively collected data $\{(\phi_1, y_1), \dots, (\phi_t, y_t)\}$ that we use to compute the regularized least squares estimate $\hat{\theta}_t = \argmin_{\theta' \in \RR^d} \sum_{s=1}^t (\phi_s^\T \theta' - y_s)^2 + \lambda\|\theta'\|_2^2$ with $\lambda > 0$. We denote the closed form solution by $\hat{\theta}_t = V_t^{-1}\sum_{s=1}^t \phi_sy_s$ with $V_t = V_{t-1} + \phi_t\phi_t^\T$, $V_0 = \lambda \mathbf{I}_d$ and $\mathbf{I}_d \in \RR^{d\times d}$ is the identity matrix.
\begin{lemma}[\cite{abbasi2011improved}]\label{thm: confidence bounds}
For any stochastic sequence $\{(\phi_t, y_t)\}_{t}$ and estimator $\hat{\theta}_t$ as defined above for $\rho$-subgaussian observations, with probability at least $1-\delta$, at any time $t \in \NN$,
\begin{align*}
\quad \|\theta - \hat{\theta}_t\|_{V_t} \leq \beta_t \qquad \text{where } \beta_t = \beta_t(\rho, \delta) = \rho \sqrt{2\log \left(\frac{\det (V_t)^{1/2}}{\delta\det (V_0)^{1/2}} \right)} + \lambda^{1/2} \|\theta\|_2\text{ .}
\end{align*}
\end{lemma}
Note that the size of the confidence set depends the variance proxy $\rho$, which will be important in the following. In each round $t+1$, the UCB algorithm chooses an action $\phi_{t+1}$, that maximizes an upper confidence bound on the reward,
\begin{align*}
\phi_{t+1} := \argmax_{\phi \in \Psi_{t+1}} \phi^\T \hat{\theta}_t + \beta_t \|\phi\|_{V_t^{-1}} \text{ .}
\end{align*}
The following result shows that the UCB policy achieves sublinear regret \citep{dani2008stochastic, abbasi2011improved}.
\begin{lemma}\label{thm:regret ucb standard}
In the standard contextual bandit setting with $\rho$-subgaussian observation noise, the regret of the UCB policy with $\beta_t = \beta_t(\rho, \delta)$ is bounded with probability $1-\delta$ by
\begin{align*}
\rR_T^{UCB} \leq \beta_T\sqrt{8 T \log \left(\frac{\det V_T}{\det V_0}\right)}
\end{align*}
\end{lemma}
The data-dependent terms can be further upper-bounded to obtain $\rR_T^{UCB} \leq \tilde{\oO}(d\sqrt{T})$ up to logarithmic factors in $T$ \cite[Theorem 3]{abbasi2011improved}. A matching lower bound is given by \citet[Theorem 3]{dani2008stochastic}.
\section{UCB with Context Distributions}
In our setting, where we only observe a context distribution $\mu_t$ (e.g.\ a weather prediction) instead of the context $c_t$ (e.g.\ realized weather conditions), also the features $\phi_{x,c_t}$ (e.g.\ the last layer of a neural network that models the reward $f(x,c) = \phi_{x,c_t}^\T \theta$) are uncertain. We propose an approach that transforms the problem such that we can directly use a contextual bandit algorithm as for the standard setting. Given the distribution $\mu_t$, we define a new set of feature vectors $\Psi_t = \{\bar{\psi}_{x,\mu_t} : x \in \xX \}$, where we denote by $\bar{\psi}_{x,\mu_t} = \EE_{c \sim \mu_t}[\phi_{x,c}|\fF_{t-1},\mu_t]$ the expected feature vector of action $x$ under $\mu_t$. Each feature $\bar{\psi}_{x,\mu_t}$ corresponds to exactly one action $x \in \xX$, so we can use $\Psi_t$ as feature context set at time $t$ and use the UCB algorithm to choose an action $x_t$. The choice of the UCB algorithm here is only for the sake of the analysis, but any other algorithm that works in the linear contextual bandit setting can be used. The complete algorithm is summarized in \mbox{Algorithm \ref{alg: distribution ucb linear}}. We compute the UCB action $x_t$ with corresponding expected features $\psi_t := \bar{\psi}_{x_t,t} \in \Psi_t$, and the learner provides $x_t$ to the environment. We then proceed and use the reward observation $y_t$ to update the least squares estimate. That this is a sensible approach is not immediate, because $y_t$ is a noisy observation of $\phi_{x_t,c_t}^\T \theta$, whereas UCB expects the reward $\psi_t^\T \theta$. We address this issue by constructing the feature set $\Psi_t$ in such a way, that $y_t$ acts as unbiased observation also for the action choice $\psi_t$. 
As computing exact expectations can be difficult and in applications often only sampling access of $\mu_t$ is possible, we also analyze a variant of Algorithm \ref{alg: distribution ucb linear} where we use finite sample averages $\tilde{\psi}_{x,\mu_t} = \frac{1}{L} \sum_{l=1}^L \phi_{x,\tilde{c}_l}$ for $L \in \NN$ i.i.d.\ samples $\tilde{c}_l \sim \mu_t$ instead of the expected features $\bar{\psi}_{x,\mu}$. The corresponding feature set is $\tilde{\Psi}_t = \{\tilde{\psi}_{x,\mu_t} : x \in \xX \}$. For both variants of the algorithm we show the following regret bound.
\begin{theorem}\label{thm: main regret bound}
The regret of Algorithm \ref{alg: distribution ucb linear} with expected feature set $\Psi_t$ and $\beta_t = \beta_t(\sqrt{4 + \sigma^2}, \delta/2)$ is bounded at time $T$ with probability at least $1-\delta$ by
\begin{align*}
\rR_T \leq \beta_T \sqrt{8T \log \left(\frac{\det V_T}{\det V_0}\right)} + 4\sqrt{2 T \log \frac{4}{\delta}} \text{ .}
\end{align*}
Further, for finite action sets $\xX$, if the algorithm uses sampled feature sets $\tilde{\Psi}_t$ with $L=t$ and $\beta_t = \tilde{\beta}_t$ as defined in \eqref{eq:tilde beta}, Appendix \ref{app: proof theorem}, then with probability at least $1-\delta$,
\begin{align*}
\rR_T \leq \tilde{\beta}_T \sqrt{8T \log \left(\frac{\det V_T}{\det V_0}\right)} + 4\sqrt { 2 T \log \frac{2|\xX|\pi T}{3 \delta}} \text{ .}
\end{align*}
\end{theorem}
As before, one can further upper bound the data-dependent terms to obtain an overall regret bound of order $\rR_T \leq \tilde{\oO}(d\sqrt{T})$, see \cite[Theorem 2]{abbasi2011improved}.
With iterative updates of the least-squares estimator, the per step computational complexity is $\oO(Ld^2|\xX|)$ if the UCB action is computed by a simple enumeration over all actions.
\begin{algorithm}[t]
\begin{minipage}{\textwidth}
Initialize $\hat{\theta} = 0 \in \RR^d$, $V_0 = \lambda \mathbf{I} \in \RR^{d\times d}$\\
\textbf{For} step $t=1,2,\dots, T$:
\begin{addmargin}[2em]{0pt}
\textit{Environment} chooses $\mu_t \in \pP(\cC)$ \hfill \textit{// context distribution}\\
\textit{Learner} observes $\mu_t$\\
Set $\Psi_t = \{\bar{\psi}_{x,\mu_t} : x \in \xX \}$ with $\bar{\psi}_{x,\mu_t} := \EE_{c\sim \mu_t}[\phi_{x,c}]$ \hfill \textit{// variant 1, expected version}\vspace{2pt}\\
Alternatively, sample $c_{t,1}, \dots, c_{t,L}$ for $L=t$, \hfill \textit{// variant 2, sampled version}\\
Set $\Psi_t =\{\tilde{\psi}_{x,\mu_t} : x \in \xX \}$ with $\tilde{\psi}_{x,\mu_t} = \frac{1}{L} \sum_{l=1}^L \phi_{x,\tilde{c}_l}$\\
Run UCB step with $\Psi_t$ as context set \hfill \textit{// reduction}\\
Choose action $x_t = \argmax_{\psi_{x,\mu_t} \in \Psi_t} \psi_{x,\mu_t}^\T\hat{\theta}_{t-1} + \beta_t\|\psi_{x,\mu_t}\|_{V_{t-1}^{-1}} $ \hfill \textit{// UCB action}\\
\textit{Environment} provides $y_t = \phi_{x_t, c_t}^\T \theta + \epsilon$ where $c_t \sim \mu_t$ \hfill \textit{// reward observation}\\
Update $V_{t} = V_{t-1} + \psi_{x_s,\mu_s} \psi_{x_s,\mu_s}^\T$, $\;\hat{\theta}_{t} = V_{t}^{-1}\sum_{s=1}^{t} \psi_{x_s,\mu_s} y_s$ \hfill \textit{// least-squares update}\\
\end{addmargin}
\end{minipage}
\caption{UCB for linear stochastic bandits with context distributions}\label{alg: distribution ucb linear}
\end{algorithm}
\subsection{Regret analysis: Proof of Theorem \ref{thm: main regret bound}}
Recall that $x_t$ is the action that the UCB algorithm selects at time $t$, $\psi_t$ the corresponding feature vector in $\Psi_t$ and we define $\psi_t^* = \argmax_{\psi \in \Psi_t} \psi^\T \theta$. We show that the regret $\rR_T$ is bounded in terms of the regret $\rR_T^{UCB} := \sum_{t=1}^T \psi_t^{*\T} \theta - \psi_t^\T \theta$ of the UCB algorithm on the contextual bandit defined by the sequence of action feature sets $\Psi_t$.
\begin{lemma} \label{lemma: regret in terms of ucb}
The regret of Algorithm \ref{alg: distribution ucb linear} with the expected feature set $\Psi_t$ is bounded at time $T$ with probability at least $1-\delta$,
\begin{align*}
\rR_T \leq \mathcal{R}_T^{UCB} + 4 \sqrt{2 T \log \frac{1}{\delta}} \text{ .}
\end{align*}
Further, if the algorithm uses the sample based features $\tilde{\Psi}_t$ with $L=t$ at iteration $t$, the regret is bounded at time $T$ with probability at least $1-\delta$,
\begin{align*}
\rR_T \leq \mathcal{R}_T^{UCB} + 4\sqrt { 2 T \log \frac{|\xX|\pi T}{3 \delta}} \text{ .}
\end{align*}
\end{lemma}
\begin{proof}
Consider first the case where we use the expected features $\bar{\psi}_{x_t,\mu_t}$. We add and subtract $(\bar{\psi}_{x_t^*,\mu_t} - \bar{\psi}_{x_t,\mu_t})^\T \theta$ and use $\bar{\psi}_{x_t^*,\mu_t}^\T \theta \leq \psi_t^{*\T} \theta$ to bound the regret by
\begin{align*}
R_T \leq \rR_T^{UCB} + \sum_{t=1}^T D_t\text{ ,}
\end{align*}
where we defined $D_t = (\phi_{x_t^*,c_t} - \bar{\psi}_{x_t^*,\mu_t} + \bar{\psi}_{x_t, \mu_t} - \phi_{x_t,c_t})^\T \theta$.
It is easy to verify that $\EE_{c_t \sim \mu_t}[D_t|\fF_{t-1}, \mu_t, x_t]=0$, that is $D_t$ is a martingale difference sequence with $|D_t| \leq 4$ and $M_T = \sum_{t=1}^T D_t$ is a martingale. The first part of the lemma therefore follows from Azuma-Hoeffding's inequality (Lemma \ref{lemma:Azuma-Hoeffdings}, Appendix). For the sample-based version, the reasoning is similar, but we need to ensure that the features $\tilde{\psi}_{x,\mu_t} = \frac{1}{L} \sum_{l=1}^L \phi_{x,\tilde{c}_l}$ are sufficiently concentrated around their expected counterparts $\bar{\psi}_{x,\mu_t}$ for any $x \in \xX$ and $t \in \NN$. We provide details in Appendix \ref{app: proof lemma}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: main regret bound}]
Clearly, the lemma gives a regret bound for Algorithm \ref{alg: distribution ucb linear} if the regret term $\rR_T^{UCB}$ is bounded. The main difficulty is that the reward observation $y_t = \phi_{x_t,c_t}^\T \theta + \epsilon_t$ is generated from a different feature vector than the feature $\psi_t \in \Psi_t$ that is chosen by UCB. Note that in general, it is not even true that $\phi_{x_t,c_t} \in \Psi_t$. However, a closer inspection of the reward signal reveals that $y_t$ can be written as
\begin{align}
y_t = \psi_t^\T\theta+ \xi_t + \epsilon \qquad \text{with} \quad \xi_t:= (\phi_{x_t,c_t}- \psi_t) ^\T \theta
\end{align}
For the variant that uses the expected features $\psi_t = \bar{\psi}_{x_t,\mu_t}$, our construction already ensures that $\EE[\xi_t|\fF_{t-1}, \mu_t, x_t] = \EE[\phi_{x_t,c_t}- \bar{\psi}_{x_t,\mu_t}|\fF_{t-1}, \mu_t, x_t]^\T \theta = 0$. Note that the distribution of $\xi_t$ depends on $x_t$ and is therefore heteroscedastic in general. However, by boundedness of the rewards, $|\xi_t| \leq 2$ and hence $\xi_t$ is $2$-subgaussian, which allows us to continue with a homoscedastic noise bound. We see that $y_t$ acts like an observation of $\psi_t^\T\theta$ perturbed by $\sqrt{4 + \sigma^2}$-subgaussian noise (for two independent random variables $X$ and $Y$ that are $\sigma_1$- and $\sigma_2$-subgaussian respectively, $X+Y$ is $\sqrt{\sigma_1^2 + \sigma_2^2}$- subgaussian). Therefore, the construction of the confidence bounds for the least squares estimator w.r.t.\ $\psi_t$ remains valid at the cost of an increased variance proxy, and we are required to use $\beta_t$ with $\rho=\sqrt{4 + \sigma^2}$ in the definition of the confidence set. The regret bound for the UCB algorithm (Lemma \ref{thm:regret ucb standard}) and an application of the union bound completes the proof for this case. When we use the sample-based features $\tilde{\psi}_{x,\mu_t}$, the noise term $\xi_t$ can be biased, because $x_t$ depends on the sampled features and $\EE[\tilde{ \psi}_{x_t,\mu_t}|\fF_{t-1},\mu_t] \neq \bar{\psi}_{x_t,\mu_t}$. This bias carries on to the least-squares estimator, but can be controlled by a more careful analysis. See Appendix \ref{app: proof theorem} for details.
\end{proof}
\subsection{When the context realization is observed}\label{subsection:context observed}
We now turn our attention to the alternative setting, where it is possible to observe the realized context $c_t$ (e.g. actual weather measurements) after the learner has chosen $x_t$. In Algorithm \ref{alg: distribution ucb linear}, so far our estimate $\hat{\theta}_t$ only uses the data $\{(x_s, \mu_s, y_s)\}_{s=1}^t$, but with the context observation we have $\{(x_s, c_s, y_s)\}_{s=1}^t$ available. It makes sense to use the additional information to improve our estimate $\hat{\theta}_t$, and as we show below this reduces the amount the UCB algorithm explores. The pseudo code of the modified algorithm is given in Algorithm \ref{alg: distribution ucb linear with context observation} (Appendix \ref{app: algorithm observed context}), where the only difference is that we replaced the estimate of $\theta$ by the least squares estimate $\hat{\theta}_t = \argmin_{\theta' \in \RR^d} \sum_{s=1}^t (\phi_{x_s,c_s}^\T \theta' - y_s)^2 + \lambda \|\theta'\|_2^2$. Since now the observation noise $\epsilon_t = y_t - \phi_{x_t,c_t}^\T \theta$ is only $\sigma$- subgaussian (instead of $\sqrt{4+\sigma^2}$-subgaussian), we can use the smaller scaling factor $\beta_t$ with $\rho=\sigma$ to obtain a tighter upper confidence bound.
\begin{theorem} \label{thm: regret bound observed}
The regret of Algorithm \ref{alg: distribution ucb linear with context observation} based on the expected feature sets $\Psi_t$ and $\beta_t = \beta_t(\sigma,
\delta/3)$ is bounded with probability at least $1-\delta$ by
\begin{align*}
\rR_T \leq \beta_T \sqrt{8 T \log \left(\frac{\det V_T}{\det V_0}\right)} + 4(1 + \lambda^{-1/2}\beta_T)\sqrt{2 T \log \frac{3}{\delta}}
\end{align*}
\end{theorem}
The importance of this result is that it justifies the use of the smaller scaling $\beta_t$ of the confidence set, which affects the action choice of the UCB algorithm. In practice, $\beta_t$ has a large impact on the amount of exploration, and a tighter choice can significantly reduce the regret as we show in our experiments. We note that in this case, the reduction to the regret bound of UCB is slightly more involved than previously. As before, we use Lemma \ref{lemma: regret in terms of ucb} to reduce a regret bound on $\rR_T$ to the regret $\rR_T^{UCB}$ that the UCB algorithm obtains on the sequence of context-feature sets $\Psi_t$. Since now, the UCB action is based on tighter confidence bounds, we expect the regret $\rR_T^{UCB}$ to be smaller, too. This does not follow directly from the UCB analysis, as there the estimator is based on the features $\bar{\psi}_{x_t,\mu_t}$ instead of $\phi_{x_t,c_t}$. We defer the complete proof to Appendix \ref{app: proof regret bound observed}. There we also show a similar result for the sample based feature sets $\tilde{\Psi}_t$ analogous to Theorem \ref{thm: main regret bound}.
\subsection{Kernelized stochastic bandits with context distributions}\label{subsection: rkhs}
In the kernelized setting, the reward function $f: \xX\times \cC \rightarrow \RR$ is a member of a known reproducing kernel Hilbert space (RKHS) $\hH$ with kernel function $k : (\xX\times \cC)^2 \rightarrow \RR$. In the following, let $\| \cdot \|_\hH$ be the Hilbert norm and we denote by $k_{x,c} := k(x,c, \cdot, \cdot) \in \hH$ the kernel features. For the analysis we further make the standard boundedness assumption $\|f\| \leq 1$ and $\|k_{x,c}\| \leq 1$. We provide details on how to estimate $f$ given data $\{(x_s,\mu_s, y_s)\}_{s=1}^t$ with uncertain context. As in the linear case, the estimator $\hat{f}_t$ can be defined as an empirical risk minimizer with parameter $\lambda > 0$,
\begin{align}
\hat{f}_t = \argmin_{f \in \hH} \sum_{s=1}^t \big(\EE_{c \sim \mu_t}[f(x_s, c)] - y_s\big)^2 + \lambda \|f\|_\hH^2 \text{ .}\label{eq: distributional risk minimization}
\end{align}
In the literature this is known as \emph{distributional risk minimization} \citep[Section 3.7.3]{muandet2017kernel}. The following representer theorem shows, that the solution can be expressed as a linear combination of kernel mean embeddings $\bar{k}_{x,\mu} := \EE_{c \sim \mu}[k_{x,c}] \in \hH$.
\begin{theorem}[{\citet[Theorem 1]{muandet2012learning}}]
Any $f \in \hH$ that minimizes the regularized risk functional \eqref{eq: distributional risk minimization} admits a representation of the form $ f = \sum_{s=1}^t \alpha_s \bar{k}_{x_s, \mu_s}$ for some $\alpha_s \in \RR$.
\end{theorem}
It is easy to verify that the solution to \eqref{eq: distributional risk minimization} can be written as
\begin{align}
\hat{f}_t(x,c) = k_t(x,c)^\T(K_t + \lambda \mathbf{I})^{-1} y_t
\end{align}
where $k_t(x,c) = [\bar{k}_{x_1,\mu_1}(x,c), \dots, \bar{k}_{x_t,\mu_t}(x,c)]^\T$, $(K_t)_{a,b} = \EE_{c \sim \mu_b}[\bar{k}_{x_a,\mu_a}(x_b,c)]$ for $1 \leq a,b, \leq t$ is the kernel matrix and $y_t = [y_1, \dots, y_t]^T$ denotes the vector of observations. Likewise, the estimator can be computed from sample based kernel mean embeddings $\tilde{k}_{x,\mu}^L := \frac{1}{L}\sum_{i=1}^L k(x,\tilde{c_i}, \cdot, \cdot) \in \hH$ for i.i.d.\ samples $\tilde{c_i} \sim \mu$. This allows for an efficient implementation also in the kernelized setting, at the usual cost of inverting the kernel matrix. With iterative updates the overall cost amount to $\oO(LT^3)$. The cubic scaling in $T$ can be avoided with finite dimensional feature approximations or inducing points methods, e.g.\ \cite{rahimi2008random,mutny2018efficient}.
The UCB algorithm can be defined using an analogous concentration result for the RKHS setting \citep{abbasi2012online}. We provide details and the complete kernelized algorithm (Algorithm \ref{alg: distribution ucb kernel}) in Appendix \ref{app: kernelized algorithm}. The corresponding regret bound is summarized in the following theorem.
\begin{theorem}\label{thm: regret bound kernelized}
At any time $T \in \NN$, the regret of Algorithm \ref{alg: distribution ucb kernel} with exact kernel mean embeddings $\bar{k}_{x,c}$ and $\beta_t$ as defined in Lemma \ref{lemma: kernel mean concentration} in Appendix \ref{app: kernelized algorithm}, is bounded with probability at least $1-\delta$ by
\begin{align*}
\rR_T \leq \beta_T \sqrt{8 T \log (\det(\mathbf{I} + (\lambda\rho)^{-1}K_T))} + 4\sqrt{2 T \log \frac{2}{\delta}}
\end{align*}
\end{theorem}
Again, the data dependent log-determinant in the regret bound can be replaced with kernel specific bounds, referred to as \emph{maximum information gain} $\gamma_T$ \citep{srinivas2010gaussian}.
\section{Experiments}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{\textwidth}
\hspace{20px} \includegraphics{plots/legend.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.345\textwidth}
\includegraphics{plots/cdb2.pdf}
\caption{Synthetic Example}
\label{fig:synthetic}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics{plots/movielens_h.pdf}
\caption{Movielens}
\label{fig:movielens}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics{plots/agroscope_n1.pdf}
\caption{Wheat Yield Data}
\label{fig:crop}
\end{subfigure}
\caption{The plots show cumulative regret as defined in \eqref{eq:cumulative reg}. As expected, the variant that does not observe the context (\emph{hidden}) is out-performed by the variant that uses the context realization for regression (\emph{obs})4. The sample size $l$ used to construct the feature sets from the context distribution has a significant effect on the regret, where with $l=100$ performance is already competitive with the policy that uses the exact expectation over features (E). The \emph{exact} baseline, which has access to the context realization before taking the decision, achieves negative regret on the benchmark (\subref{fig:synthetic}) and (\subref{fig:movielens}), as the regret objective \eqref{eq:cumulative reg} compares to the action maximizing the expected reward. The error bars show two times standard error over 100 trials for (\subref{fig:synthetic}) and (\subref{fig:crop}), and 200 trials for (\subref{fig:movielens}). The variance in the movielens experiment is fairly large, likely because our linear model is miss-specified; and at the first glance, it looks like the sample-based version outperforms the expected version in one case. From repeated trials we confirmed, that this is only an effect of the randomness in the results.}
\label{fig:experiments}
\end{figure}
We evaluate the proposed method on a synthetic example as well as on two benchmarks that we construct from real-world data. Our focus is on understanding the effect of the sample size $L$ used to define the context set $\tilde{\Psi}_t^l$. We compare three different observational modes, with decreasing amount of information available to the learner. First, in the \emph{exact} setting, we allow the algorithm to observe the context realization before choosing an action, akin to the usual contextual bandit setting. Note that this variant possibly obtains negative reward on the regret objective \eqref{eq:cumulative reg}, because $x_t^*$ is computed to maximize the expected reward over the context distribution independent of $c_t$. Second, in the \emph{observed} setting, decisions are based on the context distribution, but the regression is based on the exact context realization. Last, only the context distribution is used for the \emph{hidden} setting. We evaluate the effect of the sample sizes $L=10,100$ and compare to the variant that uses that exact expectation of the features. As common practice, we treat the confidence parameter $\beta_T$ as tuning parameter that we choose to minimize the regret after $T=1000$ steps. Below we provide details on the experimental setup and the evaluation is shown in Figure \ref{fig:experiments}. In all experiments, the `exact' version significantly outperforms the distributional variants or even achieves negative regret as anticipated. Consistent with our theory, observing the exact context after the action choice improves performance compared to the unobserved variant. The sampled-based algorithm is competitive with the expected features already for $L=100$ samples.
\paragraph{Synthetic Example} As a simple synthetic benchmark we set the reward function to $f(x,c) = \sum_{i=1}^5 (x_i-c_i)^2$, where both actions and context are vectors in $\RR^5$. We choose this quadratic form to create a setting where the optimal action strongly depends on the context $c_i$. As linear parametrization we choose $\phi(x,c) = (x_1^2, \cdots, x_5^2, c_1^2, \cdots, c_5^2, x_1c_1, \dots, x_5c_5)$. The action set consists of $k=100$ elements that we sample at the beginning of each trial from a standard Gaussian distribution. For the context distribution, we first sample a random element $m_t \in \RR^5$, again from a multivariate normal distribution, and then set $\mu_t= \nN(m_t, \mathbf{1})$. Observation noise is Gaussian with standard deviation 0.1.
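The synthetic setup above translates directly into code. The following is a minimal NumPy sketch of one round; the known parameter $\theta$ and the greedy action choice are simplifications for illustration (the actual algorithm estimates the parameter online and adds a UCB exploration bonus):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5          # action/context dimension
k = 100        # number of actions per trial
L = 100        # samples drawn from the context distribution

# Actions are sampled once per trial from a standard Gaussian.
actions = rng.standard_normal((k, d))

def features(x, c):
    """Linear parametrization phi(x, c) of the quadratic reward."""
    return np.concatenate([x**2, c**2, x * c])

# True parameter recovering f(x, c) = sum_i (x_i - c_i)^2,
# since x_i^2 + c_i^2 - 2 x_i c_i = (x_i - c_i)^2.
theta = np.concatenate([np.ones(d), np.ones(d), -2 * np.ones(d)])

# One round: the environment draws a random mean m_t and sets
# mu_t = N(m_t, I); the learner only sees samples from mu_t.
m_t = rng.standard_normal(d)
context_samples = m_t + rng.standard_normal((L, d))

# Sample-based feature estimate per action: psi_hat(x) ~ E_c[phi(x, c)].
psi_hat = np.array([
    np.mean([features(x, c) for c in context_samples], axis=0)
    for x in actions
])

# Greedy choice under the (here known) parameter.
best = int(np.argmax(psi_hat @ theta))
```

This makes explicit why the sample size $L$ matters: the action choice is based on the Monte Carlo estimate `psi_hat` of the expected features, whose error shrinks as $L$ grows.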
\paragraph{Movielens Data} Using matrix factorization, we construct 6-dimensional features for user ratings of movies in the \textit{movielens-1m} dataset \citep{harper2016movielens}. We use the learned embedding as ground truth to generate the reward, which we round to half-integers between 0 and 5, like the actual ratings. Our model is therefore misspecified in this experiment. Besides the movie ratings, the dataset provides basic demographic data for each user. In the interactive setting, the context realization is a randomly sampled user from the data. The context distribution is set to the empirical distribution of users in the dataset with the same demographic data. The setup is motivated by a setting where the system interacts with new users, for whom we have already obtained basic demographic data, but not yet the exact user features (which in collaborative filtering are computed from the user's ratings). We provide further details in Appendix \ref{app: details movielens}.
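The feature-construction step can be sketched with a plain SGD matrix factorization on a toy ratings matrix. The paper does not state which factorization method was used, so the method, hyperparameters, and synthetic stand-in data below are illustrative assumptions; only the final rounding to half-integers mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_movies, d = 200, 300, 6

# Toy ratings matrix with sparsely observed entries (stand-in for movielens-1m).
R = rng.integers(1, 6, size=(n_users, n_movies)).astype(float)
mask = rng.random((n_users, n_movies)) < 0.05   # ~5% of entries observed

# SGD matrix factorization R_ij ~ u_i . v_j with L2 regularization.
U = 0.1 * rng.standard_normal((n_users, d))
V = 0.1 * rng.standard_normal((n_movies, d))
lr, lam = 0.01, 0.05
rows, cols = np.nonzero(mask)
for _ in range(50):
    for i, j in zip(rows, cols):
        err = R[i, j] - U[i] @ V[j]
        ui = U[i].copy()                         # use pre-update value for V's step
        U[i] += lr * (err * V[j] - lam * U[i])
        V[j] += lr * (err * ui - lam * V[j])

# Ground-truth reward: prediction clipped to [0, 5], rounded to half-integers.
pred = np.clip(U @ V.T, 0, 5)
reward = np.round(pred * 2) / 2
```

The rounding step is what makes the linear model misspecified, as noted in the experiment discussion.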
\paragraph{Crop Yield Data} We use a wheat yield dataset that was systematically collected by the Agroscope institute in Switzerland over 15 years on 10 different sites. For each site and year, a 16-dimensional suitability factor based on recorded weather conditions is available. The dataset contains 8849 yield measurements for 198 crops. From this we construct a dataset $\dD =\{(x_i, w_i, y_i)\}$, where $x_i$ is the identifier of the tested crop, $w_i \in \RR^{16+10}$ is the 16-dimensional suitability factor obtained from weather measurements, augmented with a one-hot encoding of each site, and $y_i$ is the normalized crop yield. We fit a bilinear model $y_i \approx w_i^T W V_{x_i}$ to get 5-dimensional features $V_x$ for each variety $x$ and site features $w^\T W$ that take the weather conditions $w$ of the site into account. From this model, we generate the ground-truth reward. Our goal is to provide crop recommendations that maximize yield on a given site with characteristics $w$. Since $w$ is based on weather measurements that are not available ahead of time, we set the context distribution such that each feature of $w$ is perturbed by a Gaussian distribution centered around the true $w$. We set the variance of the perturbation to the empirical variance of the features for the current site over all 15 years. Further details are in Appendix \ref{app: details agroscope}.
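One way to fit such a bilinear model $y_i \approx w_i^T W V_{x_i}$ is alternating least squares: fix $V$ and solve a ridge regression for $W$, then fix $W$ and solve for each crop's column $V_x$. The paper does not specify its fitting procedure, so the ALS scheme, regularization, and toy stand-in data below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 500, 26, 198   # measurements, site-feature dim (16+10), crops
r = 5                    # latent feature dimension

# Toy stand-ins for D = {(x_i, w_i, y_i)}: crop ids, site features, yields.
x = rng.integers(0, q, size=n)
W_feat = rng.standard_normal((n, p))
y = rng.standard_normal(n)

W = 0.1 * rng.standard_normal((p, r))
V = 0.1 * rng.standard_normal((r, q))
lam = 1e-2
for _ in range(20):
    # Update W: y_i = sum_{j,k} w_ij W_jk V_{k,x_i}, so the design row for
    # vec(W) is the outer product of w_i and V_{x_i}.
    Z = np.einsum('ij,ki->ijk', W_feat, V[:, x]).reshape(n, p * r)
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(p * r), Z.T @ y).reshape(p, r)
    # Update V one crop at a time on that crop's measurements.
    S = W_feat @ W                           # site features w_i^T W, shape (n, r)
    for crop in range(q):
        idx = x == crop
        if idx.any():
            A, b = S[idx], y[idx]
            V[:, crop] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ b)
```

After fitting, `W_feat @ W` gives the site features $w^\T W$ and the columns of `V` give the 5-dimensional crop features used to define the ground-truth reward.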
\section{Related Work}\label{sec: related_work}
There is a large array of work on bandit algorithms; for a survey see \cite{bubeck2012regret} or the book by \cite{lattimore2018bandit}. Of interest to us is the \emph{stochastic contextual bandit problem}, where the learner chooses actions after seeing a context, and the goal is to compete with a class of policies that map contexts to actions. This is akin to reinforcement learning \citep{sutton2018reinforcement}, but the contextual bandit problem differs in that the sequence of contexts is typically allowed to be \emph{arbitrary} (even adversarially chosen) and does not necessarily follow a specific transition model. The contextual bandit problem in this formulation dates back to at least \cite{abe1999associative} and \cite{langford2007epoch}. Perhaps the best-understood instance of this model is the \emph{linear contextual bandit}, where the reward function is a linear map of feature vectors \citep{auer2002nonstochastic}. One of the most popular algorithms is the Upper Confidence Bound (UCB) algorithm, first introduced by \cite{auer2002using} for the multi-armed bandit problem and later extended to the linear case by \cite{li2010contextual}. The analysis of this algorithm was improved by \cite{dani2008stochastic}, \cite{abbasi2011improved} and \cite{li2019tight}, where the main technical challenge is to construct tight confidence sets for an online version of the least squares estimator. Alternative exploration strategies have been considered as well, for instance Thompson sampling \citep{thompson1933}, which was analyzed for the linear model by \cite{agrawal2013thompson} and \cite{abeille2017linear}. Other notable approaches include an algorithm that uses a perturbed data history as its exploration mechanism \citep{kveton2019perturbed}, and a \emph{mostly greedy} algorithm that leverages the randomness in the context to obtain sufficient exploration \citep{bastani2017mostly}.
In \emph{kernelized bandits}, the reward function is contained in a given reproducing kernel Hilbert space (RKHS). This setting is closely related to Bayesian optimization \citep{mockus1982}. Again, the analysis hinges on constructing confidence sets and on bounding a quantity referred to as the \emph{information gain} by the decay of the kernel's eigenspectrum. An analysis of the UCB algorithm for this setting was provided by \cite{srinivas2010gaussian}. It was later refined by \cite{abbasi2012online,valko2013finite,chowdhury2017kernelized, durand2018streaming} and extended to the contextual setting by \cite{krause2011contextual}.
Interestingly, in our reduction the noise distribution depends on the action, a setting also referred to as \emph{heteroscedastic} bandits. Heteroscedastic bandits were previously considered by \cite{hsieh2018heteroscedastic} and \cite{kirschner18heteroscedastic}. Stochastic uncertainty in the action choice has been studied by \citet{oliveira2019bayesian} in the context of Bayesian optimization. Closely related is the work by \cite{yun2017contextual}, who introduce a linear contextual bandit model where the observed feature is perturbed by noise and the objective is to compete with the best policy that has access to the unperturbed feature vector. The main difference to our setting is that we assume the environment provides a distribution over feature vectors (instead of a single, perturbed vector), and we compute the best action as a function of the distribution. As a consequence, we are able to obtain $\oO(\sqrt{T})$ regret bounds without further assumptions on the context distribution, while \cite{yun2017contextual} get $\oO(T^{7/8})$ with identical noise on each feature, and $\oO(T^{2/3})$ for Gaussian feature distributions. Most closely related is the work by \citet{lamprier2018profile} on linear bandits with stochastic context. The main difference to our setting is that the context distribution in \cite{lamprier2018profile} is fixed over time, which allows building aggregated estimates of the mean feature vector over time. Our setting is more general in that it allows an arbitrary sequence of distributions as well as correlation between the feature distributions of different actions. Moreover, in contrast to previous work, we discuss the kernelized setting and the variant in which the context is observed exactly after the action choice.
Finally, adversarial contextual bandit algorithms also apply in our setting, for example the EXP4 algorithm of \citet{auer2002nonstochastic} or ILTCB of \citet{agarwal2014taming}. Here, the objective is to compete with the best policy in a given class of policies, which in our setting would require working with a covering of the set of distributions $\pP(\cC)$. However, these algorithms do not exploit the linear reward assumption and are therefore arguably less practical in our setting.
\section{Conclusion}
We introduced \emph{context distributions} for stochastic bandits, a model that is naturally motivated in many applications and captures the learner's uncertainty about the context realization. The method we propose is based on the UCB algorithm, and in fact both our model and our algorithm strictly generalize the standard setting, in the sense that we recover the usual model and the UCB algorithm if the environment chooses only Dirac delta distributions. The most practical variant of the proposed algorithm requires only sample access to the context distributions and satisfies a high-probability regret bound that is order-optimal in the feature dimension and the horizon up to logarithmic factors.
\subsubsection*{Acknowledgments}
The authors thank Agroscope for providing the crop yield data set, in particular Didier Pellet, Lilia Levy and Juan Herrera, who collected the winter wheat data, and Annelie Holzkämper, who developed the environmental suitability factors model. Further, the authors acknowledge the work by Mariyana Koleva and Dejan Mir\v{c}i\'c, who performed the initial data cleaning and exploration as part of their Master's theses.
This research was supported by SNSF grant 200020 159557 and has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement No 815943.
The lunar eclipse of 4 April 2015 is the first lunar eclipse of 2015. It is the third total eclipse of the tetrad, that is, a series of four consecutive total eclipses occurring roughly six months apart. The first two took place on 15 April 2014 and 8 October 2014, and the last will take place on 28 September 2015.
Visibility
Note: The main peculiarity of this total eclipse is the very short duration of its totality phase (4 minutes and 43 seconds).
The maximum of the eclipse occurred at 14h 01m 15s UTC.
The eclipse was visible from western North America, almost the entire Pacific Ocean, eastern Asia, Australia and New Zealand.
It was not visible in Europe, and therefore not in Romania.
Notes
Bibliography
Societatea Astronomică Română de Meteori, Efemeride Astronomice pentru anul 2015, Year III (2014). Contains astronomical ephemerides for the Bucharest meridian; a publication for amateur astronomers. Bumbești-Jiu, 2014. ISSN 2285-8911
External links
Astroinfo: Sorin Hotea, The short total lunar eclipse of 4 April 2015
Hermit Eclipse: Total Lunar Eclipse: 4 April 2015
Mattastro.com Total Lunar Eclipse: 4 April 2015
Full Moon in Earth's Shadow APOD 2015 April 8
Lunar eclipses
2015 in science
Last Updated: Friday, 17 February 2006, 10:01 GMT
Presley weds for the fourth time
Lisa Marie Presley and Michael Lockwood married in Japan
Elvis Presley's daughter Lisa Marie Presley has tied the knot with guitarist Michael Lockwood.
The couple exchanged vows in a traditional Japanese service in Kyoto last month, where her mother, Priscilla Presley, gave her away.
It is the fourth time the 38-year-old singer has walked down the aisle.
She was previously married to Michael Jackson, Oscar-winning actor Nicolas Cage, and musician Danny Keough, who was best man at the latest ceremony.
Presley married musician Keough in 1988, and the couple had two children, Riley, 16, and Benjamin, 13.
They divorced in 1994, just two years before Presley's surprise marriage to pop icon Jackson, which lasted just 20 months.
In 2002, Presley married Captain Corelli's Mandolin star Nicolas Cage after an on-off relationship.
Cage filed for divorce after just three months saying the union had been "a big mistake." The divorce became final in May 2004.
Presley's debut album To Whom It May Concern was released in 2003, selling 140,000 copies during its first week.
She released her second album, Now What, which went gold in November 2005.
Biography
Born in Paris, Devoyon began his studies with Blanche Bascourret de Gueraldi and then with Lélia Gousseau at the Paris Conservatory, where he won first prize in 1971. He then became known for his successes in international competitions: the Ferruccio Busoni International Piano Competition (1974, second place), the Leeds International Piano Competition (1975, third place) and finally the International Tchaikovsky Competition in Moscow (1978, second prize). The favourite going into the final of that competition, he was awarded only second place by the jury, behind Mikhail Pletnev. The following year he began an international career, giving numerous concerts in Europe, the United States and the Soviet Union. His notable recordings include Maurice Ravel's Gaspard de la nuit and Franz Liszt's Piano Sonata in B minor.
Devoyon often performs as a chamber musician, collaborating with cellist Steven Isserlis, violinist Dong-Suk Kang and pianist Rikako Murata. He has taught at the Paris Conservatory since 1991, and since 1995 at the Berlin University of the Arts and the Conservatoire de musique de Genève.
His students include Mélodie Zhao, Caroline Fischer and Louis Schwizgebel-Wang.
At the same time, he continues his concert career.
Notes
External links
Pascal Devoyon on Royal Academy of Music
Pascal Devoyon (Yamaha)
Pascal Devoyon on Hyperion Records
Pascal Devoyon plays Saint-Saëns Piano Concerto no. 4 (YouTube)
When it came time for Steven Moffat to revisit the Weeping Angels, I didn't know how he'd be able to top their jaw-droppingly creepy debut in Season Three's "Blink."
Yet, in this week's episode of Doctor Who ("The Time of Angels"), written by Steven Moffat and directed by Adam Smith, Moffat manages to not only make the terrifying "Blink" seem like child's play in comparison, but he creates a whole new level of dread and suspense involving these otherworldly creatures, taking Doctor Who's fairy tale ethos into a very dark place indeed.
Since their introduction in "Blink," The Weeping Angels have remained my favorite Who villains to date as their very horror stems from the fact that they're quantum locked: we can't see them move or about to pounce until it's too late for us. Look away, or even blink, and they'll consume the full potential of your life. While that was already a particularly unnerving proposition, Moffat has done himself proud with "The Time of Angels," an episode that finds the Doctor and Amy Pond joining up with a figure from the Doctor's future: River Song (Alex Kingston), who appears to be hiding a rather crucial secret from the man who could one day be her husband. Or will he be? Hmmm...
What did I think of "The Time of Angels"? Let's discuss.
Talk about an entrance: I thought that the reunion between the Doctor and River was handled with a mix of excitement, drama, and heart-stopping visuals as River not only carves a message in Old High Gallifreyan to the Doctor (so he'll find it at some point in the future) but then jettisons herself through a spaceship airlock knowing that he'll rescue her in the TARDIS. Yes, ladies and gentlemen, River Song is amazing.
Just who is she? She knows the ancient Gallifreyan language (enough at least to inscribe "Hello, Sweetie!" on the Byzantium's Home Box), she can pilot the TARDIS better than the Doctor himself, and she knows details about the Doctor that point to some future relationship between the two. (In her last appearance, Season Four's "The Forest of the Dead," also written by Moffat, River appears to know the Doctor's true name.)
But there's also something else going on here as well, something that River is keeping concealed from the Doctor. Octavian whispers something to River about the Doctor perhaps not helping her if he knew the truth... and the fact that she appears to be a released prisoner helping the church on this mission. What crime could she have committed? Hmmm...
And then there are the Angels themselves. While their appearance in "Blink" was haunting, here the Weeping Angels are even more powerful, able to transform themselves from an image of an Angel into an actual Angel, as poor Amy Pond discovers when she's trapped inside the surveillance center and is watching the Angel move, even though it's been filmed on a four-second timeloop. I thought that sequence was brilliantly shot and acted as Amy becomes aware of her predicament and is able to save herself without the Doctor's help. However, her belief that she has something in her eye makes me think that she's not out of the woods by a longshot. If an image of an Angel can become an Angel and the creature imprinted itself on her eye then it may have transmitted itself to her brain... where it begins to manifest inside Amy Pond.
This theory would seem to bear some weight given Amy's later belief that she's turning to stone as the Angel begins to control her mind and perception capabilities. Which would be bad enough if the team weren't already surrounded by an army of Weeping Angels, the statues within the maze of the dead, which have already killed off the majority of their military cleric team. The crash of the Byzantium wasn't an accident: it was a rescue mission. The Angel aboard that ship was looking to free its brethren trapped in the maze of the dead on Alfava Metraxis.
And now they've lured the Doctor, Amy, and River into a deadly trap. But, as the Doctor says, there's one thing you don't want to put in a trap: him. As he fires a gun (which must be a first for the Doctor), we're left to ponder just how they'll escape the seemingly inescapable trap laid out for them by the army of Weeping Angels surrounding them and draining their lights...
Well, there is that matter of the gravity globe, after all. And, while they might be at the top of the maze, they're right underneath the Byzantium.
All in all, Moffat has delivered a fantastic episode that put the tenuous partnership between the Doctor and Amy in jeopardy, reintroduced both the Weeping Angels and River Song in sensational style, and left me anxious to see just what happens next. Full of atmosphere, tension, and gripping suspense, "The Time of Angels" ranks up there with the very best of Doctor Who, a consummate ghost story of terror, rage, and fear that I can't shake from my mind, days later.
What did you think of this week's episode? Is River Song who she claims to be? What did Octavian mean by his off-hand comment towards River? What's going on with Amy? Head to the comments section to discuss.
On the next episode of Doctor Who ("Flesh and Stone"), trapped among an army of Weeping Angels, the Doctor and his friends must try to escape through the wreckage of a crashed space liner, but in the forest vault, Amy Pond finds herself under a yet more deadly attack.
Labels: BBC America Doctor Who From Across the Pond
The CineManiac said…
Stupid Ethics and attorney brain keeping me from just downloading the episode illegally from one of the many torrent sites. I want to watch the next episode!!!!!
This episode was great and I loved the scene with Amy and the video, it was brilliant. Although I would love to have an appearance by Sally Sparrow!
I am not a River Song fan. I felt like she didn't fit with Tennant, and she really doesn't fit with Smith. But - she was far less annoying this go round. She may grow on me yet.
Did you call her "River Pond" on purpose because I fully intend on calling her that from now on.
As for the Angels I really loved this go round. They were terrifying. This was by far the best ep of the new season.
Generally a great episode, but not as riveting as Blink, which is still one of my favs.
I'm probably missing something, but why is this Angel killing people? Their M.O. used to be to send their victims back in time. They killed Bob to rip out his cerebral cortex, fine, but why kill the other two?
thefarmersdaughter said…
I still haven't been able to sit through an episode with the new Dr. yet. It's lost a little bit of...something...for me that I can't explain. I like Amy fine, and Matt Smith is better than I thought he would be.
I started this episode (I LOVE the weeping angels) but didn't finish it.
I miss David Tennant and I'm not on board with the new writing yet. I'll keep trying though.
Hadley said…
Blink is still one of my favorite episodes of all time but this one was also thrilling and I was happy (or terrified?) to see more of the Weeping Angels as they certainly are one of the most creepy creations ever to grace the small screen. The scene with Amy and the video image of the Angel was definitely the most intense of the episode!
Oscar Gordon said…
I'm not a long enough fan to know the whole mythology of the Doctor, so this may not be possible, but I always had the feeling that River was in fact a later incarnation of the Doctor himself. It would explain how she knows his real name, and why she had a sonic screwdriver in the Library.
This Article addresses the recent pretrial diversion scheme undertaken by the Department of Justice in conjunction with its Foreign Corrupt Practices Act Pilot Program—specifically, "declinations with disgorgement." Pursuant to the Pilot Program, the Department of Justice declined to prosecute or even continue an investigation, provided the company disgorge its alleged ill-gotten gains. This Article dissects both the purpose of, and terminology used in, declinations with disgorgement and argues that this novel and creative pretrial diversion is a dangerous conflation of legal remedial theories and terms. A criminal disposition cannot be a declination with attendant penalties because either illegal activity occurred or it did not; prosecutorial discretion does not allow an "in-between" option of declination while simultaneously requiring disgorgement. Calling these dispositions "declinations" and the penalties associated therewith "disgorgement" is a wild misuse of the terms, which creates a crisis in the expressive function of the Foreign Corrupt Practices Act and in the legal lexicon itself.
Karen Woody, "Declinations with Disgorgement" in FCPA Enforcement, 51 U. Mich. J. L. Reform 269 (2018).
Fury over new north terrorist attacks
Dissident Republicans are the chief suspects in the murder of a policeman in Craigavon, Co. Armagh, 48 hours after two unarmed British soldiers were shot dead as they accepted a pizza delivery at Massereene barracks near Antrim on Saturday night.
Paddy Clancy
Soldier Mark Quinsey with members of his family.
Two other soldiers and two civilian pizza delivery men — one a Pole aged in his thirties — were injured in the barracks attack for which responsibility was admitted by the Real IRA, whose members carried out the 1998 Omagh bombing in which 29 people were massacred.
Making matters even worse in the North, on Monday evening a Police Service of Northern Ireland (PSNI) constable was shot and killed on Monday in Craigavon, Co. Armagh.
The Continuity IRA, an even smaller splinter Republican movement than the Real IRA, claimed responsibility for the attack.
"As long as there is British involvement in Ireland, these attacks will continue," a coded message from the group said.
A 17-year-old and a 37-year-old were arrested on Tuesday and was being questioned by police in connection with that attack.
The cop, Constable Stephen Carroll, who was answering a call from what PSNI chief Sir Hugh Orde called "a vulnerable person" — a woman who said a brick had been thrown through her window — was shot dead in a mainly Catholic estate in the Lismore Manor area of Craigavon.
Carroll, a 48-year-old with three grandchildren, was two years from retirement. He was the first member of the PSNI to be killed by a terrorist since the force was established eight years ago, taking over from the RUC.
Fury over the murders of the soldiers was widespread, with Sinn Fein joining in the condemnation. Party leader Gerry Adams described the killings as "wrong and counter-productive." He said there should be an end to "actions" like the one on Antrim and added, "The popular will is for peaceful and democratic change. Sinn Fein has a responsibility to be consistent. The logic of this is that we support the police in the apprehension of those responsible."
Alex Maskey, a Sinn Fein Policing Board member, said that the murder of the policeman was "yet another awful tragedy" which prompted "disgust and anger."
The deaths of the soldiers and policeman were the first terrorist murders of security forces members in almost 12 years.
The soldiers, Mark Quinsey and Cengiz Azimkar, both in their twenties, were due to leave Northern Ireland to serve in Afghanistan just hours after they were gunned down. Azimkar was said to have liked Northern Ireland so much that he was thinking of eventually settling there.
The North's First Minister Peter Robinson and Deputy First Minister Martin McGuinness postponed a planned visit to the U.S. that was to have started early in the week following the attacks. Both leaders traveled to the U.S. on Tuesday, and will be in Washington, D.C. on St. Patrick's Day where it is expected that they will meet President Barack Obama.
Speaking in Belfast on Tuesday with Robinson and Orde, McGuinness was unequivocal in his condemnation of the murders of both the soldiers and the police officer.
"The people who were responsible for killing Constable Carroll are hoping that we will lose our nerve, are hoping to destroy the peace process and to destroy the political institutions that are overwhelmingly supported by the people of this country. We are absolutely dedicated and committed to ensure that they will not succeed. We're absolutely united in our approach, in our opposition to what they are doing," he said.
"And I want to join with Peter to also wholeheartedly appeal to everyone and anyone who has any information whatsoever about these killings to pass that information to the police, north and south. We all know that because of the organization of these groups that this has to be an all-island approach in order to defeat them. And what we need to do is pledge our support to Hugh Orde, to the police forces or services north and south, in their work of combating the activities of these groups."
Robinson said, "This is a battle of wills between the political class and the evil gunmen. The political class will win. We are absolutely determined that these people will not direct us, will not frame our agenda and will not cause us to retreat from the steps that we believe to be right to take this country forward."
British Prime Minister Gordon Brown visited the scene of the soldiers' deaths on Monday and was urged by Robinson "to take whatever steps are necessary to ensure that innocent life is protected in the face of this terrorist threat across Northern Ireland."
At Stormont just before the policeman's murder, Brown, Robinson and McGuinness, in a strong show of political unity aimed at easing fears that political stability is threatened, insisted the soldiers' killings would not be allowed to derail the political process.
Robinson said, "I am sickened at the attempts by terrorists to destabilize Northern Ireland. Those responsible will not be allowed to drag our province back to the past."
The attacks followed last week's announcement by Orde that he was redeploying a small number of specialist British Army Intelligence personnel in the North, a move that was criticized by Sinn Fein.
McGuinness said after the soldiers' murders that he stood by the criticism, saying there had to be concern over more soldiers being brought into the situation in the North. He described it as "a step back in time."
But McGuinness, who called the murderers "traitors," also emphasized his personal determination to defend and build the peace process and he urged everybody to support the police service and to "make politics work."
McGuinness said the attacks were carried out by members of "micro-groups who are living in cloud cuckoo-land."
He said, "We have always known there was going to be a challenge from people who are hostile to the peace process and have their own agendas." He called on Orde to ensure that dissident Republicans do not get their way.
In the Irish Republic President Mary McAleese used exceptionally strong language in her condemnation. She spoke of her "outrage" at the murder of a PSNI officer and said that his killers and those who murdered the soldiers were acting "in utter defiance" of the clearly expressed will of the Irish people.
"Dissident Republicanism has been left far behind. The dissidents are now a tiny, isolated band of throwbacks, using tired, old, failed strategies," she said.
McAleese urged anybody with information on the attacks to tell what they know. She said, "I hope those who know them will reflect on what is at stake here and be persuaded by the massive public and political solidarity which has greeted the cowardly acts at Massareene and now Craigavon to lift the phone, tell the police, join the peacemakers and put an end to this hell on earth."
In Leinster House, seat of the Dail (Parliament) in Dublin, there was also cross-party strong condemnation of the murders, with Taoiseach (Prime Minister) Brian Cowen saying he looked forward to seeing those responsible brought quickly to justice.
"Violence has been utterly rejected by the people of this island, both north and south," he said. "A tiny group of evil people cannot and will not undermine the will of the people of Ireland to live in peace together."
Fine Gael leader Enda Kenny, speaking after the soldiers' murders, said, "We have grown accustomed to a peace that has allowed all communities on the island to look to the future with optimism. This brutal assault is a throwback to a period that we all hoped was in our past."
A minute's silence as a mark of respect to the memory of the dead soldiers was observed at the
The Romania tournament, known as the BRD Nastase Tiriac Trophy, is a men's tennis tournament on the ATP World Tour, played since 1993. It is part of the Romanian Tennis Series, a circuit made up of the Arad Challenger, the Brașov Challenger and the Romanian Open (an ATP 250 tournament). It was formerly called the BCR Open Romania. Since 2011 the tournament has carried the name BRD Nastase Tiriac Trophy in honour of its two tennis icons, Ilie Năstase and Ion Țiriac.
It is played on the clay courts of the BNR Arenas in Bucharest. Until 2011 it was held in mid-September and was then the last clay-court tournament of the season. Since 2012 it has been moved to late April and thus comes at the start of the clay-court season.
A player who wins three consecutive titles also receives the crystal "Tiriac-Nastase Trophy". The only player to have won several titles is the Frenchman Gilles Simon (in 2007, 2008 and 2012).
In 2017 it disappeared from the ATP calendar, replaced by the Budapest tournament.
Past winners
Singles
Doubles
External links
Official tournament website
Navigation
Jacobine-Sofie Gjertz, known as Marie (Christiania (Norway), 13 July 1819 – Paris, 1862), was a Norwegian writer, composer and pianist.
She studied music in Copenhagen and Paris, and in 1844 made a concert tour of America. The following year she married a French merchant and converted to Catholicism, at which point she adopted the name Marie. In 1857 she settled in Paris as a music teacher.
She published La musique au point de vue moral et religieux (1859) and the novels L'enthoussiasme (1861) and Gabrielle. She contributed to the newspapers Croisé and L'Univers.
Bibliography
Enciclopèdia Espasa, volume 26, p. 240. ()
Norwegian writers
Norwegian composers
Deaths in Paris
Artists from Oslo
Norwegian pianists
Hammond Elementary School will hold its Spring Garage Sale from 9 a.m. to 1 p.m. Saturday. The rain date is May 4.
Anyone with gently used goods is invited to reserve a space to sell. Forms to reserve space are available at the school or online at www.hespta.org/garage-sale.html.
The school is on Aladdin Drive, off Gorman Road. Information: Cathy Barrett, 301-604-1971.
The Ebony Classical Music Society will hold its fifth annual benefit Spring Concert at 4 p.m. May 4 at Christ Episcopal Church, 6800 Oakland Mills Road. The nonprofit organization promotes the study, appreciation and performance of classical music among talented singers.
Winners of the organization's annual competition will perform at the concert.
Proceeds from ticket sales will help fund scholarships. Information or to purchase tickets: 301-596-6099.
The Vikatadamshtri Buddhist Center of Baltimore will offer a meditation seminar, "Settling the Mind," from 9:30 a.m. to 1 p.m. Saturday at the Oakland Mills Interfaith Center, 5885 Robert Oliver Place.
The cost is $30. Information: 410-243-3837 or visit www.MeditationMd.org.
The Columbia Association Art Center, 6100 Foreland Garth, will offer a Bead-Stringing workshop, with artist Pat Baker, from 10 a.m. to 4 p.m. May 11.
Participants can learn basic techniques for creating a bracelet, necklace and earrings.
Tuition is $60 for residents; $70 for nonresidents.
Information or to register: 410-730-0075.
The group will discuss Erasure by Percival Everett.
The Good Reads book club will meet at 7 p.m. May 14 to discuss March by Geraldine Brooks. Books are available at the branch.
The library's Morning Books with Coffee club will meet at 10:30 a.m. May 28. The group will discuss Away by Amy Bloom. Books are available at the branch.
Registration is not required. Information: 410-313-7700.
/*
* This file contains the system call numbers.
*/
#define TARGET_NR_restart_syscall 0
#define TARGET_NR_exit 1
#define TARGET_NR_fork 2
#define TARGET_NR_read 3
#define TARGET_NR_write 4
#define TARGET_NR_open 5
#define TARGET_NR_close 6
#define TARGET_NR_waitpid 7
#define TARGET_NR_creat 8
#define TARGET_NR_link 9
#define TARGET_NR_unlink 10
#define TARGET_NR_execve 11
#define TARGET_NR_chdir 12
#define TARGET_NR_time 13
#define TARGET_NR_mknod 14
#define TARGET_NR_chmod 15
#define TARGET_NR_lchown 16
#define TARGET_NR_break 17
#define TARGET_NR_oldstat 18
#define TARGET_NR_lseek 19
#define TARGET_NR_getpid 20
#define TARGET_NR_mount 21
#define TARGET_NR_umount 22
#define TARGET_NR_setuid 23
#define TARGET_NR_getuid 24
#define TARGET_NR_stime 25
#define TARGET_NR_ptrace 26
#define TARGET_NR_alarm 27
#define TARGET_NR_oldfstat 28
#define TARGET_NR_pause 29
#define TARGET_NR_utime 30
#define TARGET_NR_stty 31
#define TARGET_NR_gtty 32
#define TARGET_NR_access 33
#define TARGET_NR_nice 34
#define TARGET_NR_ftime 35
#define TARGET_NR_sync 36
#define TARGET_NR_kill 37
#define TARGET_NR_rename 38
#define TARGET_NR_mkdir 39
#define TARGET_NR_rmdir 40
#define TARGET_NR_dup 41
#define TARGET_NR_pipe 42
#define TARGET_NR_times 43
#define TARGET_NR_prof 44
#define TARGET_NR_brk 45
#define TARGET_NR_setgid 46
#define TARGET_NR_getgid 47
#define TARGET_NR_signal 48
#define TARGET_NR_geteuid 49
#define TARGET_NR_getegid 50
#define TARGET_NR_acct 51
#define TARGET_NR_umount2 52
#define TARGET_NR_lock 53
#define TARGET_NR_ioctl 54
#define TARGET_NR_fcntl 55
#define TARGET_NR_mpx 56
#define TARGET_NR_setpgid 57
#define TARGET_NR_ulimit 58
#define TARGET_NR_oldolduname 59
#define TARGET_NR_umask 60
#define TARGET_NR_chroot 61
#define TARGET_NR_ustat 62
#define TARGET_NR_dup2 63
#define TARGET_NR_getppid 64
#define TARGET_NR_getpgrp 65
#define TARGET_NR_setsid 66
#define TARGET_NR_sigaction 67
#define TARGET_NR_sgetmask 68
#define TARGET_NR_ssetmask 69
#define TARGET_NR_setreuid 70
#define TARGET_NR_setregid 71
#define TARGET_NR_sigsuspend 72
#define TARGET_NR_sigpending 73
#define TARGET_NR_sethostname 74
#define TARGET_NR_setrlimit 75
#define TARGET_NR_getrlimit 76
#define TARGET_NR_getrusage 77
#define TARGET_NR_gettimeofday 78
#define TARGET_NR_settimeofday 79
#define TARGET_NR_getgroups 80
#define TARGET_NR_setgroups 81
#define TARGET_NR_select 82
#define TARGET_NR_symlink 83
#define TARGET_NR_oldlstat 84
#define TARGET_NR_readlink 85
#define TARGET_NR_uselib 86
#define TARGET_NR_swapon 87
#define TARGET_NR_reboot 88
#define TARGET_NR_readdir 89
#define TARGET_NR_mmap 90
#define TARGET_NR_munmap 91
#define TARGET_NR_truncate 92
#define TARGET_NR_ftruncate 93
#define TARGET_NR_fchmod 94
#define TARGET_NR_fchown 95
#define TARGET_NR_getpriority 96
#define TARGET_NR_setpriority 97
#define TARGET_NR_profil 98
#define TARGET_NR_statfs 99
#define TARGET_NR_fstatfs 100
#define TARGET_NR_ioperm 101
#define TARGET_NR_socketcall 102
#define TARGET_NR_syslog 103
#define TARGET_NR_setitimer 104
#define TARGET_NR_getitimer 105
#define TARGET_NR_stat 106
#define TARGET_NR_lstat 107
#define TARGET_NR_fstat 108
#define TARGET_NR_olduname 109
#define TARGET_NR_iopl 110
#define TARGET_NR_vhangup 111
#define TARGET_NR_idle 112
#define TARGET_NR_vm86 113
#define TARGET_NR_wait4 114
#define TARGET_NR_swapoff 115
#define TARGET_NR_sysinfo 116
#define TARGET_NR_ipc 117
#define TARGET_NR_fsync 118
#define TARGET_NR_sigreturn 119
#define TARGET_NR_clone 120
#define TARGET_NR_setdomainname 121
#define TARGET_NR_uname 122
#define TARGET_NR_modify_ldt 123
#define TARGET_NR_adjtimex 124
#define TARGET_NR_mprotect 125
#define TARGET_NR_sigprocmask 126
#define TARGET_NR_create_module 127
#define TARGET_NR_init_module 128
#define TARGET_NR_delete_module 129
#define TARGET_NR_get_kernel_syms 130
#define TARGET_NR_quotactl 131
#define TARGET_NR_getpgid 132
#define TARGET_NR_fchdir 133
#define TARGET_NR_bdflush 134
#define TARGET_NR_sysfs 135
#define TARGET_NR_personality 136
#define TARGET_NR_afs_syscall 137 /* Syscall for Andrew File System */
#define TARGET_NR_setfsuid 138
#define TARGET_NR_setfsgid 139
#define TARGET_NR__llseek 140
#define TARGET_NR_getdents 141
#define TARGET_NR__newselect 142
#define TARGET_NR_flock 143
#define TARGET_NR_msync 144
#define TARGET_NR_readv 145
#define TARGET_NR_writev 146
#define TARGET_NR_getsid 147
#define TARGET_NR_fdatasync 148
#define TARGET_NR__sysctl 149
#define TARGET_NR_mlock 150
#define TARGET_NR_munlock 151
#define TARGET_NR_mlockall 152
#define TARGET_NR_munlockall 153
#define TARGET_NR_sched_setparam 154
#define TARGET_NR_sched_getparam 155
#define TARGET_NR_sched_setscheduler 156
#define TARGET_NR_sched_getscheduler 157
#define TARGET_NR_sched_yield 158
#define TARGET_NR_sched_get_priority_max 159
#define TARGET_NR_sched_get_priority_min 160
#define TARGET_NR_sched_rr_get_interval 161
#define TARGET_NR_nanosleep 162
#define TARGET_NR_mremap 163
#define TARGET_NR_setresuid32 164
#define TARGET_NR_getresuid32 165
#define TARGET_NR_query_module 166
#define TARGET_NR_poll 167
#define TARGET_NR_nfsservctl 168
#define TARGET_NR_setresgid32 169
#define TARGET_NR_getresgid32 170
#define TARGET_NR_prctl 171
#define TARGET_NR_rt_sigreturn 172
#define TARGET_NR_rt_sigaction 173
#define TARGET_NR_rt_sigprocmask 174
#define TARGET_NR_rt_sigpending 175
#define TARGET_NR_rt_sigtimedwait 176
#define TARGET_NR_rt_sigqueueinfo 177
#define TARGET_NR_rt_sigsuspend 178
#define TARGET_NR_pread64 179
#define TARGET_NR_pwrite64 180
#define TARGET_NR_chown 181
#define TARGET_NR_getcwd 182
#define TARGET_NR_capget 183
#define TARGET_NR_capset 184
#define TARGET_NR_sigaltstack 185
#define TARGET_NR_sendfile 186
#define TARGET_NR_getpmsg 187 /* some people actually want streams */
#define TARGET_NR_putpmsg 188 /* some people actually want streams */
#define TARGET_NR_vfork 189
#define TARGET_NR_ugetrlimit 190 /* SuS compliant getrlimit */
#define TARGET_NR_readahead 191
#if !defined(TARGET_PPC64) || defined(TARGET_ABI32)
#define TARGET_NR_mmap2 192
#define TARGET_NR_truncate64 193
#define TARGET_NR_ftruncate64 194
#define TARGET_NR_stat64 195
#define TARGET_NR_lstat64 196
#define TARGET_NR_fstat64 197
#endif
#define TARGET_NR_pciconfig_read 198
#define TARGET_NR_pciconfig_write 199
#define TARGET_NR_pciconfig_iobase 200
#define TARGET_NR_multiplexer 201
#define TARGET_NR_getdents64 202
#define TARGET_NR_pivot_root 203
#if !defined(TARGET_PPC64) || defined(TARGET_ABI32)
#define TARGET_NR_fcntl64 204
#endif
#define TARGET_NR_madvise 205
#define TARGET_NR_mincore 206
#define TARGET_NR_gettid 207
#define TARGET_NR_tkill 208
#define TARGET_NR_setxattr 209
#define TARGET_NR_lsetxattr 210
#define TARGET_NR_fsetxattr 211
#define TARGET_NR_getxattr 212
#define TARGET_NR_lgetxattr 213
#define TARGET_NR_fgetxattr 214
#define TARGET_NR_listxattr 215
#define TARGET_NR_llistxattr 216
#define TARGET_NR_flistxattr 217
#define TARGET_NR_removexattr 218
#define TARGET_NR_lremovexattr 219
#define TARGET_NR_fremovexattr 220
#define TARGET_NR_futex 221
#define TARGET_NR_sched_setaffinity 222
#define TARGET_NR_sched_getaffinity 223
/* 224 currently unused */
#define TARGET_NR_tuxcall 225
#if !defined(TARGET_PPC64) || defined(TARGET_ABI32)
#define TARGET_NR_sendfile64 226
#endif
#define TARGET_NR_io_setup 227
#define TARGET_NR_io_destroy 228
#define TARGET_NR_io_getevents 229
#define TARGET_NR_io_submit 230
#define TARGET_NR_io_cancel 231
#define TARGET_NR_set_tid_address 232
#define TARGET_NR_fadvise64 233
#define TARGET_NR_exit_group 234
#define TARGET_NR_lookup_dcookie 235
#define TARGET_NR_epoll_create 236
#define TARGET_NR_epoll_ctl 237
#define TARGET_NR_epoll_wait 238
#define TARGET_NR_remap_file_pages 239
#define TARGET_NR_timer_create 240
#define TARGET_NR_timer_settime 241
#define TARGET_NR_timer_gettime 242
#define TARGET_NR_timer_getoverrun 243
#define TARGET_NR_timer_delete 244
#define TARGET_NR_clock_settime 245
#define TARGET_NR_clock_gettime 246
#define TARGET_NR_clock_getres 247
#define TARGET_NR_clock_nanosleep 248
#define TARGET_NR_swapcontext 249
#define TARGET_NR_tgkill 250
#define TARGET_NR_utimes 251
#define TARGET_NR_statfs64 252
#define TARGET_NR_fstatfs64 253
#if !defined(TARGET_PPC64) || defined(TARGET_ABI32)
#define TARGET_NR_fadvise64_64 254
#endif
#define TARGET_NR_rtas 255
#define TARGET_NR_sys_debug_setcontext 256
/* Number 257 is reserved for vserver */
#define TARGET_NR_migrate_pages 258
#define TARGET_NR_mbind 259
#define TARGET_NR_get_mempolicy 260
#define TARGET_NR_set_mempolicy 261
#define TARGET_NR_mq_open 262
#define TARGET_NR_mq_unlink 263
#define TARGET_NR_mq_timedsend 264
#define TARGET_NR_mq_timedreceive 265
#define TARGET_NR_mq_notify 266
#define TARGET_NR_mq_getsetattr 267
#define TARGET_NR_kexec_load 268
#define TARGET_NR_add_key 269
#define TARGET_NR_request_key 270
#define TARGET_NR_keyctl 271
#define TARGET_NR_waitid 272
#define TARGET_NR_ioprio_set 273
#define TARGET_NR_ioprio_get 274
#define TARGET_NR_inotify_init 275
#define TARGET_NR_inotify_add_watch 276
#define TARGET_NR_inotify_rm_watch 277
#define TARGET_NR_spu_run 278
#define TARGET_NR_spu_create 279
#define TARGET_NR_pselect6 280
#define TARGET_NR_ppoll 281
#define TARGET_NR_unshare 282
#define TARGET_NR_splice 283
#define TARGET_NR_tee 284
#define TARGET_NR_vmsplice 285
#define TARGET_NR_openat 286
#define TARGET_NR_mkdirat 287
#define TARGET_NR_mknodat 288
#define TARGET_NR_fchownat 289
#define TARGET_NR_futimesat 290
#if defined(TARGET_PPC64) && !defined(TARGET_ABI32)
#define TARGET_NR_newfstatat 291
#else
#define TARGET_NR_fstatat64 291
#endif
#define TARGET_NR_unlinkat 292
#define TARGET_NR_renameat 293
#define TARGET_NR_linkat 294
#define TARGET_NR_symlinkat 295
#define TARGET_NR_readlinkat 296
#define TARGET_NR_fchmodat 297
#define TARGET_NR_faccessat 298
#define TARGET_NR_get_robust_list 299
#define TARGET_NR_set_robust_list 300
#define TARGET_NR_move_pages 301
#define TARGET_NR_getcpu 302
#define TARGET_NR_epoll_pwait 303
#define TARGET_NR_utimensat 304
#define TARGET_NR_signalfd 305
#define TARGET_NR_timerfd 306
#define TARGET_NR_eventfd 307
#define TARGET_NR_sync_file_range2 308
#define TARGET_NR_fallocate 309
#define TARGET_NR_subpage_prot 310
#define TARGET_NR_timerfd_settime 311
#define TARGET_NR_timerfd_gettime 312
#define TARGET_NR_signalfd4 313
#define TARGET_NR_eventfd2 314
#define TARGET_NR_epoll_create1 315
#define TARGET_NR_dup3 316
#define TARGET_NR_pipe2 317
#define TARGET_NR_inotify_init1 318
#define TARGET_NR_perf_event_open 319
#define TARGET_NR_preadv 320
#define TARGET_NR_pwritev 321
#define TARGET_NR_rt_tgsigqueueinfo 322
#define TARGET_NR_fanotify_init 323
#define TARGET_NR_fanotify_mark 324
#define TARGET_NR_prlimit64 325
#define TARGET_NR_socket 326
#define TARGET_NR_bind 327
#define TARGET_NR_connect 328
#define TARGET_NR_listen 329
#define TARGET_NR_accept 330
#define TARGET_NR_getsockname 331
#define TARGET_NR_getpeername 332
#define TARGET_NR_socketpair 333
#define TARGET_NR_send 334
#define TARGET_NR_sendto 335
#define TARGET_NR_recv 336
#define TARGET_NR_recvfrom 337
#define TARGET_NR_shutdown 338
#define TARGET_NR_setsockopt 339
#define TARGET_NR_getsockopt 340
#define TARGET_NR_sendmsg 341
#define TARGET_NR_recvmsg 342
#define TARGET_NR_recvmmsg 343
#define TARGET_NR_accept4 344
#define TARGET_NR_name_to_handle_at 345
#define TARGET_NR_open_by_handle_at 346
#define TARGET_NR_clock_adjtime 347
#define TARGET_NR_syncfs 348
#define TARGET_NR_sendmmsg 349
#define TARGET_NR_setns 350
#define TARGET_NR_process_vm_readv 351
#define TARGET_NR_process_vm_writev 352
#define TARGET_NR_finit_module 353
#define TARGET_NR_kcmp 354
1. JEE Main 2021 (Online) 18th March Evening Shift, Numerical (+4 / −1)
The projectile motion of a particle of mass 5 g is shown in the figure. The initial velocity of the particle is $5\sqrt 2$ ms$^{-1}$ and the air resistance is assumed to be negligible. The magnitude of the change in momentum between the points A and B is x $\times$ 10$^{-2}$ kg ms$^{-1}$. The value of x, to the nearest integer, is __________.
2. JEE Main 2021 (Online) 18th March Morning Shift, Numerical (+4 / −1)
A ball of mass 10 kg moving with a velocity $10\sqrt 3$ m/s along the x-axis hits another ball of mass 20 kg which is at rest. After the collision, the first ball comes to rest while the second ball disintegrates into two equal pieces. One piece starts moving along the y-axis with a speed of 10 m/s. The second piece starts moving at an angle of 30$^\circ$ with respect to the x-axis. The velocity of the ball moving at 30$^\circ$ with the x-axis is x m/s. The configuration of pieces after the collision is shown in the figure below. The value of x, to the nearest integer, is ____________.
3. JEE Main 2021 (Online) 17th March Evening Shift, Numerical (+4 / −1)
The disc of mass M with uniform surface mass density $\sigma$ is shown in the figure. The centre of mass of the quarter disc (the shaded area) is at the position $\left({x \over 3}{a \over \pi}, {x \over 3}{a \over \pi}\right)$, where x is _____________. (Round off to the nearest integer.) [a is an area as shown in the figure]
A force $\overrightarrow F = 4\widehat i + 3t\widehat j + 4\widehat k$ is applied at the intersection point of the plane x = 2 and the x-axis. The magnitude of the torque of this force about the point (2, 3, 4) is ___________. (Round off to the nearest integer.)
Curb Your Enthusiasm: A.I. Will Not Save Us From Phishing
Phishing is by far the biggest security threat to any organization. It's the most common source of attacks and has been at the root of spectacular cyber incidents like the Sony hack and the Petya ransomware attack. Due to the grave danger that phishing poses, remedies are highly sought after. Lately, companies have turned to machine learning and artificial intelligence, the supposed solutions to all societal problems, to combat phishing.
It's a seductive proposition: AI-based products promise flawless security – "it's a machine, it doesn't make mistakes!" - alongside convenience, since they automate the process. One implementation of this is Microsoft's Office 365 "Advanced Threat Protection" security, which checks emails for words and phrases that could hint at malicious activity, such as impersonating legitimate companies (e.g. Apple or Microsoft) and sentences asking for payments or password resets.
However, researchers at Avanan revealed on June 19th how hackers can bypass Microsoft's tool by adding random words to the email body at font size zero ('ZeroFont'), which will appear to the scanner as unstructured, non-malicious gibberish. To the recipient, the ZeroFont text is invisible, thus creating an entirely different email.
You read that right: a technique used by students to pad their assignment papers is enough to fool a cutting-edge algorithm.
This is not the first time scammers have duped algorithms using simplistic methods: a few weeks earlier, Avanan also discovered that something similar was possible with URL links.
It is likely that hackers will continue to study how these bots function and find new ways to fool them. These criminals are creative, adaptable and unpredictable, whilst the machine defender is uncreative, inflexible and predictable. As such, these tools could lead to more breaches, as they teach consumers to switch off their scepticism whilst the AI serves up scams as "trusted messages".
Luckily, there are simple behaviours and habits that humans can learn to spot phishing attempts. For example, it is not possible to change the font in the sender address field. Therefore, one look at the sender lets you identify a phishing mail, even if it slipped through the filter.
In order to truly protect yourself against phishing, learning ways to spot a phishing mail rather than relying on a machine will prove to be more sustainable. It takes humans to outsmart other humans.
Philipp Blaas
Co-founder, social engineering specialist and amateur philosopher.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Maverick;
namespace MavHttp
{
class FiniteState
{
public FiniteState(params object[] states)
{
lock (sync)
{
_states = new SortedSet<object>(states);
}
}
public override int GetHashCode()
{
// activeState is null until TransitionTo is called; avoid a NullReferenceException.
return activeState?.GetHashCode() ?? 0;
}
public override bool Equals(object obj)
{
return object.Equals(activeState,obj);
}
public static bool operator ==(FiniteState x, object y)
{
// Guard against a null left operand before dereferencing it.
return x is null ? y is null : object.Equals(x.activeState, y);
}
public static bool operator !=(FiniteState x, object y)
{
return !(x == y);   // was "return x == y;", which made != behave like ==
}
public void TransitionTo(object state)
{
activeState = state;
}
object sync = new object();
readonly SortedSet<object> _states;
object _activeState;
object activeState
{
get { lock (sync) { return _activeState; } }
set
{
lock (sync)
{
if (_states.Contains(value)) { _activeState = value; }
else
{
throw new InvalidOperationException("Invalid state.");
}
}
}
}
}
}