Dataset columns: text (string, lengths 14 to 5.77M), meta (dict), __index_level_0__ (int64, range 0 to 9.97k)
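Read as a schema, the header above says each row below pairs a text string with a meta dict and an integer index. A minimal sketch of one row as a plain Python dict — field names are from the schema, the values are illustrative:

```python
# One row of the dump, modeled as a plain dict.
# Field names come from the schema header; values here are illustrative only.
row = {
    "text": "Anne Valerie Aflalo (born 7 October 1976 in Malmö) is a Swedish model...",
    "meta": {"redpajama_set_name": "RedPajamaWikipedia"},
    "__index_level_0__": 1180,
}

# The schema constrains types, not values:
assert isinstance(row["text"], str)
assert isinstance(row["meta"], dict)
assert isinstance(row["__index_level_0__"], int)
```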
Anne Valerie Aflalo (born 7 October 1976 in Malmö) is a Swedish model, beauty queen and fashion designer. Aflalo is perhaps best known for being crowned Miss Sweden 2000 and representing her country at Miss Universe 2000 in Nicosia, Cyprus. Since 2005, she has designed clothing for her own label "Valerie." Aflalo was also the host of a morning radio talk show on NRJ. Valerie Aflalo is the daughter of Pierre Maurice Aflalo and Ann-Christine Aflalo. She studied at Borgarskolan in Malmö in 1995.

References

External links
Valerie-Stockholm
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,180
Q: kserve updating from 0.7 to 0.9. My .mar file works on 0.7 but not on 0.9. Was able to run the example without issue on 0.9

I have been tasked with updating kserve from 0.7 to 0.9. Our company's .mar files run fine on 0.7, and when I update to kserve 0.9 the pods are brought up without issue. However, when a request is sent it returns a 500 error. The logs are given below.

Model being used: pytorch
Deployment type: RawDeployment
Kubernetes version: 1.25

Defaulted container "kserve-container" out of: kserve-container, storage-initializer (init)
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-11-18T13:37:44,001 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2022-11-18T13:37:44,203 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.6.0
TS Home: /usr/local/lib/python3.8/dist-packages
Current directory: /home/model-server
Temp directory: /home/model-server/tmp
Number of GPUs: 0
Number of CPUs: 1
Max heap size: 494 M
Python executable: /usr/bin/python
Config file: /mnt/models/config/config.properties
Inference address: http://0.0.0.0:8085
Management address: http://0.0.0.0:8085
Metrics address: http://0.0.0.0:8082
Model Store: /mnt/models/model-store
Initial Models: N/A
Log dir: /home/model-server/logs
Metrics dir: /home/model-server/logs
Netty threads: 4
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: true
Metrics report format: prometheus
Enable metrics API: true
Workflow Store: /mnt/models/model-store
Model config: N/A
2022-11-18T13:37:44,208 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2022-11-18T13:37:44,288 [INFO ] main org.pytorch.serve.snapshot.SnapshotManager - Started restoring
2022-11-18T13:37:44,297 [INFO ] main org.pytorch.serve.snapshot.SnapshotManager - Validating snapshot startup.cfg
2022-11-18T13:37:44,298 [INFO ] main org.pytorch.serve.snapshot.SnapshotManager - Snapshot startup.cfg validated successfully
[I 221118 13:37:46 __main__:75] Wrapper : Model names ['modelname'], inference address http//0.0.0.0:8085, management address http://0.0.0.0:8085, model store /mnt/models/model-store
[I 221118 13:37:46 TorchserveModel:54] kfmodel Predict URL set to 0.0.0.0:8085
[I 221118 13:37:46 TorchserveModel:56] kfmodel Explain URL set to 0.0.0.0:8085
[I 221118 13:37:46 TSModelRepository:30] TSModelRepo is initialized
[I 221118 13:37:46 model_server:150] Registering model: modelname
[I 221118 13:37:46 model_server:123] Listening on port 8080
[I 221118 13:37:46 model_server:125] Will fork 1 workers
[I 221118 13:37:46 model_server:128] Setting max asyncio worker threads as 12
2022-11-18T13:37:54,738 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model modelname
2022-11-18T13:37:54,738 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model modelname
[I 221118 13:40:12 TorchserveModel:78] PREDICTOR_HOST : 0.0.0.0:8085
[E 221118 13:40:12 web:1789] Uncaught exception POST /v1/models/modelname:predict (127.0.0.1)
HTTPServerRequest(protocol='http', host='localhost:5000', method='POST', uri='/v1/models/modelname:predict', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1704, in _execute
    result = await result
  File "/usr/local/lib/python3.8/dist-packages/kserve/handlers/predict.py", line 70, in post
    response = await model(body)
  File "/usr/local/lib/python3.8/dist-packages/kserve/model.py", line 86, in __call__
    response = (await self.predict(request)) if inspect.iscoroutinefunction(self.predict) \
  File "/home/model-server/kserve_wrapper/TorchserveModel.py", line 80, in predict
    response = await self._http_client.fetch(
ConnectionRefusedError: [Errno 111] Connection refused
[E 221118 13:40:12 web:2239] 500 POST /v1/models/modelname:predict (127.0.0.1) 9.66ms
[the identical PREDICTOR_HOST line and traceback then repeat for the requests at 13:40:13 (3.31ms) and 13:40:14 (3.38ms)]

I was not able to find the tornado package (/usr/local/lib/python3.8/dist-packages/tornado/web.py) inside the mar file, so I don't think it is being used directly by the model. I tried deploying our mar file on both kserve 0.7 and 0.9: it works on kserve 0.7 but fails on kserve 0.9. I also deployed the sample inference service (https://kserve.github.io/website/0.9/modelserving/v1beta1/torchserve/#create-the-torchserve-inferenceservice) on kserve 0.9 and it worked as expected. I deployed it on GKE, rke2 and Docker Desktop Kubernetes.
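The repeated ConnectionRefusedError means the kserve wrapper's HTTP client found nothing listening on the predictor address it targets. A minimal probe, run from inside the kserve-container, can confirm which of the ports mentioned in the logs actually have a listener; this is a diagnostic sketch (port numbers copied from the log above, adjust to your config), not a fix:

```python
# Probe the ports that appear in the TorchServe/kserve logs above to see
# which ones actually have a listener inside the container.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False if refused/unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 8080: kserve wrapper, 8082: metrics, 8085: TorchServe inference/management
for port in (8080, 8082, 8085):
    print(port, "open" if port_open("127.0.0.1", port) else "refused")
```

If 8085 reports "refused" while 8080 is open, TorchServe itself never came up (or is bound elsewhere), which would point at the config.properties / model-store contents rather than the kserve wrapper.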
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,226
Lepiota babruzalka is an agaric mushroom of the genus Lepiota in the order Agaricales. Described as new to science in 2009, it is found in Kerala State, India, where it grows on the ground in litterfall around bamboo stems. Fruit bodies have caps that measure up to in diameter, and are covered with reddish-brown scales. The cap is supported by a long and slender stem up to long and thick. One of the distinguishing microscopic features of the species is the variably shaped cystidia found on the edges of the gills.

Taxonomy

The species was first described by Arun Kumar Thirovoth Kottuvetta and P. Manimohan in the journal Mycotaxon in 2009, in a survey of the genus Lepiota in Kerala State in southern India. The holotype collection was made in 2004 in Chelavur, located in the Kozhikode District; it is now kept in the herbarium of Kew Gardens. The specific epithet babruzalka derives from the Sanskrit word for "brown-scaled".

Description

The fruit bodies of Lepiota babruzalka have caps that start out roughly spherical, and as they expand become broadly convex, and eventually flat, with a blunt umbo. The cap attains a diameter of . Its whitish surface is covered with small, reddish-brown, pressed-down scales that are more numerous in the center. The margin is initially curved inward, but straightens out in age, and retains hanging remnants of the partial veil. The gills are white, and free from attachment to the stem. They are crowded together, with two or three tiers of interspersed lamellulae (short gills that do not extend fully from the cap edge to the stem). Viewed with a hand lens, the edges of the gills appear to be fringed. The stem is cylindrical with a bulbous base, initially solid before becoming hollow, and measures long by 1–1.5 mm thick. The stem surface is whitish, but will stain a light brown color if handled. In young fruit bodies, the stems have a whitish, membranous ring on the upper half, but the ring does not last long before disintegrating.
The flesh is thin (up to 1 mm), whitish, and lacks any appreciable odor. Lepiota babruzalka produces a white spore print. Spores are roughly elliptical to somewhat cylindrical, hyaline (translucent), and measure 5.5–10.5 by 3.5–4.5 µm. They are thick-walled and contain a refractive oil droplet. The basidia (spore-bearing cells) are club-shaped, hyaline, and are one- to four-spored with sterigmata up to 8 µm long; the dimensions of the basidia are 15–20 by 7–8 µm. Cheilocystidia (cystidia on the edge of the gill) are plentiful, and can assume a number of shapes, including cylindrical to club-shaped, utriform (like a wineskin bottle), to ventricose-rostrate (where the basal and middle portions are swollen and the apex extends into a beak-like protrusion). The cheilocystidia are thin-walled, and measure 13–32 by 7–12 µm; there are no cystidia on the gill faces (pleurocystidia). The gill tissue is made of thin-walled hyphae containing a septum, which are hyaline to pale yellow, and measure 3–15 µm wide. The cap tissue comprises interwoven, inflated hyphae with widths between 2 and 25 µm. Neither the gill tissue nor the cap tissue show any color reaction when stained with Melzer's reagent. Clamp connections are rare in the hyphae of Lepiota babruzalka.

Similar species

According to the authors, the only Lepiota bearing a close resemblance to L. babruzalka is L. roseoalba, an edible mushroom described by Paul Christoph Hennings in 1891. Found in Africa and Iran, L. roseoalba lacks the reddish-brown scales on the cap, has radial grooves on the cap margin, and its stem is not as slender as those of L. babruzalka.

Habitat and distribution

Fruit bodies of Lepiota babruzalka grow singly or scattered on the ground among decaying leaf litter around the base of bamboo stands. The species has been documented only from Chelavur and Nilambur in the Kozhikode and Malappuram Districts of Kerala State.
As of 2009, there were 22 Lepiota taxa (21 species and 1 variety) known from Kerala, which is recognized as a biodiversity hotspot.

See also
List of Lepiota species

References

External links
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,981
package io.strimzi.systemtest.security; import io.fabric8.kubernetes.api.model.DeletionPropagation; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.Quantity; import io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder; import io.fabric8.kubernetes.api.model.Secret; import io.fabric8.kubernetes.api.model.SecretBuilder; import io.strimzi.api.kafka.model.AclOperation; import io.strimzi.api.kafka.model.CertificateAuthority; import io.strimzi.api.kafka.model.CertificateAuthorityBuilder; import io.strimzi.api.kafka.model.CruiseControlResources; import io.strimzi.api.kafka.model.KafkaConnect; import io.strimzi.api.kafka.model.KafkaConnectResources; import io.strimzi.api.kafka.model.KafkaExporterResources; import io.strimzi.api.kafka.model.KafkaMirrorMaker; import io.strimzi.api.kafka.model.KafkaResources; import io.strimzi.api.kafka.model.listener.KafkaListenerAuthenticationTls; import io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder; import io.strimzi.api.kafka.model.listener.arraylistener.KafkaListenerType; import io.strimzi.operator.cluster.model.Ca; import io.strimzi.operator.common.model.Labels; import io.strimzi.systemtest.AbstractST; import io.strimzi.systemtest.Constants; import io.strimzi.systemtest.Environment; import io.strimzi.systemtest.annotations.ParallelSuite; import io.strimzi.systemtest.kafkaclients.externalClients.ExternalKafkaClient; import io.strimzi.systemtest.annotations.ParallelNamespaceTest; import io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients; import io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder; import io.strimzi.systemtest.resources.crd.KafkaConnectResource; import io.strimzi.systemtest.resources.crd.KafkaMirrorMakerResource; import io.strimzi.systemtest.resources.crd.KafkaResource; import io.strimzi.systemtest.storage.TestStorage; import io.strimzi.systemtest.templates.crd.KafkaConnectTemplates; import 
io.strimzi.systemtest.templates.crd.KafkaMirrorMakerTemplates; import io.strimzi.systemtest.templates.crd.KafkaTemplates; import io.strimzi.systemtest.templates.crd.KafkaTopicTemplates; import io.strimzi.systemtest.templates.crd.KafkaUserTemplates; import io.strimzi.systemtest.utils.ClientUtils; import io.strimzi.systemtest.utils.RollingUpdateUtils; import io.strimzi.systemtest.utils.StUtils; import io.strimzi.systemtest.utils.kafkaUtils.KafkaConnectUtils; import io.strimzi.systemtest.utils.kafkaUtils.KafkaMirrorMakerUtils; import io.strimzi.systemtest.utils.kafkaUtils.KafkaTopicUtils; import io.strimzi.systemtest.utils.kafkaUtils.KafkaUtils; import io.strimzi.systemtest.utils.kubeUtils.controllers.DeploymentUtils; import io.strimzi.systemtest.utils.kubeUtils.objects.PodUtils; import io.strimzi.systemtest.utils.kubeUtils.objects.SecretUtils; import io.strimzi.test.TestUtils; import org.apache.kafka.common.config.SslConfigs; import org.apache.kafka.common.errors.GroupAuthorizationException; import org.apache.kafka.common.security.auth.SecurityProtocol; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; import org.hamcrest.Matchers; import org.junit.jupiter.api.Tag; import org.junit.jupiter.api.extension.ExtensionContext; import java.io.InputStream; import java.security.cert.X509Certificate; import java.time.LocalDateTime; import java.util.ArrayList; import java.util.Arrays; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Random; import java.util.stream.Collectors; import java.util.stream.IntStream; import static io.strimzi.api.kafka.model.KafkaResources.clientsCaCertificateSecretName; import static io.strimzi.api.kafka.model.KafkaResources.clientsCaKeySecretName; import static io.strimzi.api.kafka.model.KafkaResources.clusterCaCertificateSecretName; import static io.strimzi.api.kafka.model.KafkaResources.clusterCaKeySecretName; import static 
io.strimzi.systemtest.Constants.ACCEPTANCE; import static io.strimzi.systemtest.Constants.CONNECT; import static io.strimzi.systemtest.Constants.CONNECT_COMPONENTS; import static io.strimzi.systemtest.Constants.EXTERNAL_CLIENTS_USED; import static io.strimzi.systemtest.Constants.INTERNAL_CLIENTS_USED; import static io.strimzi.systemtest.Constants.MIRROR_MAKER; import static io.strimzi.systemtest.Constants.NODEPORT_SUPPORTED; import static io.strimzi.systemtest.Constants.REGRESSION; import static io.strimzi.systemtest.Constants.ROLLING_UPDATE; import static io.strimzi.systemtest.Constants.SANITY; import static io.strimzi.test.k8s.KubeClusterResource.kubeClient; import static java.util.Collections.singletonMap; import static org.hamcrest.CoreMatchers.is; import static org.hamcrest.CoreMatchers.not; import static org.hamcrest.CoreMatchers.notNullValue; import static org.hamcrest.CoreMatchers.sameInstance; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.containsString; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertThrows; @Tag(REGRESSION) @ParallelSuite class SecurityST extends AbstractST { private static final Logger LOGGER = LogManager.getLogger(SecurityST.class); private static final String OPENSSL_RETURN_CODE = "Verify return code: 0 (ok)"; private final String namespace = testSuiteNamespaceManager.getMapOfAdditionalNamespaces().get(SecurityST.class.getSimpleName()).stream().findFirst().get(); @ParallelNamespaceTest void testCertificates(ExtensionContext extensionContext) { final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); LOGGER.info("Running testCertificates {}", clusterName); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3).build()); LOGGER.info("Check Kafka bootstrap certificate"); 
String outputCertificate = SystemTestCertManager.generateOpenSslCommandByComponent(namespaceName, KafkaResources.tlsBootstrapAddress(clusterName), KafkaResources.bootstrapServiceName(clusterName), KafkaResources.kafkaPodName(clusterName, 0), "kafka", false); LOGGER.info("OPENSSL OUTPUT: \n\n{}\n\n", outputCertificate); verifyCerts(clusterName, outputCertificate, "kafka"); if (!Environment.isKRaftModeEnabled()) { LOGGER.info("Check zookeeper client certificate"); outputCertificate = SystemTestCertManager.generateOpenSslCommandByComponent(namespaceName, KafkaResources.zookeeperServiceName(clusterName) + ":2181", KafkaResources.zookeeperServiceName(clusterName), KafkaResources.kafkaPodName(clusterName, 0), "kafka"); verifyCerts(clusterName, outputCertificate, "zookeeper"); } List<String> kafkaPorts = new ArrayList<>(Arrays.asList("9091", "9093")); List<String> zkPorts = new ArrayList<>(Arrays.asList("2181", "3888")); IntStream.rangeClosed(0, 2).forEach(podId -> { String output; LOGGER.info("Checking certificates for podId {}", podId); for (String kafkaPort : kafkaPorts) { LOGGER.info("Check kafka certificate for port {}", kafkaPort); output = SystemTestCertManager.generateOpenSslCommandByComponentUsingSvcHostname(namespaceName, KafkaResources.kafkaPodName(clusterName, podId), KafkaResources.brokersServiceName(clusterName), kafkaPort, "kafka"); verifyCerts(clusterName, output, "kafka"); } if (!Environment.isKRaftModeEnabled()) { for (String zkPort : zkPorts) { LOGGER.info("Check zookeeper certificate for port {}", zkPort); output = SystemTestCertManager.generateOpenSslCommandByComponentUsingSvcHostname(namespaceName, KafkaResources.zookeeperPodName(clusterName, podId), KafkaResources.zookeeperHeadlessServiceName(clusterName), zkPort, "zookeeper"); verifyCerts(clusterName, output, "zookeeper"); } } }); } // synchronized avoiding data-race (list of string is allocated on the heap), but has different reference on stack it's ok // but Strings parameters provided are not 
created in scope of this method synchronized private static void verifyCerts(String clusterName, String certificate, String component) { List<String> certificateChains = SystemTestCertManager.getCertificateChain(clusterName + "-" + component); assertThat(certificate, containsString(certificateChains.get(0))); assertThat(certificate, containsString(certificateChains.get(1))); assertThat(certificate, containsString(OPENSSL_RETURN_CODE)); } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("ClusterCaCerts") void testAutoRenewClusterCaCertsTriggeredByAnno(ExtensionContext extensionContext) { autoRenewSomeCaCertsTriggeredByAnno( extensionContext, /* ZK node need new certs */ true, /* brokers need new certs */ true, /* eo needs new cert */ true, true); } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("ClientsCaCerts") void testAutoRenewClientsCaCertsTriggeredByAnno(ExtensionContext extensionContext) { autoRenewSomeCaCertsTriggeredByAnno( extensionContext, /* no communication between clients and zk, so no need to roll */ false, /* brokers need to trust client certs with new cert */ true, /* eo needs to generate new client certs */ false, false); } @ParallelNamespaceTest @Tag(SANITY) @Tag(ACCEPTANCE) @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("AllCaCerts") void testAutoRenewAllCaCertsTriggeredByAnno(ExtensionContext extensionContext) { autoRenewSomeCaCertsTriggeredByAnno( extensionContext, true, true, true, true); } @SuppressWarnings({"checkstyle:MethodLength", "checkstyle:NPathComplexity"}) void autoRenewSomeCaCertsTriggeredByAnno( ExtensionContext extensionContext, boolean zkShouldRoll, boolean kafkaShouldRoll, boolean eoShouldRoll, boolean keAndCCShouldRoll) { final TestStorage testStorage = new TestStorage(extensionContext, namespace); createKafkaCluster(extensionContext, testStorage.getClusterName()); List<String> secrets; // to make it parallel we need decision maker... 
if (extensionContext.getTags().contains("ClusterCaCerts")) { secrets = Arrays.asList(clusterCaCertificateSecretName(testStorage.getClusterName())); } else if (extensionContext.getTags().contains("ClientsCaCerts")) { secrets = Arrays.asList(clientsCaCertificateSecretName(testStorage.getClusterName())); } else { // AllCaKeys secrets = Arrays.asList(clusterCaCertificateSecretName(testStorage.getClusterName()), clientsCaCertificateSecretName(testStorage.getClusterName())); } resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(), KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build() ); KafkaClients kafkaClients = new KafkaClientsBuilder() .withTopicName(testStorage.getTopicName()) .withMessageCount(MESSAGE_COUNT) .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName())) .withProducerName(testStorage.getProducerName()) .withConsumerName(testStorage.getConsumerName()) .withNamespaceName(testStorage.getNamespaceName()) .withUserName(testStorage.getUserName()) .build(); resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); // Get all pods, and their resource versions Map<String, String> zkPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getZookeeperSelector()); Map<String, String> kafkaPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector()); Map<String, String> eoPod = DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName())); Map<String, String> ccPod = DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), 
CruiseControlResources.deploymentName(testStorage.getClusterName())); Map<String, String> kePod = DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName())); LOGGER.info("Triggering CA cert renewal by adding the annotation"); Map<String, String> initialCaCerts = new HashMap<>(); for (String secretName : secrets) { Secret secret = kubeClient().getSecret(testStorage.getNamespaceName(), secretName); String value = secret.getData().get("ca.crt"); assertThat("ca.crt in " + secretName + " should not be null", value, is(notNullValue())); initialCaCerts.put(secretName, value); Secret annotated = new SecretBuilder(secret) .editMetadata() .addToAnnotations(Ca.ANNO_STRIMZI_IO_FORCE_RENEW, "true") .endMetadata() .build(); LOGGER.info("Patching secret {} with {}", secretName, Ca.ANNO_STRIMZI_IO_FORCE_RENEW); kubeClient().patchSecret(testStorage.getNamespaceName(), secretName, annotated); } if (!Environment.isKRaftModeEnabled()) { if (zkShouldRoll) { LOGGER.info("Wait for zk to rolling restart ..."); RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getZookeeperSelector(), 3, zkPods); } } if (kafkaShouldRoll) { LOGGER.info("Wait for kafka to rolling restart ..."); RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), 3, kafkaPods); } if (eoShouldRoll) { LOGGER.info("Wait for EO to rolling restart ..."); eoPod = DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName()), 1, eoPod); } if (keAndCCShouldRoll) { LOGGER.info("Wait for CC and KE to rolling restart ..."); kePod = DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName()), 1, kePod); ccPod = DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), 
CruiseControlResources.deploymentName(testStorage.getClusterName()), 1, ccPod); } LOGGER.info("Checking the certificates have been replaced"); for (String secretName : secrets) { Secret secret = kubeClient().getSecret(testStorage.getNamespaceName(), secretName); assertThat("Secret " + secretName + " should exist", secret, is(notNullValue())); assertThat("CA cert in " + secretName + " should have non-null 'data'", secret.getData(), is(notNullValue())); String value = secret.getData().get("ca.crt"); assertThat("CA cert in " + secretName + " should have changed", value, is(not(initialCaCerts.get(secretName)))); } kafkaClients = new KafkaClientsBuilder(kafkaClients) .withConsumerGroup(ClientUtils.generateRandomConsumerGroup()) .build(); resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); // Check a new client (signed by new client key) can consume String bobUserName = "bob-" + testStorage.getUserName(); resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(testStorage.getClusterName(), bobUserName).build()); kafkaClients = new KafkaClientsBuilder(kafkaClients) .withConsumerGroup(ClientUtils.generateRandomConsumerGroup()) .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName())) .withUserName(bobUserName) .build(); resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); if (!Environment.isKRaftModeEnabled()) { if (!zkShouldRoll) { assertThat("ZK pods should not roll, but did.", PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getZookeeperSelector()), is(zkPods)); } } if (!kafkaShouldRoll) { assertThat("Kafka pods should not roll, but did.", 
PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector()), is(kafkaPods)); } if (!eoShouldRoll) { assertThat("EO pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName())), is(eoPod)); } if (!keAndCCShouldRoll) { assertThat("CC pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), CruiseControlResources.deploymentName(testStorage.getClusterName())), is(ccPod)); assertThat("KE pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName())), is(kePod)); } } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("ClusterCaKeys") void testAutoReplaceClusterCaKeysTriggeredByAnno(ExtensionContext extensionContext) { autoReplaceSomeKeysTriggeredByAnno( extensionContext, 3, // additional third rolling due to the removal of the older cluster CA certificate true, true, true, true); } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("ClientsCaKeys") void testAutoReplaceClientsCaKeysTriggeredByAnno(ExtensionContext extensionContext) { autoReplaceSomeKeysTriggeredByAnno( extensionContext, 2, false, true, false, false); } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) @Tag(ROLLING_UPDATE) @Tag("AllCaKeys") void testAutoReplaceAllCaKeysTriggeredByAnno(ExtensionContext extensionContext) { autoReplaceSomeKeysTriggeredByAnno( extensionContext, 3, // additional third rolling due to the removal of the older cluster CA certificate true, true, true, true); } @SuppressWarnings({"checkstyle:MethodLength", "checkstyle:NPathComplexity", "checkstyle:CyclomaticComplexity"}) void autoReplaceSomeKeysTriggeredByAnno(ExtensionContext extensionContext, int expectedRolls, boolean zkShouldRoll, boolean kafkaShouldRoll, boolean eoShouldRoll, boolean keAndCCShouldRoll) { final 
TestStorage testStorage = new TestStorage(extensionContext, namespace); List<String> secrets = null; // to make it parallel we need decision maker... if (extensionContext.getTags().contains("ClusterCaKeys")) { secrets = Arrays.asList(clusterCaKeySecretName(testStorage.getClusterName())); } else if (extensionContext.getTags().contains("ClientsCaKeys")) { secrets = Arrays.asList(clientsCaKeySecretName(testStorage.getClusterName())); } else { // AllCaKeys secrets = Arrays.asList(clusterCaKeySecretName(testStorage.getClusterName()), clientsCaKeySecretName(testStorage.getClusterName())); } createKafkaCluster(extensionContext, testStorage.getClusterName()); resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(), KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build() ); KafkaClients kafkaClients = new KafkaClientsBuilder() .withTopicName(testStorage.getTopicName()) .withMessageCount(MESSAGE_COUNT) .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName())) .withProducerName(testStorage.getProducerName()) .withConsumerName(testStorage.getConsumerName()) .withNamespaceName(testStorage.getNamespaceName()) .withUserName(testStorage.getUserName()) .build(); resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); // Get all pods, and their resource versions Map<String, String> zkPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getZookeeperSelector()); Map<String, String> kafkaPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector()); Map<String, String> eoPod = 
DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName())); Map<String, String> ccPod = DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), CruiseControlResources.deploymentName(testStorage.getClusterName())); Map<String, String> kePod = DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName())); LOGGER.info("Triggering CA cert renewal by adding the annotation"); Map<String, String> initialCaKeys = new HashMap<>(); for (String secretName : secrets) { Secret secret = kubeClient().getSecret(testStorage.getNamespaceName(), secretName); String value = secret.getData().get("ca.key"); assertThat("ca.key in " + secretName + " should not be null", value, is(Matchers.notNullValue())); initialCaKeys.put(secretName, value); Secret annotated = new SecretBuilder(secret) .editMetadata() .addToAnnotations(Ca.ANNO_STRIMZI_IO_FORCE_REPLACE, "true") .endMetadata() .build(); LOGGER.info("Patching secret {} with {}", secretName, Ca.ANNO_STRIMZI_IO_FORCE_REPLACE); kubeClient().patchSecret(testStorage.getNamespaceName(), secretName, annotated); } for (int i = 1; i <= expectedRolls; i++) { if (!Environment.isKRaftModeEnabled()) { if (zkShouldRoll) { LOGGER.info("Wait for zk to rolling restart ({})...", i); zkPods = i < expectedRolls ? RollingUpdateUtils.waitTillComponentHasRolled(testStorage.getNamespaceName(), testStorage.getZookeeperSelector(), zkPods) : RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getZookeeperSelector(), 3, zkPods); } } if (kafkaShouldRoll) { LOGGER.info("Wait for kafka to rolling restart ({})...", i); kafkaPods = i < expectedRolls ? 
RollingUpdateUtils.waitTillComponentHasRolled(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), kafkaPods) : RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), 3, kafkaPods); } if (eoShouldRoll) { LOGGER.info("Wait for EO to rolling restart ({})...", i); eoPod = i < expectedRolls ? DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName()), eoPod) : DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName()), 1, eoPod); } if (keAndCCShouldRoll) { LOGGER.info("Wait for KafkaExporter and CruiseControl to rolling restart ({})...", i); kePod = i < expectedRolls ? DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName()), kePod) : DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName()), 1, kePod); ccPod = i < expectedRolls ? 
DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), CruiseControlResources.deploymentName(testStorage.getClusterName()), ccPod) :
            DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), CruiseControlResources.deploymentName(testStorage.getClusterName()), 1, ccPod);
    }
}

LOGGER.info("Checking the certificates have been replaced");
for (String secretName : secrets) {
    Secret secret = kubeClient().getSecret(testStorage.getNamespaceName(), secretName);
    assertThat("Secret " + secretName + " should exist", secret, is(notNullValue()));
    assertThat("CA key in " + secretName + " should have non-null 'data'", secret.getData(), is(notNullValue()));
    String value = secret.getData().get("ca.key");
    assertThat("CA key in " + secretName + " should exist", value, is(notNullValue()));
    assertThat("CA key in " + secretName + " should have changed", value, is(not(initialCaKeys.get(secretName))));
}

kafkaClients = new KafkaClientsBuilder(kafkaClients)
    .withConsumerGroup(ClientUtils.generateRandomConsumerGroup())
    .build();

resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName()));
ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT);

// Finally check a new client (signed by new client key) can consume
final String bobUserName = "bobik-" + testStorage.getUserName();
resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(testStorage.getClusterName(), bobUserName).build());

kafkaClients = new KafkaClientsBuilder(kafkaClients)
    .withConsumerGroup(ClientUtils.generateRandomConsumerGroup())
    .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName()))
    .withUserName(bobUserName)
    .build();

resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName()));
ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT);

if
(!Environment.isKRaftModeEnabled()) { if (!zkShouldRoll) { assertThat("ZK pods should not roll, but did.", PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getZookeeperSelector()), is(zkPods)); } } if (!kafkaShouldRoll) { assertThat("Kafka pods should not roll, but did.", PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector()), is(kafkaPods)); } if (!eoShouldRoll) { assertThat("EO pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName())), is(eoPod)); } if (!keAndCCShouldRoll) { assertThat("CC pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), CruiseControlResources.deploymentName(testStorage.getClusterName())), is(ccPod)); assertThat("KE pod should not roll, but did.", DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaExporterResources.deploymentName(testStorage.getClusterName())), is(kePod)); } } private void createKafkaCluster(ExtensionContext extensionContext, String clusterName) { LOGGER.info("Creating a cluster"); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(clusterName, 3) .editSpec() .editKafka() .withListeners(new GenericKafkaListenerBuilder() .withName(Constants.PLAIN_LISTENER_DEFAULT_NAME) .withPort(9092) .withType(KafkaListenerType.INTERNAL) .withTls(false) .build(), new GenericKafkaListenerBuilder() .withName(Constants.TLS_LISTENER_DEFAULT_NAME) .withPort(9093) .withType(KafkaListenerType.INTERNAL) .withTls(true) .withNewKafkaListenerAuthenticationTlsAuth() .endKafkaListenerAuthenticationTlsAuth() .build()) .withConfig(singletonMap("default.replication.factor", 3)) .withNewPersistentClaimStorage() .withSize("2Gi") .withDeleteClaim(true) .endPersistentClaimStorage() .endKafka() .editZookeeper() .withNewPersistentClaimStorage() .withSize("2Gi") .withDeleteClaim(true) .endPersistentClaimStorage() .endZookeeper() 
            .withNewCruiseControl()
            .endCruiseControl()
            .withNewKafkaExporter()
            .endKafkaExporter()
        .endSpec()
        .build());
}

@ParallelNamespaceTest
@Tag(INTERNAL_CLIENTS_USED)
void testAutoRenewCaCertsTriggerByExpiredCertificate(ExtensionContext extensionContext) {
    final TestStorage testStorage = new TestStorage(extensionContext, namespace);

    // 1. Create the Secrets already, and a certificate that's already expired
    InputStream secretInputStream = getClass().getClassLoader().getResourceAsStream("security-st-certs/expired-cluster-ca.crt");
    String clusterCaCert = TestUtils.readResource(secretInputStream);
    SecretUtils.createSecret(testStorage.getNamespaceName(), clusterCaCertificateSecretName(testStorage.getClusterName()), "ca.crt", clusterCaCert);

    // 2. Now create a cluster
    createKafkaCluster(extensionContext, testStorage.getClusterName());

    resourceManager.createResource(extensionContext,
        KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(),
        KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build()
    );

    KafkaClients kafkaClients = new KafkaClientsBuilder()
        .withTopicName(testStorage.getTopicName())
        .withMessageCount(MESSAGE_COUNT)
        .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName()))
        .withProducerName(testStorage.getProducerName())
        .withConsumerName(testStorage.getConsumerName())
        .withNamespaceName(testStorage.getNamespaceName())
        .withUserName(testStorage.getUserName())
        .build();

    resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName()));
    ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT);

    // Wait until the certificates have been replaced
    SecretUtils.waitForCertToChange(testStorage.getNamespaceName(), clusterCaCert, clusterCaCertificateSecretName(testStorage.getClusterName()));

    // Wait until the pods are all up and ready
    KafkaUtils.waitForClusterStability(testStorage.getNamespaceName(), testStorage.getClusterName());

    resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName()));
    ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT);
}

@ParallelNamespaceTest
@Tag(INTERNAL_CLIENTS_USED)
void testCertRenewalInMaintenanceTimeWindow(ExtensionContext extensionContext) {
    final TestStorage testStorage = new TestStorage(extensionContext, namespace);
    final String clusterSecretName = KafkaResources.clusterCaCertificateSecretName(testStorage.getClusterName());
    final String clientsSecretName = KafkaResources.clientsCaCertificateSecretName(testStorage.getClusterName());

    LocalDateTime maintenanceWindowStart = LocalDateTime.now().withSecond(0);
    long maintenanceWindowDuration = 14;
    maintenanceWindowStart = maintenanceWindowStart.plusMinutes(15);
    final long windowStartMin = maintenanceWindowStart.getMinute();
    final long windowStopMin = windowStartMin + maintenanceWindowDuration > 59
        ? windowStartMin + maintenanceWindowDuration - 60 : windowStartMin + maintenanceWindowDuration;

    String maintenanceWindowCron = "* " + windowStartMin + "-" + windowStopMin + " * * * ? *";
    LOGGER.info("Initial maintenanceTimeWindow is: {}", maintenanceWindowCron);

    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(testStorage.getClusterName(), 3, 1)
        .editSpec()
            .addToMaintenanceTimeWindows(maintenanceWindowCron)
            .editKafka()
                .withListeners(new GenericKafkaListenerBuilder()
                    .withName(Constants.TLS_LISTENER_DEFAULT_NAME)
                    .withPort(9093)
                    .withType(KafkaListenerType.INTERNAL)
                    .withTls(true)
                    .withAuth(new KafkaListenerAuthenticationTls())
                    .build())
            .endKafka()
            .withNewClusterCa()
                .withRenewalDays(15)
                .withValidityDays(20)
            .endClusterCa()
            .withNewClientsCa()
                .withRenewalDays(15)
                .withValidityDays(20)
            .endClientsCa()
        .endSpec()
        .build());

    resourceManager.createResource(extensionContext,
        KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(),
        KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build()
    );

    Secret kafkaUserSecret = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), testStorage.getUserName());

    KafkaClients kafkaClients = new KafkaClientsBuilder()
        .withTopicName(testStorage.getTopicName())
        .withMessageCount(MESSAGE_COUNT)
        .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName()))
        .withProducerName(testStorage.getProducerName())
        .withConsumerName(testStorage.getConsumerName())
        .withNamespaceName(testStorage.getNamespaceName())
        .withUserName(testStorage.getUserName())
        .build();

    Map<String, String> kafkaPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector());

    CertificateAuthority newCAValidity = new CertificateAuthority();
    newCAValidity.setRenewalDays(150);
    newCAValidity.setValidityDays(200);

    KafkaResource.replaceKafkaResourceInSpecificNamespace(testStorage.getClusterName(), k -> {
        k.getSpec().setClusterCa(newCAValidity);
        k.getSpec().setClientsCa(newCAValidity);
    }, testStorage.getNamespaceName());
KafkaUtils.waitForKafkaStatusUpdate(testStorage.getNamespaceName(), testStorage.getClusterName());

Secret secretCaCluster = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), clusterSecretName);
Secret secretCaClients = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), clientsSecretName);

assertThat("Cluster CA certificate has been renewed outside of maintenanceTimeWindows",
    secretCaCluster.getMetadata().getAnnotations().get(Ca.ANNO_STRIMZI_IO_CA_CERT_GENERATION), is("0"));
assertThat("Clients CA certificate has been renewed outside of maintenanceTimeWindows",
    secretCaClients.getMetadata().getAnnotations().get(Ca.ANNO_STRIMZI_IO_CA_CERT_GENERATION), is("0"));
assertThat("Rolling update was performed outside of maintenanceTimeWindows",
    kafkaPods, is(PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector())));

maintenanceWindowCron = "* " + LocalDateTime.now().getMinute() + "-" + windowStopMin + " * * * ? *";
LOGGER.info("Set maintenanceTimeWindow to start now to save time: {}", maintenanceWindowCron);

List<String> maintenanceTimeWindows = KafkaResource.kafkaClient().inNamespace(testStorage.getNamespaceName()).withName(testStorage.getClusterName()).get().getSpec().getMaintenanceTimeWindows();
maintenanceTimeWindows.add(maintenanceWindowCron);
KafkaResource.replaceKafkaResourceInSpecificNamespace(testStorage.getClusterName(), kafka -> kafka.getSpec().setMaintenanceTimeWindows(maintenanceTimeWindows), testStorage.getNamespaceName());

resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(testStorage.getClusterName(), 3, 1)
    .editSpec()
        .addToMaintenanceTimeWindows(maintenanceWindowCron)
    .endSpec()
    .build());

LOGGER.info("Wait until rolling update is triggered during maintenanceTimeWindows");
RollingUpdateUtils.waitTillComponentHasRolled(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), 3, kafkaPods);

Secret kafkaUserSecretRolled = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), testStorage.getUserName());
secretCaCluster = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), clusterSecretName);
secretCaClients = kubeClient(testStorage.getNamespaceName()).getSecret(testStorage.getNamespaceName(), clientsSecretName);

assertThat("Cluster CA certificate has not been renewed within maintenanceTimeWindows",
    secretCaCluster.getMetadata().getAnnotations().get(Ca.ANNO_STRIMZI_IO_CA_CERT_GENERATION), is("1"));
assertThat("Clients CA certificate has not been renewed within maintenanceTimeWindows",
    secretCaClients.getMetadata().getAnnotations().get(Ca.ANNO_STRIMZI_IO_CA_CERT_GENERATION), is("1"));
assertThat("KafkaUser certificate has not been renewed within maintenanceTimeWindows",
    kafkaUserSecret, not(sameInstance(kafkaUserSecretRolled)));

resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()),
kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); } @ParallelNamespaceTest @Tag(INTERNAL_CLIENTS_USED) void testCertRegeneratedAfterInternalCAisDeleted(ExtensionContext extensionContext) { final TestStorage testStorage = new TestStorage(extensionContext, namespace); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(testStorage.getClusterName(), 3, 1).build()); Map<String, String> kafkaPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector()); resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(), KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build() ); KafkaClients kafkaClients = new KafkaClientsBuilder() .withTopicName(testStorage.getTopicName()) .withMessageCount(MESSAGE_COUNT) .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName())) .withProducerName(testStorage.getProducerName()) .withConsumerName(testStorage.getConsumerName()) .withNamespaceName(testStorage.getNamespaceName()) .withUserName(testStorage.getUserName()) .build(); List<Secret> secrets = kubeClient().listSecrets(testStorage.getNamespaceName()).stream() .filter(secret -> secret.getMetadata().getName().startsWith(testStorage.getClusterName()) && secret.getMetadata().getName().endsWith("ca-cert")) .collect(Collectors.toList()); for (Secret s : secrets) { LOGGER.info("Verifying that secret {} with name {} is present", s, s.getMetadata().getName()); assertThat(s.getData(), is(notNullValue())); } for (Secret s : secrets) { LOGGER.info("Deleting secret {}", s.getMetadata().getName()); kubeClient().deleteSecret(testStorage.getNamespaceName(), s.getMetadata().getName()); } 
PodUtils.verifyThatRunningPodsAreStable(testStorage.getNamespaceName(), KafkaResources.kafkaStatefulSetName(testStorage.getClusterName())); RollingUpdateUtils.waitTillComponentHasRolled(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), 3, kafkaPods); for (Secret s : secrets) { SecretUtils.waitForSecretReady(testStorage.getNamespaceName(), s.getMetadata().getName(), () -> { }); } List<Secret> regeneratedSecrets = kubeClient().listSecrets(testStorage.getNamespaceName()).stream() .filter(secret -> secret.getMetadata().getName().endsWith("ca-cert")) .collect(Collectors.toList()); for (int i = 0; i < secrets.size(); i++) { assertThat("Certificates has different cert UIDs", !secrets.get(i).getData().get("ca.crt").equals(regeneratedSecrets.get(i).getData().get("ca.crt"))); } resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); } @ParallelNamespaceTest @Tag(CONNECT) @Tag(CONNECT_COMPONENTS) void testTlsHostnameVerificationWithKafkaConnect(ExtensionContext extensionContext) { final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3, 1).build()); LOGGER.info("Getting IP of the bootstrap service"); String ipOfBootstrapService = kubeClient(namespaceName).getService(namespaceName, KafkaResources.bootstrapServiceName(clusterName)).getSpec().getClusterIP(); LOGGER.info("KafkaConnect without config {} will not connect to {}:9093", "ssl.endpoint.identification.algorithm", ipOfBootstrapService); resourceManager.createResource(extensionContext, false, 
KafkaConnectTemplates.kafkaConnect(clusterName, clusterName, 1) .editSpec() .withNewTls() .addNewTrustedCertificate() .withSecretName(KafkaResources.clusterCaCertificateSecretName(clusterName)) .withCertificate("ca.crt") .endTrustedCertificate() .endTls() .withBootstrapServers(ipOfBootstrapService + ":9093") .endSpec() .build()); PodUtils.waitUntilPodIsPresent(namespaceName, clusterName + "-connect"); String kafkaConnectPodName = kubeClient(namespaceName).listPods(namespaceName, clusterName, Labels.STRIMZI_KIND_LABEL, KafkaConnect.RESOURCE_KIND).get(0).getMetadata().getName(); PodUtils.waitUntilPodIsInCrashLoopBackOff(namespaceName, kafkaConnectPodName); assertThat("CrashLoopBackOff", is(kubeClient(namespaceName).getPod(namespaceName, kafkaConnectPodName).getStatus().getContainerStatuses() .get(0).getState().getWaiting().getReason())); KafkaConnectResource.replaceKafkaConnectResourceInSpecificNamespace(clusterName, kc -> { kc.getSpec().getConfig().put("ssl.endpoint.identification.algorithm", ""); }, namespaceName); LOGGER.info("KafkaConnect with config {} will connect to {}:9093", "ssl.endpoint.identification.algorithm", ipOfBootstrapService); KafkaConnectUtils.waitForConnectReady(namespaceName, clusterName); } @ParallelNamespaceTest @Tag(MIRROR_MAKER) void testTlsHostnameVerificationWithMirrorMaker(ExtensionContext extensionContext) { final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); final String sourceKafkaCluster = clusterName + "-source"; final String targetKafkaCluster = clusterName + "-target"; resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(sourceKafkaCluster, 1, 1).build()); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(targetKafkaCluster, 1, 1).build()); LOGGER.info("Getting IP of the source bootstrap service for consumer"); String ipOfSourceBootstrapService = 
kubeClient(namespaceName).getService(namespaceName, KafkaResources.bootstrapServiceName(sourceKafkaCluster)).getSpec().getClusterIP(); LOGGER.info("Getting IP of the target bootstrap service for producer"); String ipOfTargetBootstrapService = kubeClient(namespaceName).getService(namespaceName, KafkaResources.bootstrapServiceName(targetKafkaCluster)).getSpec().getClusterIP(); LOGGER.info("KafkaMirrorMaker without config {} will not connect to consumer with address {}:9093", "ssl.endpoint.identification.algorithm", ipOfSourceBootstrapService); LOGGER.info("KafkaMirrorMaker without config {} will not connect to producer with address {}:9093", "ssl.endpoint.identification.algorithm", ipOfTargetBootstrapService); resourceManager.createResource(extensionContext, false, KafkaMirrorMakerTemplates.kafkaMirrorMaker(clusterName, sourceKafkaCluster, targetKafkaCluster, ClientUtils.generateRandomConsumerGroup(), 1, true) .editSpec() .editConsumer() .withNewTls() .addNewTrustedCertificate() .withSecretName(KafkaResources.clusterCaCertificateSecretName(sourceKafkaCluster)) .withCertificate("ca.crt") .endTrustedCertificate() .endTls() .withBootstrapServers(ipOfSourceBootstrapService + ":9093") .endConsumer() .editProducer() .withNewTls() .addNewTrustedCertificate() .withSecretName(KafkaResources.clusterCaCertificateSecretName(targetKafkaCluster)) .withCertificate("ca.crt") .endTrustedCertificate() .endTls() .withBootstrapServers(ipOfTargetBootstrapService + ":9093") .endProducer() .endSpec() .build()); PodUtils.waitUntilPodIsPresent(namespaceName, clusterName + "-mirror-maker"); String kafkaMirrorMakerPodName = kubeClient(namespaceName).listPods(namespaceName, clusterName, Labels.STRIMZI_KIND_LABEL, KafkaMirrorMaker.RESOURCE_KIND).get(0).getMetadata().getName(); PodUtils.waitUntilPodIsInCrashLoopBackOff(namespaceName, kafkaMirrorMakerPodName); assertThat("CrashLoopBackOff", is(kubeClient(namespaceName).getPod(namespaceName, 
kafkaMirrorMakerPodName).getStatus().getContainerStatuses().get(0).getState().getWaiting().getReason()));

LOGGER.info("KafkaMirrorMaker with config {} will connect to consumer with address {}:9093", "ssl.endpoint.identification.algorithm", ipOfSourceBootstrapService);
LOGGER.info("KafkaMirrorMaker with config {} will connect to producer with address {}:9093", "ssl.endpoint.identification.algorithm", ipOfTargetBootstrapService);
LOGGER.info("Adding configuration {} to the mirror maker...", "ssl.endpoint.identification.algorithm");

KafkaMirrorMakerResource.replaceMirrorMakerResourceInSpecificNamespace(clusterName, mm -> {
    mm.getSpec().getConsumer().getConfig().put("ssl.endpoint.identification.algorithm", "");   // disable hostname verification
    mm.getSpec().getProducer().getConfig().put("ssl.endpoint.identification.algorithm", "");   // disable hostname verification
}, namespaceName);

KafkaMirrorMakerUtils.waitForKafkaMirrorMakerReady(namespaceName, clusterName);
}

@ParallelNamespaceTest
@Tag(NODEPORT_SUPPORTED)
@Tag(EXTERNAL_CLIENTS_USED)
void testAclRuleReadAndWrite(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String kafkaUserWrite = "kafka-user-write";
    final String kafkaUserRead = "kafka-user-read";
    final int numberOfMessages = 500;
    final String consumerGroupName = "consumer-group-name-1";

    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3, 1)
        .editSpec()
            .editKafka()
                .withNewKafkaAuthorizationSimple()
                .endKafkaAuthorizationSimple()
                .withListeners(new GenericKafkaListenerBuilder()
                    .withName(Constants.EXTERNAL_LISTENER_DEFAULT_NAME)
                    .withPort(9094)
                    .withType(KafkaListenerType.NODEPORT)
                    .withTls(true)
                    .withAuth(new KafkaListenerAuthenticationTls())
                    .build())
            .endKafka()
        .endSpec()
        .build());

    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, topicName).build());

    resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(clusterName, kafkaUserWrite)
        .editSpec()
            .withNewKafkaUserAuthorizationSimple()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.WRITE)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.DESCRIBE)  // describe is needed so the user can fetch topic metadata
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.CREATE)  // create is needed so the user can create the topic
                .endAcl()
            .endKafkaUserAuthorizationSimple()
        .endSpec()
        .build());

    LOGGER.info("Checking KafkaUser {} that is able to send messages to topic '{}'", kafkaUserWrite, topicName);

    ExternalKafkaClient externalKafkaClient = new ExternalKafkaClient.Builder()
        .withTopicName(topicName)
        .withNamespaceName(namespaceName)
        .withClusterName(clusterName)
        .withKafkaUsername(kafkaUserWrite)
        .withMessageCount(numberOfMessages)
        .withSecurityProtocol(SecurityProtocol.SSL)
        .withListenerName(Constants.EXTERNAL_LISTENER_DEFAULT_NAME)
        .build();

    assertThat(externalKafkaClient.sendMessagesTls(), is(numberOfMessages));
    assertThrows(GroupAuthorizationException.class, externalKafkaClient::receiveMessagesTls);

    resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(clusterName, kafkaUserRead)
        .editSpec()
            .withNewKafkaUserAuthorizationSimple()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.READ)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleGroupResource()
                        .withName(consumerGroupName)
                    .endAclRuleGroupResource()
                    .withOperation(AclOperation.READ)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.DESCRIBE)  // describe is needed so the user can fetch topic metadata
                .endAcl()
            .endKafkaUserAuthorizationSimple()
        .endSpec()
        .build());

    ExternalKafkaClient newExternalKafkaClient = externalKafkaClient.toBuilder()
        .withKafkaUsername(kafkaUserRead)
        .withConsumerGroupName(consumerGroupName)
        .build();

    assertThat(newExternalKafkaClient.receiveMessagesTls(), is(numberOfMessages));

    LOGGER.info("Checking KafkaUser {} that is not able to send messages to topic '{}'", kafkaUserRead, topicName);
    assertThrows(Exception.class, newExternalKafkaClient::sendMessagesTls);
}

@ParallelNamespaceTest
@Tag(NODEPORT_SUPPORTED)
@Tag(EXTERNAL_CLIENTS_USED)
void testAclWithSuperUser(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String userName = mapWithTestUsers.get(extensionContext.getDisplayName());

    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3, 1)
        .editSpec()
            .editKafka()
                .withNewKafkaAuthorizationSimple()
                    .withSuperUsers("CN=" + userName)
                .endKafkaAuthorizationSimple()
                .withListeners(new GenericKafkaListenerBuilder()
                    .withName(Constants.EXTERNAL_LISTENER_DEFAULT_NAME)
                    .withPort(9094)
                    .withType(KafkaListenerType.NODEPORT)
                    .withTls(true)
                    .withAuth(new KafkaListenerAuthenticationTls())
                    .build())
            .endKafka()
        .endSpec()
        .build());

    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, topicName).build());

    resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(clusterName, userName)
        .editSpec()
            .withNewKafkaUserAuthorizationSimple()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.WRITE)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.DESCRIBE)  // describe is needed so the user can fetch topic metadata
                .endAcl()
            .endKafkaUserAuthorizationSimple()
        .endSpec()
        .build());

    LOGGER.info("Checking that Kafka super user {} is able to send messages to topic {}", userName, topicName);

    ExternalKafkaClient externalKafkaClient = new ExternalKafkaClient.Builder()
        .withTopicName(topicName)
        .withNamespaceName(namespaceName)
        .withClusterName(clusterName)
        .withKafkaUsername(userName)
        .withMessageCount(MESSAGE_COUNT)
        .withSecurityProtocol(SecurityProtocol.SSL)
        .withListenerName(Constants.EXTERNAL_LISTENER_DEFAULT_NAME)
        .build();

    assertThat(externalKafkaClient.sendMessagesTls(), is(MESSAGE_COUNT));

    LOGGER.info("Checking that Kafka super user {} is able to read messages from topic {} even though the " +
        "ACLs only allow the write operation", userName, topicName);
    assertThat(externalKafkaClient.receiveMessagesTls(), is(MESSAGE_COUNT));

    String nonSuperuserName = userName + "-non-super-user";

    resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(clusterName, nonSuperuserName)
        .editSpec()
            .withNewKafkaUserAuthorizationSimple()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.WRITE)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.DESCRIBE)  // describe is needed so the user can fetch topic metadata
                .endAcl()
            .endKafkaUserAuthorizationSimple()
        .endSpec()
        .build());

    LOGGER.info("Checking that Kafka user {} (not a super user) is able to send messages to topic {}", nonSuperuserName, topicName);

    externalKafkaClient = externalKafkaClient.toBuilder()
        .withKafkaUsername(nonSuperuserName)
        .build();

    assertThat(externalKafkaClient.sendMessagesTls(), is(MESSAGE_COUNT));

    LOGGER.info("Checking that Kafka user {} (not a super user) is not able to read messages from topic {} because " +
        "the ACLs only allow the write operation", nonSuperuserName, topicName);

    ExternalKafkaClient newExternalKafkaClient = externalKafkaClient.toBuilder()
        .withConsumerGroupName(ClientUtils.generateRandomConsumerGroup())
        .build();

    assertThrows(GroupAuthorizationException.class, newExternalKafkaClient::receiveMessagesTls);
}

@ParallelNamespaceTest
@Tag(INTERNAL_CLIENTS_USED)
void testCaRenewalBreakInMiddle(ExtensionContext extensionContext) {
    final TestStorage testStorage = new TestStorage(extensionContext, namespace);

    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(testStorage.getClusterName(), 3, 3)
        .editSpec()
            .withNewClusterCa()
                .withRenewalDays(1)
                .withValidityDays(3)
            .endClusterCa()
        .endSpec()
        .build());

    resourceManager.createResource(extensionContext,
        KafkaUserTemplates.tlsUser(testStorage.getClusterName(), testStorage.getUserName()).build(),
        KafkaTopicTemplates.topic(testStorage.getClusterName(), testStorage.getTopicName()).build()
    );

    KafkaClients kafkaClients = new KafkaClientsBuilder()
        .withTopicName(testStorage.getTopicName())
        .withMessageCount(MESSAGE_COUNT)
        .withBootstrapAddress(KafkaResources.tlsBootstrapAddress(testStorage.getClusterName()))
        .withProducerName(testStorage.getProducerName())
        .withConsumerName(testStorage.getConsumerName())
        .withNamespaceName(testStorage.getNamespaceName())
        .withUserName(testStorage.getUserName())
        .build();

    resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName()));
    ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT);

    Map<String, String> zkPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getZookeeperSelector());
    Map<String, String> kafkaPods = PodUtils.podSnapshot(testStorage.getNamespaceName(), testStorage.getKafkaSelector());
    Map<String, String> eoPods =
DeploymentUtils.depSnapshot(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName())); InputStream secretInputStream = getClass().getClassLoader().getResourceAsStream("security-st-certs/expired-cluster-ca.crt"); String clusterCaCert = TestUtils.readResource(secretInputStream); SecretUtils.createSecret(testStorage.getNamespaceName(), clusterCaCertificateSecretName(testStorage.getClusterName()), "ca.crt", clusterCaCert); KafkaResource.replaceKafkaResourceInSpecificNamespace(testStorage.getClusterName(), k -> { k.getSpec() .getKafka() .setResources(new ResourceRequirementsBuilder() .addToRequests("cpu", new Quantity("100000m")) .build()); k.getSpec().setClusterCa(new CertificateAuthorityBuilder() .withRenewalDays(4) .withValidityDays(7) .build()); }, testStorage.getNamespaceName()); TestUtils.waitFor("Waiting for some kafka pod to be in the pending phase because of selected high cpu resource", Constants.GLOBAL_POLL_INTERVAL, Constants.GLOBAL_TIMEOUT, () -> { List<Pod> pendingPods = kubeClient().listPodsByPrefixInName(testStorage.getNamespaceName(), KafkaResources.kafkaStatefulSetName(testStorage.getClusterName())) .stream().filter(pod -> pod.getStatus().getPhase().equals("Pending")).collect(Collectors.toList()); if (pendingPods.isEmpty()) { LOGGER.info("No pods of {} are in desired state", KafkaResources.kafkaStatefulSetName(testStorage.getClusterName())); return false; } else { LOGGER.info("Pod in 'Pending' state: {}", pendingPods.get(0).getMetadata().getName()); return true; } } ); kafkaClients = new KafkaClientsBuilder(kafkaClients) .withConsumerGroup(ClientUtils.generateRandomConsumerGroup()) .build(); resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); KafkaResource.replaceKafkaResourceInSpecificNamespace(testStorage.getClusterName(), k -> { 
k.getSpec() .getKafka() .setResources(new ResourceRequirementsBuilder() .addToRequests("cpu", new Quantity("200m")) .build()); }, testStorage.getNamespaceName()); // Wait until the certificates have been replaced SecretUtils.waitForCertToChange(testStorage.getNamespaceName(), clusterCaCert, KafkaResources.clusterCaCertificateSecretName(testStorage.getClusterName())); if (!Environment.isKRaftModeEnabled()) { RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getZookeeperSelector(), 3, zkPods); } RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(testStorage.getNamespaceName(), testStorage.getKafkaSelector(), 3, kafkaPods); DeploymentUtils.waitTillDepHasRolled(testStorage.getNamespaceName(), KafkaResources.entityOperatorDeploymentName(testStorage.getClusterName()), 1, eoPods); kafkaClients = new KafkaClientsBuilder(kafkaClients) .withConsumerGroup(ClientUtils.generateRandomConsumerGroup()) .build(); resourceManager.createResource(extensionContext, kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientSuccess(testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); // Try to send and receive messages with new certificates String topicName = KafkaTopicUtils.generateRandomNameOfTopic(); resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(testStorage.getClusterName(), topicName).build()); kafkaClients = new KafkaClientsBuilder(kafkaClients) .withConsumerGroup(ClientUtils.generateRandomConsumerGroup()) .withTopicName(topicName) .build(); resourceManager.createResource(extensionContext, kafkaClients.producerTlsStrimzi(testStorage.getClusterName()), kafkaClients.consumerTlsStrimzi(testStorage.getClusterName())); ClientUtils.waitForClientsSuccess(testStorage.getProducerName(), testStorage.getConsumerName(), testStorage.getNamespaceName(), MESSAGE_COUNT); } @ParallelNamespaceTest @Tag(CONNECT) @Tag(CONNECT_COMPONENTS) void 
testKafkaAndKafkaConnectTlsVersion(ExtensionContext extensionContext) { final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); final Map<String, Object> configWithNewestVersionOfTls = new HashMap<>(); final String tlsVersion12 = "TLSv1.2"; final String tlsVersion1 = "TLSv1"; configWithNewestVersionOfTls.put(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, tlsVersion12); configWithNewestVersionOfTls.put(SslConfigs.SSL_PROTOCOL_CONFIG, SslConfigs.DEFAULT_SSL_PROTOCOL); LOGGER.info("Deploying Kafka cluster with the support {} TLS", tlsVersion12); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3) .editSpec() .editKafka() .withConfig(configWithNewestVersionOfTls) .endKafka() .endSpec() .build()); Map<String, Object> configsFromKafkaCustomResource = KafkaResource.kafkaClient().inNamespace(namespaceName).withName(clusterName).get().getSpec().getKafka().getConfig(); LOGGER.info("Verifying that Kafka cluster has the accepted configuration:\n" + "{} -> {}\n" + "{} -> {}", SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, configsFromKafkaCustomResource.get(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG), SslConfigs.SSL_PROTOCOL_CONFIG, configsFromKafkaCustomResource.get(SslConfigs.SSL_PROTOCOL_CONFIG)); assertThat(configsFromKafkaCustomResource.get(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG), is(tlsVersion12)); assertThat(configsFromKafkaCustomResource.get(SslConfigs.SSL_PROTOCOL_CONFIG), is(SslConfigs.DEFAULT_SSL_PROTOCOL)); Map<String, Object> configWithLowestVersionOfTls = new HashMap<>(); configWithLowestVersionOfTls.put(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, tlsVersion1); configWithLowestVersionOfTls.put(SslConfigs.SSL_PROTOCOL_CONFIG, tlsVersion1); resourceManager.createResource(extensionContext, false, KafkaConnectTemplates.kafkaConnect(clusterName, clusterName, 1) .editSpec() .withConfig(configWithLowestVersionOfTls) .endSpec() 
.build()); LOGGER.info("Verifying that Kafka Connect status is NotReady because of different TLS version"); KafkaConnectUtils.waitForConnectNotReady(namespaceName, clusterName); LOGGER.info("Replacing Kafka Connect config to the newest(TLSv1.2) one same as the Kafka broker has."); KafkaConnectResource.replaceKafkaConnectResourceInSpecificNamespace(clusterName, kafkaConnect -> kafkaConnect.getSpec().setConfig(configWithNewestVersionOfTls), namespaceName); LOGGER.info("Verifying that Kafka Connect has the accepted configuration:\n {} -> {}\n {} -> {}", SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, tlsVersion12, SslConfigs.SSL_PROTOCOL_CONFIG, SslConfigs.DEFAULT_SSL_PROTOCOL); KafkaConnectUtils.waitForKafkaConnectConfigChange(SslConfigs.SSL_ENABLED_PROTOCOLS_CONFIG, tlsVersion12, namespaceName, clusterName); KafkaConnectUtils.waitForKafkaConnectConfigChange(SslConfigs.SSL_PROTOCOL_CONFIG, SslConfigs.DEFAULT_SSL_PROTOCOL, namespaceName, clusterName); LOGGER.info("Verifying that Kafka Connect is stable"); PodUtils.verifyThatRunningPodsAreStable(namespaceName, KafkaConnectResources.deploymentName(clusterName)); LOGGER.info("Verifying that Kafka Connect status is Ready because of same TLS version"); KafkaConnectUtils.waitForConnectReady(namespaceName, clusterName); } @ParallelNamespaceTest @Tag(CONNECT) @Tag(CONNECT_COMPONENTS) void testKafkaAndKafkaConnectCipherSuites(ExtensionContext extensionContext) { final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); final Map<String, Object> configWithCipherSuitesSha384 = new HashMap<>(); final String cipherSuitesSha384 = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"; final String cipherSuitesSha256 = "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"; configWithCipherSuitesSha384.put(SslConfigs.SSL_CIPHER_SUITES_CONFIG, cipherSuitesSha384); LOGGER.info("Deploying Kafka cluster with the support {} cipher algorithms", 
cipherSuitesSha384); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3) .editSpec() .editKafka() .withConfig(configWithCipherSuitesSha384) .endKafka() .endSpec() .build()); Map<String, Object> configsFromKafkaCustomResource = KafkaResource.kafkaClient().inNamespace(namespaceName).withName(clusterName).get().getSpec().getKafka().getConfig(); LOGGER.info("Verifying that Kafka Connect has the accepted configuration:\n {} -> {}", SslConfigs.SSL_CIPHER_SUITES_CONFIG, configsFromKafkaCustomResource.get(SslConfigs.SSL_CIPHER_SUITES_CONFIG)); assertThat(configsFromKafkaCustomResource.get(SslConfigs.SSL_CIPHER_SUITES_CONFIG), is(cipherSuitesSha384)); Map<String, Object> configWithCipherSuitesSha256 = new HashMap<>(); configWithCipherSuitesSha256.put(SslConfigs.SSL_CIPHER_SUITES_CONFIG, cipherSuitesSha256); resourceManager.createResource(extensionContext, false, KafkaConnectTemplates.kafkaConnect(clusterName, clusterName, 1) .editSpec() .withConfig(configWithCipherSuitesSha256) .endSpec() .build()); LOGGER.info("Verifying that Kafka Connect status is NotReady because of different cipher suites complexity of algorithm"); KafkaConnectUtils.waitForConnectNotReady(namespaceName, clusterName); LOGGER.info("Replacing Kafka Connect config to the cipher suites same as the Kafka broker has."); KafkaConnectResource.replaceKafkaConnectResourceInSpecificNamespace(clusterName, kafkaConnect -> kafkaConnect.getSpec().setConfig(configWithCipherSuitesSha384), namespaceName); LOGGER.info("Verifying that Kafka Connect has the accepted configuration:\n {} -> {}", SslConfigs.SSL_CIPHER_SUITES_CONFIG, configsFromKafkaCustomResource.get(SslConfigs.SSL_CIPHER_SUITES_CONFIG)); KafkaConnectUtils.waitForKafkaConnectConfigChange(SslConfigs.SSL_CIPHER_SUITES_CONFIG, cipherSuitesSha384, namespaceName, clusterName); LOGGER.info("Verifying that Kafka Connect is stable"); PodUtils.verifyThatRunningPodsAreStable(namespaceName, 
KafkaConnectResources.deploymentName(clusterName)); LOGGER.info("Verifying that Kafka Connect status is Ready because of the same cipher suites complexity of algorithm"); KafkaConnectUtils.waitForConnectReady(namespaceName, clusterName); } @ParallelNamespaceTest void testOwnerReferenceOfCASecrets(ExtensionContext extensionContext) { /* Different name for Kafka cluster to make the test quicker -> KafkaRoller is waiting for pods of "my-cluster" to become ready for 5 minutes -> this will prevent the waiting. */ final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext); final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName()); final String secondClusterName = "my-second-cluster-" + clusterName; resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3) .editOrNewSpec() .withNewClusterCa() .withGenerateSecretOwnerReference(false) .endClusterCa() .withNewClientsCa() .withGenerateSecretOwnerReference(false) .endClientsCa() .endSpec() .build()); LOGGER.info("Listing all cluster CAs for {}", clusterName); List<Secret> caSecrets = kubeClient(namespaceName).listSecrets(namespaceName).stream() .filter(secret -> secret.getMetadata().getName().contains(KafkaResources.clusterCaKeySecretName(clusterName)) || secret.getMetadata().getName().contains(KafkaResources.clientsCaKeySecretName(clusterName))).collect(Collectors.toList()); LOGGER.info("Deleting Kafka:{}", clusterName); KafkaResource.kafkaClient().inNamespace(namespaceName).withName(clusterName).withPropagationPolicy(DeletionPropagation.FOREGROUND).delete(); KafkaUtils.waitForKafkaDeletion(namespaceName, clusterName); LOGGER.info("Checking actual secrets after Kafka deletion"); caSecrets.forEach(caSecret -> { String secretName = caSecret.getMetadata().getName(); LOGGER.info("Checking that {} secret is still present", secretName); assertNotNull(kubeClient(namespaceName).getSecret(namespaceName, secretName)); LOGGER.info("Deleting 
secret: {}", secretName); kubeClient(namespaceName).deleteSecret(namespaceName, secretName); }); LOGGER.info("Deploying Kafka with generateSecretOwnerReference set to true"); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(secondClusterName, 3) .editOrNewSpec() .editOrNewClusterCa() .withGenerateSecretOwnerReference(true) .endClusterCa() .editOrNewClientsCa() .withGenerateSecretOwnerReference(true) .endClientsCa() .endSpec() .build()); caSecrets = kubeClient(namespaceName).listSecrets(namespaceName).stream() .filter(secret -> secret.getMetadata().getName().contains(KafkaResources.clusterCaKeySecretName(secondClusterName)) || secret.getMetadata().getName().contains(KafkaResources.clientsCaKeySecretName(secondClusterName))).collect(Collectors.toList()); LOGGER.info("Deleting Kafka:{}", secondClusterName); KafkaResource.kafkaClient().inNamespace(namespaceName).withName(secondClusterName).withPropagationPolicy(DeletionPropagation.FOREGROUND).delete(); KafkaUtils.waitForKafkaDeletion(namespaceName, secondClusterName); LOGGER.info("Checking actual secrets after Kafka deletion"); caSecrets.forEach(caSecret -> { String secretName = caSecret.getMetadata().getName(); LOGGER.info("Checking that {} secret is deleted", secretName); TestUtils.waitFor("secret " + secretName + "deletion", Constants.GLOBAL_POLL_INTERVAL, Constants.GLOBAL_TIMEOUT, () -> kubeClient().getSecret(namespaceName, secretName) == null); }); } @ParallelNamespaceTest void testClusterCACertRenew(ExtensionContext extensionContext) { final TestStorage ts = new TestStorage(extensionContext, namespace); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(ts.getClusterName(), 3) .editOrNewSpec() .withNewClusterCa() .withRenewalDays(15) .withValidityDays(20) .endClusterCa() .endSpec() .build()); final Map<String, String> zkPods = PodUtils.podSnapshot(ts.getNamespaceName(), ts.getZookeeperSelector()); final Map<String, String> kafkaPods = 
PodUtils.podSnapshot(ts.getNamespaceName(), ts.getKafkaSelector()); final Map<String, String> eoPod = DeploymentUtils.depSnapshot(ts.getNamespaceName(), ts.getEoDeploymentName()); Secret clusterCASecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), KafkaResources.clusterCaCertificateSecretName(ts.getClusterName())); X509Certificate cacert = SecretUtils.getCertificateFromSecret(clusterCASecret, "ca.crt"); Date initialCertStartTime = cacert.getNotBefore(); Date initialCertEndTime = cacert.getNotAfter(); // Check Broker kafka certificate dates Secret brokerCertCreationSecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), ts.getClusterName() + "-kafka-brokers"); X509Certificate kafkaBrokerCert = SecretUtils.getCertificateFromSecret(brokerCertCreationSecret, ts.getClusterName() + "-kafka-0.crt"); Date initialKafkaBrokerCertStartTime = kafkaBrokerCert.getNotBefore(); Date initialKafkaBrokerCertEndTime = kafkaBrokerCert.getNotAfter(); Date initialZkCertStartTime = null; Date initialZkCertEndTime = null; Secret zkCertCreationSecret = null; X509Certificate zkBrokerCert = null; if (!Environment.isKRaftModeEnabled()) { // Check Zookeeper certificate dates zkCertCreationSecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), ts.getClusterName() + "-zookeeper-nodes"); zkBrokerCert = SecretUtils.getCertificateFromSecret(zkCertCreationSecret, ts.getClusterName() + "-zookeeper-0.crt"); initialZkCertStartTime = zkBrokerCert.getNotBefore(); initialZkCertEndTime = zkBrokerCert.getNotAfter(); } LOGGER.info("Change of kafka validity and renewal days - reconciliation should start."); CertificateAuthority newClusterCA = new CertificateAuthority(); newClusterCA.setRenewalDays(150); newClusterCA.setValidityDays(200); KafkaResource.replaceKafkaResourceInSpecificNamespace(ts.getClusterName(), k -> k.getSpec().setClusterCa(newClusterCA), ts.getNamespaceName()); // On the next reconciliation, the Cluster Operator performs a 
`rolling update`: // a) ZooKeeper // b) Kafka // c) and other components to trust the new Cluster CA certificate. (i.e., EntityOperator) if (!Environment.isKRaftModeEnabled()) { RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(ts.getNamespaceName(), ts.getZookeeperSelector(), 3, zkPods); } RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(ts.getNamespaceName(), ts.getKafkaSelector(), 3, kafkaPods); DeploymentUtils.waitTillDepHasRolled(ts.getNamespaceName(), ts.getEoDeploymentName(), 1, eoPod); // Read renewed secret/certs again clusterCASecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), KafkaResources.clusterCaCertificateSecretName(ts.getClusterName())); cacert = SecretUtils.getCertificateFromSecret(clusterCASecret, "ca.crt"); Date changedCertStartTime = cacert.getNotBefore(); Date changedCertEndTime = cacert.getNotAfter(); // Check renewed Broker kafka certificate dates brokerCertCreationSecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), ts.getClusterName() + "-kafka-brokers"); kafkaBrokerCert = SecretUtils.getCertificateFromSecret(brokerCertCreationSecret, ts.getClusterName() + "-kafka-0.crt"); Date changedKafkaBrokerCertStartTime = kafkaBrokerCert.getNotBefore(); Date changedKafkaBrokerCertEndTime = kafkaBrokerCert.getNotAfter(); LOGGER.info("Initial ClusterCA cert dates: " + initialCertStartTime + " --> " + initialCertEndTime); LOGGER.info("Changed ClusterCA cert dates: " + changedCertStartTime + " --> " + changedCertEndTime); LOGGER.info("KafkaBroker cert creation dates: " + initialKafkaBrokerCertStartTime + " --> " + initialKafkaBrokerCertEndTime); LOGGER.info("KafkaBroker cert changed dates: " + changedKafkaBrokerCertStartTime + " --> " + changedKafkaBrokerCertEndTime); String msg = "Error: original cert-end date: '" + initialCertEndTime + "' ends sooner than changed (prolonged) cert date '" + changedCertEndTime + "'!"; assertThat(msg, initialCertEndTime.compareTo(changedCertEndTime) < 0); 
assertThat("Broker certificates start dates have not been renewed.", initialKafkaBrokerCertStartTime.compareTo(changedKafkaBrokerCertStartTime) < 0); assertThat("Broker certificates end dates have not been renewed.", initialKafkaBrokerCertEndTime.compareTo(changedKafkaBrokerCertEndTime) < 0); if (!Environment.isKRaftModeEnabled()) { // Check renewed Zookeeper certificate dates zkCertCreationSecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), ts.getClusterName() + "-zookeeper-nodes"); zkBrokerCert = SecretUtils.getCertificateFromSecret(zkCertCreationSecret, ts.getClusterName() + "-zookeeper-0.crt"); Date changedZkCertStartTime = zkBrokerCert.getNotBefore(); Date changedZkCertEndTime = zkBrokerCert.getNotAfter(); LOGGER.info("Zookeeper cert creation dates: " + initialZkCertStartTime + " --> " + initialZkCertEndTime); LOGGER.info("Zookeeper cert changed dates: " + changedZkCertStartTime + " --> " + changedZkCertEndTime); assertThat("Zookeeper certificates start dates have not been renewed.", initialZkCertStartTime.compareTo(changedZkCertStartTime) < 0); assertThat("Zookeeper certificates end dates have not been renewed.", initialZkCertEndTime.compareTo(changedZkCertEndTime) < 0); } } @ParallelNamespaceTest void testClientsCACertRenew(ExtensionContext extensionContext) { final TestStorage ts = new TestStorage(extensionContext, namespace); resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(ts.getClusterName(), 3) .editOrNewSpec() .withNewClientsCa() .withRenewalDays(15) .withValidityDays(20) .endClientsCa() .endSpec() .build()); String username = "strimzi-tls-user-" + new Random().nextInt(Integer.MAX_VALUE); resourceManager.createResource(extensionContext, KafkaUserTemplates.tlsUser(ts.getClusterName(), username).build()); final Map<String, String> zkPods = PodUtils.podSnapshot(ts.getNamespaceName(), ts.getZookeeperSelector()); final Map<String, String> kafkaPods = PodUtils.podSnapshot(ts.getNamespaceName(), 
ts.getKafkaSelector()); final Map<String, String> eoPod = DeploymentUtils.depSnapshot(ts.getNamespaceName(), ts.getEoDeploymentName()); // Check initial clientsCA validity days Secret clientsCASecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), KafkaResources.clientsCaCertificateSecretName(ts.getClusterName())); X509Certificate cacert = SecretUtils.getCertificateFromSecret(clientsCASecret, "ca.crt"); Date initialCertStartTime = cacert.getNotBefore(); Date initialCertEndTime = cacert.getNotAfter(); // Check initial kafkauser validity days X509Certificate userCert = SecretUtils.getCertificateFromSecret(kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), username), "user.crt"); Date initialKafkaUserCertStartTime = userCert.getNotBefore(); Date initialKafkaUserCertEndTime = userCert.getNotAfter(); LOGGER.info("Change of kafka validity and renewal days - reconciliation should start."); CertificateAuthority newClientsCA = new CertificateAuthority(); newClientsCA.setRenewalDays(150); newClientsCA.setValidityDays(200); KafkaResource.replaceKafkaResourceInSpecificNamespace(ts.getClusterName(), k -> k.getSpec().setClientsCa(newClientsCA), ts.getNamespaceName()); // On the next reconciliation, the Cluster Operator performs a `rolling update` only for the // `Kafka pods`. 
// a) ZooKeeper must not roll if (!Environment.isKRaftModeEnabled()) { RollingUpdateUtils.waitForNoRollingUpdate(ts.getNamespaceName(), ts.getZookeeperSelector(), zkPods); } // b) Kafka has to roll RollingUpdateUtils.waitTillComponentHasRolledAndPodsReady(ts.getNamespaceName(), ts.getKafkaSelector(), 3, kafkaPods); // c) EO must roll (because User Operator uses Clients CA for issuing user certificates) DeploymentUtils.waitTillDepHasRolled(ts.getNamespaceName(), ts.getEoDeploymentName(), 1, eoPod); // Read renewed secret/certs again clientsCASecret = kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), KafkaResources.clientsCaCertificateSecretName(ts.getClusterName())); cacert = SecretUtils.getCertificateFromSecret(clientsCASecret, "ca.crt"); Date changedCertStartTime = cacert.getNotBefore(); Date changedCertEndTime = cacert.getNotAfter(); userCert = SecretUtils.getCertificateFromSecret(kubeClient(ts.getNamespaceName()).getSecret(ts.getNamespaceName(), username), "user.crt"); Date changedKafkaUserCertStartTime = userCert.getNotBefore(); Date changedKafkaUserCertEndTime = userCert.getNotAfter(); LOGGER.info("Initial ClientsCA cert dates: " + initialCertStartTime + " --> " + initialCertEndTime); LOGGER.info("Changed ClientsCA cert dates: " + changedCertStartTime + " --> " + changedCertEndTime); LOGGER.info("Initial userCert dates: " + initialKafkaUserCertStartTime + " --> " + initialKafkaUserCertEndTime); LOGGER.info("Changed userCert dates: " + changedKafkaUserCertStartTime + " --> " + changedKafkaUserCertEndTime); String msg = "Error: original cert-end date: '" + initialCertEndTime + "' ends sooner than changed (prolonged) cert date '" + changedCertEndTime + "'"; assertThat(msg, initialCertEndTime.compareTo(changedCertEndTime) < 0); assertThat("UserCert start date has been renewed", initialKafkaUserCertStartTime.compareTo(changedKafkaUserCertStartTime) < 0); assertThat("UserCert end date has been renewed", 
initialKafkaUserCertEndTime.compareTo(changedKafkaUserCertEndTime) < 0); } }
\section{Introduction}
The basic analogues of the Appell functions were defined and studied by Jackson \cite{j1, j2}. In \cite{sr1, sr2, sr3}, Srivastava defined and investigated bibasic $q$-Appell functions. In an earlier paper we showed that a Humbert confluent hypergeometric series with two bases can be reduced to an expression with only one base, and we also gave an expansion formula for the Humbert hypergeometric function. For most of the notations and definitions needed in this work, the reader is referred to the papers by Agarwal et al. \cite{ajc}, Andrews \cite{an}, Thomas Ernst \cite{te1, te2}, Ghany \cite{gh}, Sahai and Verma \cite{sv}, Purohit \cite{p}, Verma and Sahai \cite{vs}, Yadav et al. \cite{yp}, Srivastava and Shehata \cite{ss}, and to the book by Gasper and Rahman \cite{gr}. Recently, Shehata investigated the $(p,q)$-Humbert and $(p,q)$-Bessel functions in \cite{sh1, sh2}. In \cite{sh3, sh4}, Shehata introduced the basic Horn functions $H_{3}$, $H_{4}$, $H_{6}$ and $H_{7}$. Motivated by the aforementioned work, we derive $q$-recurrence relations, $q$-derivative formulas, $q$-partial derivative relations and a summation formula for the bibasic Humbert confluent hypergeometric function $\Phi_{1}$ on two independent bases $q$ and $q_{1}$. To the best of our knowledge, these results have not appeared in the literature.

For $0<|q|<1$, $q\in\mathbb{C}$, the $q$-shifted factorial $(q^{a};q)_{k}$ is defined as
\begin{eqnarray}
\begin{split}
&(q^{a};q)_{k}=\left\{
\begin{array}{ll}
\prod_{r=0}^{k-1}(1-q^{a+r}), & \hbox{$k\geq1$;} \\
1, & \hbox{$k=0$.}
\end{array}
\right.\\
& =\left\{
\begin{array}{ll}
(1-q^{a})(1-q^{a+1})\ldots(1-q^{a+k-1}), & \hbox{$k\in\Bbb{N}, q^{a}\in\mathbb{C}\setminus \{1, q^{-1}, q^{-2},\ldots,q^{1-k}\}$;} \\
1, & \hbox{$k=0,a\in\mathbb{C}$.}
\end{array}
\right.\label{1.1}
\end{split}
\end{eqnarray}
where $\mathbb{C}$ and $\mathbb{N}$ are the sets of complex and natural numbers, respectively. Let $f$ be a function defined on a subset of the real or complex plane.
We define the $q$-derivative, also referred to as the Jackson derivative \cite{j3}, as follows:
\begin{equation}
\begin{split}
D_{q}f(x)=\frac{f(x)-f(qx)}{(1-q)x}.\label{1.2}
\end{split}
\end{equation}
For $n\geq 0$ and $k\geq 0$, the following double-series rearrangement holds (Rainville \cite{ra})
\begin{eqnarray}
\begin{split}
\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\mathfrak{A}(k,n)=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\mathfrak{A}(k,n-k).\label{1.3}
\end{split}
\end{eqnarray}
For $0<|q|<1$, $0<|q_{1}|<1$, $q, q_{1}\in\mathbb{C}$, we define the bibasic Humbert hypergeometric function $\Phi_{1}$ on two independent bases $q$ and $q_{1}$ as follows:
\begin{eqnarray}
\begin{split}
&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k},\qquad q^{c}\neq 1, q^{-1}, q^{-2},\ldots.\label{1.4}
\end{split}
\end{eqnarray}
\section{Main Results}
In this section we derive recurrence relations, $q$-derivative formulas and related results for the bibasic Humbert confluent hypergeometric function $\Phi_{1}$ on two independent bases $q$ and $q_{1}$ in two variables.
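The definitions above translate directly into code. The following Python sketch (the helper names `q_poch`, `jackson_derivative` and `phi1` are ours, not from the paper) implements the $q$-shifted factorial (1.1), the Jackson derivative (1.2) and a truncated form of the double series (1.4), restricted to real parameters with $0<q,q_{1}<1$ so the series converges quickly.

```python
def q_poch(a, q, n):
    """q-shifted factorial (a; q)_n = prod_{r=0}^{n-1} (1 - a * q**r), cf. (1.1)."""
    result = 1.0
    for r in range(n):
        result *= 1.0 - a * q**r
    return result

def jackson_derivative(f, x, q):
    """Jackson q-derivative D_q f(x) = (f(x) - f(qx)) / ((1 - q) x), cf. (1.2)."""
    return (f(x) - f(q * x)) / ((1.0 - q) * x)

def phi1(a, b, c, q, q1, x, y, terms=40):
    """Truncated double series for the bibasic Humbert function Phi_1, cf. (1.4)."""
    total = 0.0
    for l in range(terms):
        for k in range(terms):
            total += (q_poch(q**a, q, l + k) * q_poch(q1**b, q1, l)
                      / (q_poch(q**c, q, l + k) * q_poch(q1, q1, l) * q_poch(q, q, k))
                      * x**l * y**k)
    return total

# Classical sanity check: D_q of x**n is the q-integer [n]_q = (1-q**n)/(1-q)
# times x**(n-1); here n = 3, x = 2, q = 0.5 gives 7.0 on both sides.
q = 0.5
d = jackson_derivative(lambda t: t**3, 2.0, q)
q_integer_3 = (1 - q**3) / (1 - q)
print(d, q_integer_3 * 2.0**2)  # 7.0 7.0
```

In the classical limit $q\to 1^{-}$ the Jackson derivative reduces to the ordinary derivative, which makes checks like the one above easy to read off.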
\begin{thm} The following relations for $\Phi_{1}$ are true \begin{eqnarray} \begin{split} &\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,y)=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)+\frac{q^{a}(1-q)y}{1-q^{c}}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y)\\ &+\frac{q^{a}}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,q y)-\frac{q^{a}}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q x,q y),q^{a},q^{c}\neq 1,\label{2.1} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} &\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{1}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)-\frac{q^{a}}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q x,y)\\ &+\frac{q^{a}(1-q)y}{1-q^{c}}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},q x,y),q^{a},q^{c}\neq 1,\label{2.2} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} &\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,y)=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &+\frac{q^{c-1}(1-q)(1-q^{a})y}{(1-q^{c-1})(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y)+\frac{q^{c-1}}{1-q^{c-1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,q y)\\ &-\frac{q^{c-1}}{1-q^{c-1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q x,q y),q^{c},q^{c-1}\neq 1\label{2.3} \end{split} \end{eqnarray} and \begin{eqnarray} \begin{split} &\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,y)=\frac{1}{1-q^{c-1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &-\frac{q^{c-1}}{1-q^{c-1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q x,y)+\frac{q^{c-1}(1-q)(1-q^{a})y}{(1-q^{c-1})(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},q x,y),q^{c-1}\neq 1.\label{2.4} \end{split} \end{eqnarray} \end{thm} \begin{proof} We first prove identity (\ref{2.1}). 
Using the relation \begin{eqnarray*} \begin{split} (q^{a};q)_{\ell+k+1}=(1-q^{a})(q^{a+1};q)_{\ell+k},\\ (q^{c};q)_{\ell+k+1}=(1-q^{c})(q^{c+1};q)_{\ell+k}, \end{split} \end{eqnarray*} we have \begin{eqnarray*} \begin{split} &\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,y)-\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &=q^{a}\sum_{\ell,k=0}^{\infty}\bigg{[}\frac{1-q^{\ell+k}}{1-q^{a}}\bigg{]}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=\frac{q^{a}(1-q)}{1-q^{a}}\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k+1}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k+1}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k+1}+\frac{q^{a}}{1-q^{a}}\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}(qy)^{k}\\ &-\frac{q^{a}}{1-q^{a}}\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}(qx)^{\ell}(qy)^{k}\\ &=\frac{q^{a}(1-q)y}{1-q^{c}}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y)+\frac{q^{a}}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,q y)\\ &-\frac{q^{a}}{1-q^{a}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q x,q y). 
\end{split} \end{eqnarray*}
In a similar way to the proof of equation (\ref{2.1}), we obtain the relations (\ref{2.2})-(\ref{2.4}). \end{proof}
\begin{thm} The following relations for $\Phi_{1}$ hold true
\begin{eqnarray} \begin{split} (1-q^{a})\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,y)&=(1-q^{a+1-c})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &+q^{a+1-c}(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,y),\label{2.5} \end{split} \end{eqnarray}
\begin{eqnarray} \begin{split} \Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},x,y)=&q^{c}\Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},q x,q y)\\ &+(1-q^{c})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y),\label{2.6} \end{split} \end{eqnarray}
\begin{eqnarray} \begin{split} \Phi_{1}(q^{a-1},q_{1}^{b};q^{c};q,q_{1},x,y)=&q^{a-1}\Phi_{1}(q^{a-1},q_{1}^{b};q^{c};q,q_{1},q x,q y)\\ &+(1-q^{a-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y),\label{2.7} \end{split} \end{eqnarray}
\begin{eqnarray} \begin{split} (1-q^{a})\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y)=&(1-q^{a-c})\Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},x,y)\\ &+q^{a-c}(1-q^{c})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\label{2.8} \end{split} \end{eqnarray}
and
\begin{eqnarray} \begin{split} &q^{-c}(1-q^{a})\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y)=(1-q^{a-c})\Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},qx,qy)\\ &+q^{-c}(1-q^{c})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y).\label{2.9} \end{split} \end{eqnarray}
\end{thm}
\begin{proof} From the definition (\ref{1.4}) and the relation
\begin{eqnarray*} \begin{split} \frac{1-q^{c-1}}{(q^{c-1};q)_{\ell+k}}=\frac{1-q^{c+\ell+k-1}}{(q^{c};q)_{\ell+k}}=\frac{1}{(q^{c};q)_{\ell+k-1}}, \end{split} \end{eqnarray*}
we get
\begin{eqnarray*} \begin{split} &q^{a+1-c}(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,y)=\sum_{\ell,k=0}^{\infty}\frac{(q^{a+1-c}-q^{a+\ell+k})(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\
&=\sum_{\ell,k=0}^{\infty}\frac{(1-q^{a+\ell+k})(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}-\sum_{\ell,k=0}^{\infty}\frac{(1-q^{a+1-c})(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=(1-q^{a})\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,y)-(1-q^{a+1-c})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y). \end{split} \end{eqnarray*}
Similarly, we obtain the results (\ref{2.6})-(\ref{2.9}). \end{proof}
\begin{thm} The following relations hold true
\begin{eqnarray} \begin{split} &(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)+q_{1}^{b}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q_{1}x,y)=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y),\label{2.10} \end{split} \end{eqnarray}
\begin{eqnarray} \begin{split} &\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &+\frac{xq_{1}^{b}(1-q^{a})}{(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b+1};q^{c+1};q,q_{1},x,y),q^{c}\neq 1\label{2.11} \end{split} \end{eqnarray}
and
\begin{eqnarray} \begin{split} &\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q_{1}x,y)\\ &+\frac{x(1-q^{a})}{(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b+1};q^{c+1};q,q_{1},x,y),q^{c}\neq 1.\label{2.12} \end{split} \end{eqnarray}
\end{thm}
\begin{proof} Using the relation
\begin{eqnarray*} \begin{split} (q_{1}^{b};q_{1})_{\ell+1}=(1-q_{1}^{b})(q_{1}^{b+1};q_{1})_{\ell}=(1-q_{1}^{b+\ell})(q_{1}^{b};q_{1})_{\ell} \end{split} \end{eqnarray*}
and (\ref{1.4}), we have
\begin{eqnarray*} \begin{split} &(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)=\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &-q_{1}^{b}\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}(q_{1}x)^{\ell}y^{k}\\
&=\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)-q_{1}^{b}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q_{1}x,y). \end{split} \end{eqnarray*}
In a similar manner, we get the remaining results (\ref{2.11})-(\ref{2.12}). \end{proof}
\begin{thm} The following $q$-derivative relations for $\Phi_{1}$ hold
\begin{eqnarray} \begin{split} D_{x,q_{1}}^{r}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{(q^{a};q)_{r}(q_{1}^{b};q_{1})_{r}}{(1-q_{1})^{r}(q^{c};q)_{r}}\Phi_{1}(q^{a+r},q_{1}^{b+r};q^{c+r};q,q_{1},x,y),\label{2.14} \end{split} \end{eqnarray}
\begin{eqnarray} \begin{split} D_{y,q}^{s}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{(q^{a};q)_{s}}{(1-q)^{s}(q^{c};q)_{s}}\Phi_{1}(q^{a+s},q_{1}^{b};q^{c+s};q,q_{1},x,y)\label{2.15} \end{split} \end{eqnarray}
and
\begin{eqnarray} \begin{split} D_{x,q_{1}}^{r}D_{y,q}^{s}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{(q^{a};q)_{r+s}(q_{1}^{b};q_{1})_{r}}{(1-q_{1})^{r}(1-q)^{s}(q^{c};q)_{r+s}}\Phi_{1}(q^{a+r+s},q_{1}^{b+r};q^{c+r+s};q,q_{1},x,y).\label{2.16} \end{split} \end{eqnarray}
\end{thm}
\begin{proof} Applying the $q$-derivative (\ref{1.2}) to the series (\ref{1.4}), we get
\begin{eqnarray} \begin{split} D_{x,q_{1}}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\sum_{\ell,k=0}^{\infty}\frac{1-q_{1}^{\ell}}{1-q_{1}}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell-1}y^{k}\\ &=\sum_{\ell,k=0}^{\infty}\frac{1}{1-q_{1}}\frac{(q^{a};q)_{\ell+k+1}(q_{1}^{b};q_{1})_{\ell+1}}{(q^{c};q)_{\ell+k+1}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=\frac{(1-q^{a})(1-q_{1}^{b})}{(1-q_{1})(1-q^{c})}\sum_{\ell,k=0}^{\infty}\frac{(q^{a+1};q)_{\ell+k}(q_{1}^{b+1};q_{1})_{\ell}}{(q^{c+1};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=\frac{(1-q^{a})(1-q_{1}^{b})}{(1-q_{1})(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b+1};q^{c+1};q,q_{1},x,y)\label{2.17} \end{split} \end{eqnarray}
and
\begin{eqnarray} \begin{split}
D_{y,q}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\sum_{\ell=0,k=1}^{\infty}\frac{1}{1-q}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k-1}}x^{\ell}y^{k-1}\\ &=\frac{(1-q^{a})}{(1-q)(1-q^{c})}\sum_{\ell,k=0}^{\infty}\frac{(q^{a+1};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c+1};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=\frac{(1-q^{a})}{(1-q)(1-q^{c})}\Phi_{1}(q^{a+1},q_{1}^{b};q^{c+1};q,q_{1},x,y).\label{2.18} \end{split} \end{eqnarray} Iterating these $q$-derivatives on $\Phi_{1}$ $r$ times and $s$ times, we obtain (\ref{2.14}) and (\ref{2.15}). The mixed $q$-derivative given by (\ref{2.16}) can then be easily obtained. \end{proof} \begin{thm} The $q$-differential recursion relations for $\Phi_{1}$ hold \begin{eqnarray} \begin{split} xD_{x,q_{1}}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{(1-q_{1}^{b})}{(1-q_{1})q_{1}^{b}}\bigg{[}\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)-\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\bigg{]},\label{2.19} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} xD_{x,q_{1}}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{(1-q_{1}^{b})}{(1-q_{1})}\bigg{[}\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)-\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q_{1}x,y)\bigg{]},\label{2.20} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} D_{y,q}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{1}{1-q}\bigg{[}\frac{(1-q^{a-c})}{(1-q^{c})}\Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},x,y)\\ &+q^{a-c}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\bigg{]},\quad q^{c}\neq 1\label{2.21} \end{split} \end{eqnarray} and \begin{eqnarray} \begin{split} D_{y,q}&\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\frac{1}{1-q}\bigg{[}\frac{q^{c}(1-q^{a-c})}{(1-q^{c})}\Phi_{1}(q^{a},q_{1}^{b};q^{c+1};q,q_{1},qx,qy)\\ &+\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\bigg{]},\quad q^{c}\neq 1.\label{2.22} \end{split} \end{eqnarray} \end{thm} \begin{proof} From (\ref{2.17}) and (\ref{2.11}), we get (\ref{2.19}).
Similarly, we obtain the results (\ref{2.20})-(\ref{2.22}). \end{proof} \begin{thm} The following relations for $\Phi_{1}$ hold: \begin{eqnarray} \begin{split} &(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,xy)=(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy)\\ &+(1-q)q^{c-1}xD_{x,q}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy),\label{2.23} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} &(1-q^{a})\Phi_{1}(q^{a+1},q_{1}^{b};q^{c};q,q_{1},x,xy)=(1-q^{a})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy)\\ &+(1-q)q^{a}xD_{x,q}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy),\label{2.24} \end{split} \end{eqnarray} \begin{eqnarray} \begin{split} &(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)=(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &+q_{1}^{b}(1-q_{1})xD_{x,q_{1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\label{2.25} \end{split} \end{eqnarray} and \begin{eqnarray} \begin{split} &(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b+1};q^{c};q,q_{1},x,y)=(1-q_{1})xD_{x,q_{1}}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)\\ &+(1-q_{1}^{b})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},q_{1}x,y).\label{2.26} \end{split} \end{eqnarray} \end{thm} \begin{proof} From (\ref{1.4}), we have \begin{eqnarray*} \begin{split} &(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c-1};q,q_{1},x,xy)\\ &=\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k-1}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell+k}y^{k}\\ &=\sum_{\ell,k=0}^{\infty}\frac{(1-q^{c+\ell+k-1})(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell+k}y^{k}\\ &=\sum_{\ell,k=0}^{\infty}\frac{(1-q^{c-1}+q^{c-1}(1-q^{\ell+k}))(q^{a};q)_{\ell+k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell+k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell+k}y^{k}\\ &=(1-q^{c-1})\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy)+(1-q)q^{c-1}xD_{x,q}\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,xy). \end{split} \end{eqnarray*} Similarly, we obtain the results (\ref{2.24})-(\ref{2.26}).
\end{proof} \begin{thm} The summation formula for $\Phi_{1}$ holds true \begin{eqnarray} \begin{split} \Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\sum_{\ell=0}^{\infty}\frac{(q^{a};q)_{\ell}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell}(q_{1};q_{1})_{\ell}}x^{\ell} \;_{2}\Phi_{1}(q^{a+\ell},0;q^{c+\ell};q,y).\label{2.27} \end{split} \end{eqnarray} \end{thm} \begin{proof} Starting from the definition of $\Phi_{1}$ and using (\ref{1.3}), we have \begin{eqnarray*} \begin{split} &\Phi_{1}(q^{a},q_{1}^{b};q^{c};q,q_{1},x,y)=\sum_{\ell,k=0}^{\infty}\frac{(q^{a};q)_{\ell}(q^{a+\ell};q)_{k}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell}(q^{c+\ell};q)_{k}(q_{1};q_{1})_{\ell}(q;q)_{k}}x^{\ell}y^{k}\\ &=\sum_{\ell=0}^{\infty}\frac{(q^{a};q)_{\ell}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell}(q_{1};q_{1})_{\ell}}x^{\ell}\sum_{k=0}^{\infty}\frac{(q^{a+\ell};q)_{k}}{(q^{c+\ell};q)_{k}(q;q)_{k}}y^{k}\\ &=\sum_{\ell=0}^{\infty}\frac{(q^{a};q)_{\ell}(q_{1}^{b};q_{1})_{\ell}}{(q^{c};q)_{\ell}(q_{1};q_{1})_{\ell}}x^{\ell}\;_{2}\Phi_{1}(q^{a+\ell},0;q^{c+\ell};q,y). \end{split} \end{eqnarray*} \end{proof} \section{Concluding remarks} We conclude with the remark that the technique used here can be employed to yield a variety of further interesting relations for the family of bibasic Humbert hypergeometric functions of two variables. As with the bibasic Humbert hypergeometric function, these recursion formulas may find applications in numerous branches of mathematics, mathematical physics, engineering, and associated areas of study.
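The contiguous relations derived above lend themselves to a quick numerical sanity check. The sketch below is illustrative only and not part of the paper: the helper functions `qpoch` and `phi1` and all parameter values (chosen with $|q|,|q_{1}|<1$ and small $|x|,|y|$ for convergence) are assumptions for the test. It truncates the double series defining $\Phi_{1}$ and verifies relation (\ref{2.10}); since that relation holds term by term, the truncated sums satisfy it to machine precision.

```python
def qpoch(a, q, n):
    """q-Pochhammer symbol (a; q)_n = prod_{j=0}^{n-1} (1 - a*q^j)."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def phi1(qa, qb, qc, q, q1, x, y, N=40):
    """Truncated double series for Phi_1; qa, qb, qc stand for the
    numerical values of q^a, q1^b and q^c."""
    total = 0.0
    for l in range(N):
        for k in range(N):
            total += (qpoch(qa, q, l + k) * qpoch(qb, q1, l) * x**l * y**k
                      / (qpoch(qc, q, l + k) * qpoch(q1, q1, l) * qpoch(q, q, k)))
    return total

# Relation (2.10):
# (1 - q1^b) Phi1(q^a, q1^(b+1); q^c; x, y) + q1^b Phi1(q^a, q1^b; q^c; q1*x, y)
#   = Phi1(q^a, q1^b; q^c; x, y)
q, q1 = 0.3, 0.4
qa, qb, qc = 0.5, 0.6, 0.7   # arbitrary values of q^a, q1^b, q^c
x, y = 0.2, 0.2
lhs = (1 - qb) * phi1(qa, qb * q1, qc, q, q1, x, y) \
    + qb * phi1(qa, qb, qc, q, q1, q1 * x, y)
rhs = phi1(qa, qb, qc, q, q1, x, y)
assert abs(lhs - rhs) < 1e-10
```

The same truncated `phi1` can be used to spot-check the other contiguous relations, up to boundary terms of order $x^{N}$ that are negligible at this truncation.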
Heringhausen is a village ("Tor zum Valmetal", i.e. gateway to the Valme valley) and a district of the municipality of Bestwig (Hochsauerlandkreis). In January 2020, Heringhausen had 816 inhabitants. Geography The village lies at an average elevation of 380 m. The Heringhausen district covers an area of 6.15 km². Of this, the built-up area accounts for 0.3 km², plus 0.2 km² of roads, paths and squares. About 4 km² is forest. The village is known for its very large Christmas tree plantations, whose products are sold throughout Germany and abroad. Below the village lies a small reservoir on the Valme. History The village presumably came into being in the 9th or 10th century. It was first mentioned in a document in 1314, in a property register of the collegiate foundation (Stift) of Meschede. A water-powered iron hammer mill existed even before the Thirty Years' War. In Prussian times after 1816, Heringhausen initially belonged to the Bürgermeisterei of Eversberg, and from 1841 onward. The tax municipality (Steuergemeinde) of Heringhausen, founded in 1826, originally also included Halbeswig, Grimlinghausen (today Nierbachtal) and the miners' colonies of Ziegelwiese and Dörnberg as well as Andreasberg, founded in 1854. In 1858 the village belonged to the municipality of Velmede. Around 1865 it was split off from Velmede and thus became an independent municipality. Because of their mining character, the three last-named settlements were reassigned to Ramsbeck on 1 April 1910. Since the municipal reorganization that came into force on 1 January 1975, Heringhausen has been part of the municipality of Bestwig. Culture There are 14 clubs in Heringhausen that shape the cultural life of the village. The largest is the marksmen's brotherhood Schützenbruderschaft St. Jakobus 1873 e.V. Heringhausen with about 430 members. The Schützenfest is celebrated in Heringhausen every year on the Whitsun weekend. On the first weekend of September 2017, the Heringhausen marksmen hosted the 24th Kreisschützenfest of the Kreisschützenbund Meschede e.V. On the same weekend in 2018, the Bestwig municipal Schützenfest (Gemeindeschützenfest) took place in Heringhausen. The youngest club was founded in 2011 under the name "Dorfgemeinschaft Heringhausen e.V." and has about 300 members. Under the leadership of this village community association, the 700th anniversary of the village was celebrated in 2014 with a festival weekend. In 2013, a replica of the Heringhausen iron hammer mill, built by the Dorfgemeinschaft, was inaugurated in the village center. With support from LEADER 4 Mitten im Sauerland, the Dorfgemeinschaft also carried out the elaborate redesign of the rest and mining playground behind the Schützenhalle. The history of the "Bähnchen", the former narrow-gauge railway that carried ore from the Ramsbeck mine to Bestwig station, is to be presented there as well. On the Bähnchen cycle path in Heringhausen stands a "swallow hotel" (a nesting tower for swallows), likewise erected by the Dorfgemeinschaft. The people of Heringhausen are currently working on the enhancement of the Way of the Cross on the Kreuzberg, which dates from 1873. The Dorfgemeinschaft Heringhausen is at present building a "Mammut-Bank" (mammoth bench) and a "Corona-Kreuz" (Corona cross). Every year on Carnival Sunday, the carnival parade organized by the Karnevals-Club Mühls (KCM) takes place in Heringhausen. At the beginning of October, the Jakobus marksmen have for some years hosted the "Heringhauser Wasen". Heringhausen lies on the Way of St. James, which in this region runs from Paderborn to Elspe. Since 2022, the church of St. Nikolaus has housed a St. James pilgrim station with a pilgrim stele, where hikers and pilgrims can obtain the Heringhausen pilgrim stamp. Politics Coat of arms Sights Worth seeing is the Catholic parish church of St. Nikolaus, which dominates the village scene and was consecrated in 1965. The church houses a pilgrim station on the St. James route from Paderborn to Elspe. One of the finest club buildings in the area is the Schützenhalle St. Jakobus on Bestwiger Straße, completed in 1921. Above the village, "auf der Borg", stands a Marian wayside shrine from 1975. In the village center, a replica of the iron hammer mill once operated in Heringhausen was inaugurated in 2013. Since 2018, the Heringhausen goat breeders' association has kept a herd of goats during the summer months on a meadow above the village (the former "goat pasture"). In 2019/20, the rest and mining playground "Am Bähnchen" was upgraded. Notable natives Gottfried Hoberg (1857–1924), Catholic theologian, philologist, priest and university lecturer External links Homepage of the Schützenbruderschaft St. Jakobus 1873 e.V. Heringhausen, including: Reinhard Schmidtmann: Aus der Geschichte des Dorfes Heringhausen Homepage of the Dorfgemeinschaft Heringhausen e.V. Homepage of the municipality of Bestwig References District of Bestwig Former municipality (Hochsauerlandkreis) First mentioned in 1314 Municipality established in 1865 Municipality dissolved in 1975
\section{Introduction} Strain-induced control of complex magnetic states such as spin spirals or skyrmions~\cite{bergmann_interface-induced_2014, nagaosa_topological_2013} is a multifaceted approach towards manipulation of spin structures. Mechanical or piezoelectrical~\cite{cui_method_2013} setups as well as direct influence on the sample growth are used to investigate such effects with multiple experimental techniques on various systems. The observed phenomena are ascribed to changes of the effective anisotropy: uniaxial mechanical compression tunes the stability of the skyrmion lattice phase, e.g. in the helimagnet MnSi~\cite{nii_uniaxial_2015}, where hydrostatic pressure also decreases the helix period and the critical temperature~\cite{fak_pressure_2005}. On the other side, epitaxial strain created by growing multiferroic BiFeO$_3$ thin films on different substrates can destroy the cycloidal spin spiral and stabilize an antiferromagnetic state~\cite{sando_crafting_2013}, again via effective anisotropy. The non-collinear magnetic structures mentioned above are stabilized by the Dzyaloshinskii-Moriya interaction (DMI), which can also be affected by strain. This mechanism was found in FeGe, where uniaxial tensile strain dramatically distorts the skyrmion lattice~\cite{shibata_large_2015}. Although previous studies were mostly focused on the effects of spatially uniform strain, we investigate here the influence of strain variations on a local scale in an ultrathin magnetic film using spin-polarized scanning tunneling microscopy (SP-STM)~\cite{wiesendanger_spin_2009}. Our system consists of three atomic layers of Fe deposited on Ir(111). Because of the misfit between Ir (fcc, lattice constant \SI{3.84}{\angstrom}) and Fe (bcc, lattice constant \SI{2.86}{\angstrom}), the triple layer Fe film exhibits a dislocation line network. 
We observe that the epitaxial strain relief is not uniform within the ultrathin film, resulting in different regions exhibiting spin spirals with different periods, which vanish at different magnetic field strengths. We attribute these differences to spatial variations in the strength of the effective exchange coupling. Depending on the atom arrangement in the magnetic film, these spirals may or may not split up into single magnetic skyrmions~\cite{hsu_electric_2016}. \section{Experimental details} The experiments were performed in an ultrahigh vacuum system with a base pressure below $10^{-11}$ \si{\milli\bar}. Different chambers are used for substrate cleaning, Fe deposition and STM measurements. The Ir single crystal substrate was prepared by cycles of Ar-ion sputtering at \SI{800}{\electronvolt} and annealing up to \SI{1600}{\kelvin} for \SI{60}{\second}. The Fe film was then evaporated onto the clean substrate at elevated temperature (about \SI{200}{\celsius}) at a deposition rate of about 0.2 atomic layers per minute. Four different scanning tunneling microscopes were used for this work: three low temperature microscopes with He bath cryostats operating at \SI{4}{\kelvin}, \SI{5}{\kelvin} and \SI{8}{\kelvin}, respectively, as well as a variable temperature system equipped with a He flow cryostat. Out-of-plane external magnetic fields were applied during the low temperature measurements using superconducting coil magnets. We used antiferromagnetic bulk Cr tips for the spin-resolved measurements. All the STM topography images were measured in constant-current mode, where the stabilization current is kept constant by a closed feedback loop. The differential conductance maps were simultaneously recorded at a fixed sample bias voltage using a lock-in technique. \section{Morphology of the Fe film} The STM topography map presented in Fig.~\ref{structure}(a) shows a typical triple layer Fe film on an Ir(111) substrate.
The surface of the film exhibits a dense network of dislocation lines. These lines follow the three equivalent high-symmetry directions $\left\langle 11\bar{2} \right\rangle$ of the fcc(111) surface~\cite{hsu_electric_2016} as already reported for the double layer Fe film~\cite{hsu_guiding_2016}. \begin{figure} \includegraphics[scale=1]{figure1.pdf} \caption{\label{structure} (a) Constant-current STM topography map of the triple layer Fe film on Ir(111). Double and single lines are indicated by the arrows. (b),(c) Zoom-in on double line areas. At positive bias, one can see that bright and dark lines alternate and show a double line feature, whereas at negative bias, the lines look all very similar. (d),(e) STM topography of single line regions. The single lines have the same appearance at any sample bias voltage. (f) Constant-current STM topography map of a triple layer Fe island on top of a double layer Fe film. The numbers in green circles indicate the local thickness of the film. The color scale was adjusted separately for the two terraces in order to highlight the matching of the dislocation lines. The areas between the bright lines on the double layer are bcc. (g),(h) Proposed structure models for the triple layer Fe film from the experimental observations presented in (a)-(f). 
\emph{Measurement parameters:} ${I = \SI{1}{\nano\ampere}}$ and (a)~${U = +\SI{200}{\milli\volt}}$, ${T = \SI{8}{\kelvin}}$, ${B = \SI{3.5}{\tesla}}$ ; (b)~${T = \SI{5}{\kelvin}}$, ${B = \SI{4.5}{\tesla}}$ ; (c)~${T = \SI{5}{\kelvin}}$, ${B = \SI{3}{\tesla}}$ ; (d)~${T = \SI{8}{\kelvin}}$, ${B = \SI{0}{\tesla}}$ ; (e)~${T = \SI{8}{\kelvin}}$, ${B = \SI{2.5}{\tesla}}$ ; (f)~${U = \SI{-700}{\milli\volt}}$, ${T = \SI{153}{\kelvin}}$, ${B = \SI{3}{\tesla}}$.} \end{figure} Two types of dislocation lines can be distinguished from the STM image of Fig.~\ref{structure}(a): at positive sample bias voltage, the dislocation lines in the region spanned by the white dashed arrow show an alternating bright and dark contrast as well as double line features (see detail in Fig.~\ref{structure}(b)). These areas will thus be named \emph{double lines} in the following. The spacing for double lines is ranging from \SI{2.3}{\nano\meter} to \SI{3}{\nano\meter}, which corresponds to half the structural period. On the other hand, in the region marked by the gray arrow, the lines are denser (spacing between \SI{1.8}{\nano\meter} and \SI{2.2}{\nano\meter}) and they all have the same appearance at any bias voltage (see Fig.~\ref{structure}(d) and Fig.~\ref{structure}(e)). Hence these lines will be designated as \emph{single lines}. At negative sample bias voltage, the distinction between single and double lines from the sole topography becomes challenging as illustrated in Fig.~\ref{structure}(c) and Fig.~\ref{structure}(e). Figures~\ref{structure}(g) and~\ref{structure}(h) show proposed atomic structure models for the triple layer Fe film. In both cases, the first layer Fe is pseudomorphic with respect to the Ir(111) substrate lattice~\cite{heinze_spontaneous_2011}. The double lines are located exactly on top of the dislocation lines of the double layer Fe film as shown in Fig.~\ref{structure}(f). 
The double layer lines seem to get closer when they approach the island: the strain relief is larger in the triple than in the double layer. In the atomic model, the second layer is uniaxially compressed with respect to a pseudomorphic layer and the arrangement of the Fe atoms is getting close to bcc(110). The lines correspond to the positions where the atoms are located on the fcc or hcp sites and the triple layer follows a bcc stacking on top of the double layer. In the case of the single lines, also the second layer grows pseudomorphically but in bcc stacking (see Fig.~\ref{structure}(h)) and only the third layer on top is uniaxially compressed. All the lines are identical and induced by lateral shifts of the atoms. As will be shown later by looking at the magnetic structure, the double line regions have mirror planes along the dislocation lines, whereas two mirror symmetric domains exist for the single line regions, in agreement with the structure model. \section{Zero field non-collinear magnetic structure} The difference between single and double line areas becomes more striking when we look at the magnetic structure. Figures~\ref{magnetism}(a) and \ref{magnetism}(b) show respectively a spin-resolved constant-current map and a simultaneously spin-resolved differential conductance map measured with an out-of-plane spin-sensitive Cr bulk tip at low temperature. These SP-STM measurements reveal that spin spirals propagate along both types of dislocation lines. The nature of the spin spirals is cycloidal: this has been shown already for the double layer Fe spirals~\cite{hsu_guiding_2016} as well as for the double line regions in the triple layer Fe~\cite{hsu_electric_2016} and is assumed also to be the case for the single line areas since the Fe-Ir interface-induced DMI is large~\cite{heinze_spontaneous_2011}. However, the appearance of the spin spirals varies in the different regions of the Fe film. 
The wavelength of the spirals is smaller in the double line regions (\SI{3}{\nano\meter} to \SI{4.5}{\nano\meter}) than in the single line regions (between \SI{5}{\nano\meter} and \SI{10}{\nano\meter}). \begin{figure}[H] \centering \includegraphics[scale=1]{figure2.pdf} \caption{\label{magnetism} (a),(b) Spin-resolved constant-current map and spin-resolved differential tunneling conductance map of a triple layer Fe film on Ir(111) measured simultaneously using a Cr bulk tip with out-of-plane spin sensitivity. The magnetic sensitivity of the tip is simply deduced from the magnetic contrast on the spirals, which is identical for all the propagation directions. The magnetic contrast is visible on both the constant-current and differential conductance maps. The ellipses on image (b) mark a zone where the period of the spiral is changing: this can be correlated with the spacing between the dislocation lines indicated in (a). (c) Detail of the spin-resolved differential tunneling conductance map of a double line region. The spin spiral propagates along the lines and its wavefront exhibits a zigzag shape. (d),(f) Line profiles taken in the double and single line regions, respectively, at the positions indicated in (b). The displayed profiles are the mean values within the rectangles, each data line was laterally shifted to straighten the wavefront and offsets were applied to the resulting profiles for clarity. The red dashed lines are the results of fits with sine functions, showing that the spirals are homogeneous regardless of the considered region. (e) Spin-resolved differential conductance map of an area with single lines. Spin spirals with various wavelengths are visible and their wavefronts are tilted by about $\pm \SI{62}{\degree}$ with respect to the lines. 
\emph{Measurement~parameters:} no external magnetic field, (a),(b)~${U = \SI{-700}{\milli\volt}}$, ${I = \SI{1}{\nano\ampere}}$, ${T = \SI{8}{\kelvin}}$ ; (c)~${U = \SI{-700}{\milli\volt}}$, ${I = \SI{750}{\pico\ampere}}$, ${T = \SI{4}{\kelvin}}$; (e)~${U = \SI{-700}{\milli\volt}}$, ${I = \SI{1}{\nano\ampere}}$, $ {T = \SI{8}{\kelvin}}$.} \end{figure} Furthermore, the wavefront of the double line spirals has a zigzag shape~\cite{hsu_guiding_2016, hsu_electric_2016}, whereas that of the spirals in the single line regions is straight but tilted with respect to the lines as can be seen from the details of spin-resolved differential conductance maps in Fig.~\ref{magnetism}(c) and Fig.~\ref{magnetism}(e). The proposed structure models can explain these shapes for the wavefronts: the wavevector prefers to follow the bcc[001]-like rows of atoms as observed for the double layer Fe on Ir(111)~\cite{hsu_guiding_2016}, Cu(111)~\cite{phark_reduced-dimensionality-induced_2014} and W(110)~\cite{meckler_real-space_2009}. For the double lines, the direction of the rows alternate and this creates the magnetic zigzag structure, whereas the direction does not change for the single lines, as shown by the bcc(110)-like unit cells marked in red in Fig.~\ref{structure}(g) and Fig.~\ref{structure}(h). However, for the single line regions, two mirror symmetric structural domains are possible and found in the SP-STM data. For a strict coupling of the wavevector to the bcc [001] direction, the expected angle $\alpha$ between the wavefront and the dislocation lines (as defined in Fig.~\ref{magnetism}(c) and Fig.~\ref{magnetism}(e)) can be computed from the structure models: \begin{equation} \tan \alpha = \frac{\sqrt{3} \delta}{\delta+a} \end{equation} where $\delta$ is the line spacing and ${a = \SI{2.715}{\angstrom}}$ the in-plane interatomic distance for Ir(111). This expression is the same for both structures. This angle is below \SI{60}{\degree} and decreases with the line spacing. 
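As a quick numerical cross-check of this geometric prediction (an illustrative sketch, not part of the original analysis; the function name and the sampled spacings, taken from the ranges quoted above, are our own choices), the expected angle can be evaluated directly:

```python
import math

A_IR = 2.715  # in-plane interatomic distance a for Ir(111), in angstrom

def wavefront_angle(delta_nm, a=A_IR):
    """Expected angle alpha (degrees) between the spin-spiral wavefront and
    the dislocation lines, assuming the wavevector locks to the bcc[001]-like
    atom rows: tan(alpha) = sqrt(3)*delta / (delta + a)."""
    delta = 10.0 * delta_nm  # nm -> angstrom
    return math.degrees(math.atan(math.sqrt(3) * delta / (delta + a)))

for spacing in (1.8, 2.2, 2.3, 3.0):  # nm, typical single and double line spacings
    print(f"line spacing {spacing:.1f} nm -> alpha = {wavefront_angle(spacing):.1f} deg")
```

For the measured spacings the predicted angle stays between roughly 56 and 58 degrees, i.e. below 60 degrees and decreasing as the lines get closer.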
Yet the angle obtained from the data is slightly larger (up to \SI{10}{\degree}) and seems to change randomly. This deviation which is going towards a straighter wavefront for the double line regions was previously observed for the double layer spirals~\cite{hsu_guiding_2016}. The guiding of the wavevector might compete here with the presence of energetically disadvantageous kinks in the wavefront. For the single line areas, the angle is also larger, meaning that the wavefront is more perpendicular to the dislocation lines than expected. This effect could be attributed to boundary effects as well as to domain wall-like structures preferring to be as short as possible. Line profiles of spirals (Fig.~\ref{magnetism}(d) and Fig.~\ref{magnetism}(f)) taken in various regions can be fitted with sine functions. This sinusoidal shape indicates that the spirals are homogeneous, i.e. no direction of the magnetic moments is more favorable than the others. Yet epitaxial strain in ultrathin films creates an effective anisotropy via magnetoelastic coupling~\cite{bruno_magnetic_1989} and since the strain relief is not uniform in the triple layer Fe film, the total effective magnetic anisotropy is expected to vary between the different regions. This should result in more or less pronounced distortions of the spiral profiles depending on the dislocation line spacing. We therefore conclude from the homogeneity of all the observed spiral profiles that the effective magnetic anisotropy and its variations in the triple layer Fe film are small enough to be neglected in the following. \section{Tuning of the spiral period with strain relief} In Fig.~\ref{magnetism}(b), a single line area in which the wavelength of the spiral is changing is marked with ellipses. This variation occurs because the dislocation lines become more distant in the green zone than in the surrounding red and blue ones. 
The distance between the lines in each ellipse is given in the constant-current map of Fig.~\ref{magnetism}(a). The closer the lines, i.e. the larger the compression in the third layer Fe, the larger the spiral period. \begin{figure} \includegraphics[scale=1]{figure3.pdf} \caption{\label{plot_mag_strain} Dependence of the spiral period on the spacing of the dislocation lines for the triple layer Fe on Ir(111), both double and single line regions. The SP-STM measurements were performed at low temperature (\SI{4}{\kelvin} or \SI{8}{\kelvin}) on several different samples and the spiral periods were extracted using 2D Fourier transformation and fits of the real space data to sine functions, keeping only points with an error bar below 15\%. For the double line regions, the zigzag shape of the wavefront was not considered and the period was measured along the lines. The actual wavelength along the direction of the wavevector might be 10\% smaller. On the single line regions, the angle of the wavefront was taken into account in the determination of the period. It is evident that if the distance between the dislocation lines decreases, the spiral period increases, as observed already in Fig.~\protect\ref{magnetism}. The insets show the data (both spin-resolved constant-current map and differential conductance map as well as their Fourier transforms) used to determine the point marked with the red square. The axis on the right indicates that the effective exchange parameter should vary between \SI{0.6}{\pico\joule\per\meter} and \SI{2.2}{\pico\joule\per\meter} to create spirals with wavelengths between \SI{3}{\nano\meter} and \SI{10}{\nano\meter} from the simple model described in the text. } \end{figure} The graph in Fig.~\ref{plot_mag_strain} presents a systematic investigation of the effect of the strain relief on the wavelength of the spirals, for both double and single line regions. 
The spiral period and the spacing between dislocation lines were measured using Fourier transforms and fits of the real space data to sine functions in several different regions on several samples. The trend observed in Fig.~\ref{magnetism} is confirmed: the spiral period decreases when the line spacing increases. The line spacing is linked to the amplitude of the strain relief by the structure models: the mean value of the distance between the Fe atoms in the fcc [1\=10] direction in the top layer decreases with the line spacing. To give an order of magnitude, this distance is 5\% smaller for a \SI{1.8}{\nano\meter} than for a \SI{3}{\nano\meter} line spacing. When the distance between the dislocation lines changes, all the interatomic distances in the Fe film are modified, and it is thus expected that the strengths of the magnetic interactions will be affected. The relevant interactions for this system are the exchange couplings and the interface-induced DMI~\cite{fert_role_1980, bergmann_interface-induced_2014}. Based on the data shown in Fig.~\ref{magnetism}(d) and Fig.~\ref{magnetism}(f), we assume that the effective anisotropy is negligible. The DMI originates mainly from the Fe-Ir interface and since the first Fe layer is pseudomorphic regardless of the considered region, it should not be significantly influenced (contrary to bulk systems like FeGe~\cite{shibata_large_2015}). We therefore infer that the strain relief acts mostly on the exchange couplings to modify the period of the spirals. \textit{Ab initio} calculations for a free-standing Fe bcc(110) layer~\cite{shimada_ab_2012} also attributed the period variation of the spirals as well as their stability under in-plane strain to modifications of the exchange couplings. However, the spin spiral state is only stable in the free layer when a compressive strain is applied and the compression reduces the period of the spiral.
The calculated evolution of the spirals with the strain is thus opposite to what we measured but this discrepancy could result from the presence of the substrate and the complicated atomic structure of the Fe film. In order to estimate the magnitude of the strain effect, we consider a simplified one-dimensional micromagnetic model derived from the one proposed by Bogdanov and Hubert~\cite{bogdanov_thermodynamically_1994} in which only the effective isotropic exchange coupling and the DMI are kept. We completely ignore here that the film is not spatially uniform, resulting in spatially inhomogeneous magnetic coupling constants. Only average values are taken into account. The energy density is thus: \begin{equation} \label{energy_density} \mathcal{E} = A \sum_i \left(\frac{\partial \mathrm{\mathbf{m}}}{\partial x_i} \right)^2 + D \left( \mathrm{m}_z \frac{\partial \mathrm{m}_x}{\partial x} - \mathrm{m}_x \frac{\partial \mathrm{m}_z}{\partial x} \right) \end{equation} where $A$ is the effective exchange stiffness constant, $D$ the effective DMI constant and $\mathrm{\mathbf{m}}$ the reduced dimensionless magnetization. In this particular case, there is always a stable cycloidal spin spiral state in the system and its period $\lambda$ is simply: \begin{equation} \label{period} \lambda = 4\pi\frac{A}{D} \end{equation} For the monolayer Fe on Ir(111), the value of the DMI constant from \textit{ab initio} calculations is ${|d| = \SI{1.8}{\milli\electronvolt}}$~ \cite{heinze_spontaneous_2011} which gives the micromagnetic parameter ${D = \SI{2.8}{\milli\joule\per\square\meter}}$ for a three-layer-thick Fe film. As shown in Fig.~\ref{plot_mag_strain}, the exchange stiffness parameter should then vary between \SI{0.6}{\pico\joule\per\meter} and \SI{2.2}{\pico\joule\per\meter} to obtain magnetic periods ranging from \SI{3}{\nano\meter} to \SI{10}{\nano\meter}. 
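Inverting equation (\ref{period}) gives the effective exchange stiffness quoted above. The short sketch below is not part of the paper; it simply evaluates $A=\lambda D/(4\pi)$ with the stated ${D = \SI{2.8}{\milli\joule\per\square\meter}}$ and reproduces the quoted range:

```python
import math

D = 2.8e-3  # effective micromagnetic DMI constant in J/m^2 (from |d| = 1.8 meV)

def exchange_stiffness(wavelength_m, dmi=D):
    """Effective exchange stiffness A in J/m from the spiral period,
    inverting lambda = 4*pi*A/D of the exchange-DMI model."""
    return wavelength_m * dmi / (4.0 * math.pi)

for lam_nm in (3.0, 6.0, 10.0):  # observed spiral periods in nm
    A = exchange_stiffness(lam_nm * 1e-9)
    print(f"lambda = {lam_nm:4.1f} nm -> A = {A * 1e12:.2f} pJ/m")
```

This yields approximately \SI{0.67}{\pico\joule\per\meter} for a \SI{3}{\nano\meter} period and approximately \SI{2.23}{\pico\joule\per\meter} for a \SI{10}{\nano\meter} period, consistent with the stated range.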
These values are similar to ${A = \SI{2.0}{\pico\joule\per\meter}}$ found for the PdFe bilayer on Ir(111)~\cite{romming_field-dependent_2015} with a spiral period of about \SI{6}{\nano\meter}. Since the DMI originates from the interface, its effect should decrease when the thickness of the film increases, i.e. the typical length scale of the magnetic structure is expected to become larger. Indeed, the period of the nanoskyrmion lattice on the monolayer Fe on Ir(111) is \SI{1}{\nano\meter}~\cite{heinze_spontaneous_2011}, the wavelength of the double layer spiral is about \SI{1.6}{\nano\meter}~\cite{hsu_guiding_2016} and for patches of a size below \SI{100}{\nano\meter} (larger ones were not found on the samples), the quadruple layer appears ferromagnetic (as, for example, in Fig.~\ref{hysteresis}). \section{Skyrmions and domain walls in magnetic field} In external out-of-plane magnetic fields between \SI{1}{\tesla} and \SI{3}{\tesla}, the spin spirals in the double line regions split up into distorted magnetic skyrmions~\cite{hsu_electric_2016}, as illustrated in Fig.~\ref{adj_regions}. In contrast, no skyrmions are created in the single line regions. There, the spirals become inhomogeneous and behave like an assembly of independent \SI{360}{\degree} domain walls which disappear one by one when the field is increased. Adding a Zeeman term to the model proposed before (equation~(\ref{energy_density})) does not explain this difference. For an isotropic film without effective anisotropy, there is a stable skyrmionic phase for any pair $(A,D)$ under the appropriate external magnetic field. The crucial point for the triple layer Fe film on Ir(111) is the dislocation lines, which break the rotational symmetry.
Thus one cannot expect the existence of the typical circular-shaped skyrmions~\cite{hagemeister_pattern_2016} as observed, for example, in the PdFe bilayer on Ir(111)~\cite{romming_writing_2013}, and a priori, even the presence of skyrmions is not obvious. Nevertheless, distorted skyrmions appear in the double line regions. Their bean-like shape is induced by the local arrangement of the Fe atoms, similarly to the zigzag shape of the spin spiral wavefront. They are always located on top of three dislocation lines, two identical ones at the ends and a third different one in the center. This particular preferred position results from the local variation of the magnetic interaction within the layer. The skyrmions are hence pinned to the lines and naturally aligned on ``tracks'' (see Fig.~\ref{adj_regions} and Fig.~\ref{hysteresis}(g)). The atom arrangement is different for the single lines, and it appears that the skyrmion pinning effect is absent and only \SI{360}{\degree} domain walls are observed in magnetic field. \begin{figure} \includegraphics[scale=1]{figure4.pdf} \caption{\label{adj_regions} Spin-resolved differential tunneling conductance maps of a region with single lines surrounded by double lines (left) and of a region with double lines surrounded by single lines (right) in increasing out-of-plane magnetic field, measured with an out-of-plane spin-sensitive Cr bulk tip. When the external field increases, the spin spirals in the double line regions transform into distorted skyrmions whereas those in the areas with single lines become inhomogeneous and can be described as an assembly of independent \SI{360}{\degree} domain walls. When the width of these walls is similar to the width of the skyrmions in the adjacent areas, they couple to them. \emph{Measurement~parameters:} Left:~${U = \SI{-700}{\milli\volt}}$, ${I = \SI{1}{\nano\ampere}}$, ${T = \SI{8}{\kelvin}}$; Right:~${U = \SI{-500}{\milli\volt}}$, ${I = \SI{1}{\nano\ampere}}$, ${T = \SI{4}{\kelvin}}$.
} \end{figure} \section{Metastability and transition fields} A more quantitative investigation of the influence of an external magnetic field is shown in Fig.~\ref{hysteresis}. For this measurement series, the magnetic field was increased up to \SI{4}{\tesla} in \SI{0.5}{\tesla} steps. At \SI{4}{\tesla}, the sample has fully reached the ferromagnetic state. Then, the magnetic field was decreased again in steps to zero. Every region in the film behaves differently because of its structure and its interactions with adjacent areas (see Fig.~\ref{adj_regions}). Comparisons between pictures taken at the same field value during the up-sweep and the down-sweep reveal, however, that almost all the areas show hysteresis. The skyrmions and domain walls vanish at a much higher field than the one needed to produce them again. This indicates that the magnetic states are metastable. \begin{figure} \includegraphics[scale=1]{figure5.pdf} \caption{\label{hysteresis} (a)-(h) Spin-polarized differential tunneling conductance maps of an ultrathin Fe film on Ir(111). The numbers in green circles in (a) indicate the local Fe coverage. An external out-of-plane magnetic field was applied, increased in steps of \SI{0.5}{\tesla} up to \SI{4}{\tesla} (at this field the sample is completely polarized, i.e. ferromagnetic) and then decreased again to zero. Comparison between scans taken at the same field value shows a hysteretic behavior for the triple layer Fe, both in the regions with the double and single lines. (i) Plot of the magnetic state during the field sweep of the four areas marked in blue in (a). Areas I and II are double line regions, III and IV single line regions. \emph{Measurement~parameters:} ${U=\SI{-500}{\milli\volt}}, {I=\SI{1}{\nano\ampere}}, {T=\SI{4}{\kelvin}}$, out-of-plane spin-sensitive Cr bulk tip. 
} \end{figure} The transition fields to the ferromagnetic (FM) state are obtained from field-dependent measurements using the procedure described in Fig.~\ref{hysteresis}(i): the evolution of the magnetic states for different regions is indicated for an up- and a down-sweep of the magnetic field. The transition field value corresponds to the middle of the corresponding step and the error bar to the step height. Results for several samples are gathered in Fig.~\ref{critical_field}, where they are correlated with the spiral period in the absence of external magnetic field. Only sweeps with increasing field are considered in this figure; the field values are thus upper estimates due to the hysteresis effect. The transition field $B_\mathrm{t}$ decreases as the spiral period increases. The trend that higher external fields are required to destroy non-collinear spin structures with smaller spatial period is consistent with the observation that the spin spiral in the double layer Fe does not change in magnetic fields up to \SI{9}{\tesla}~\cite{hsu_guiding_2016}. \begin{figure}[H] \centering \includegraphics[scale=1]{figure6.pdf} \caption{\label{critical_field} Effect of the spiral period on the magnetic fields needed to reach the ferromagnetic state during an increasing field sweep. The correlation between the spiral period and spacing of the dislocation lines in the Fe layer shown in Fig.~\ref{plot_mag_strain} also leads to a dependence of the transition field on the strain relief. A trend that higher external fields are required to destroy non-collinear spin structures with smaller spatial period appears. The calculated transition fields were obtained from the phase diagram detailed by Bogdanov and Hubert~\protect\cite{bogdanov_thermodynamically_1994} with the parameters from Fig.~\ref{plot_mag_strain}: ${D = \SI{2.8}{\milli\joule\per\square\meter}}$ and ${K_\mathrm{eff} = 0}$.
The magnetic moment used is ${\mu = 2.7 \mu_\mathrm{B}}$ per Fe atom~\cite{heinze_spontaneous_2011}. Because of the metastability of the magnetic states revealed by the hysteresis, the experimental field values are upper estimates of the transition fields. The SP-STM measurements were performed at \SI{4}{\kelvin} on three different samples and the external magnetic field was increased in steps of \SI{0.5}{\tesla}. The error bars correspond to the height of the steps as indicated in Fig.~\ref{hysteresis}(i). } \end{figure} Although the simplified model~(\ref{energy_density}) does not help us to understand the absence of a skyrmionic phase in the single line regions, it reproduces the decrease of the transition field for larger magnetic structures once the Zeeman term ${\mathcal{E}_\mathrm{z} = -M_\mathrm{s}B\mathrm{m}_z}$ is included. In the zero effective anisotropy case, the transition field can be obtained from the phase diagram provided by Bogdanov and Hubert~\cite{bogdanov_thermodynamically_1994}. There is a threshold value for the reduced field parameter $h$ such that: \begin{equation} B_\mathrm{t} = \frac{D^2 h_\mathrm{t}}{A M_\mathrm{s}} = 4\pi\frac{Dh_\mathrm{t}}{\lambda M_\mathrm{s}} \end{equation} The saturation magnetization ${M_\mathrm{s} = \SI{1.77}{\mega\ampere\per\meter}}$ is estimated from the magnetic moment of 2.7 $\mu_\mathrm{B}$ per atom in the monolayer Fe~\cite{heinze_spontaneous_2011}. We did not consider here a potential variation of $M_\mathrm{s}$ with the strain relief. The threshold value $h_\mathrm{t}$ is different for the transition from spirals to the FM state and from skyrmions to the FM state: \begin{align} h^\mathrm{spiral}_\mathrm{t} & = 0.308 \\ h^\mathrm{skyrmion}_\mathrm{t} & = 0.401 \end{align} The transition fields are plotted as solid lines in Fig.~\ref{critical_field}, assuming again that ${D = \SI{2.8}{\milli\joule\per\square\meter}}$.
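To give a feeling for the magnitudes (our illustrative evaluation, not a value from the paper), a spiral with period ${\lambda = \SI{6}{\nano\meter}}$ and the parameters above yields:

```latex
% Example evaluation of the transition-field formula for \lambda = 6 nm,
% D = 2.8 mJ/m^2, M_s = 1.77 MA/m (results rounded to two digits).
\begin{align*}
B^\mathrm{spiral}_\mathrm{t}
 &= 4\pi\frac{D\,h^\mathrm{spiral}_\mathrm{t}}{\lambda M_\mathrm{s}}
  = \frac{4\pi \times \SI{2.8e-3}{\joule\per\square\meter} \times 0.308}
         {\SI{6e-9}{\meter} \times \SI{1.77e6}{\ampere\per\meter}}
  \approx \SI{1.0}{\tesla},\\
B^\mathrm{skyrmion}_\mathrm{t}
 &= B^\mathrm{spiral}_\mathrm{t}\,
    \frac{h^\mathrm{skyrmion}_\mathrm{t}}{h^\mathrm{spiral}_\mathrm{t}}
  \approx \SI{1.3}{\tesla}.
\end{align*}
```

These magnitudes are compatible with the \SI{1}{\tesla} to \SI{3}{\tesla} range in which the distorted skyrmions are observed.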
They are defined as the fields at which the energy of the FM state is equal to that of the spiral or skyrmion state, respectively. As expected, the experimental values are larger than the computed ones because of the metastability of the spiral and skyrmion states in increasing magnetic field. Remarkably, the smallest field values lie almost on the theoretical curve and none of them below it, which indicates a rather good agreement between the model and the actual system. This behavior of the transition fields supports our assumption that the effective exchange coupling is affected by the strain relief and is responsible for the observed variation of the magnetic properties of the Fe film. \section{Conclusion} Exploiting the strain relief in epitaxial ultrathin films is an effective way to control their magnetic state precisely and locally. Both the typical size of the spin structures and the transition fields could be tuned. Moreover, the actual uniaxial structure and pinning properties of ultrathin films exhibiting dislocation lines may make it possible to stabilize a skyrmion state as well as to confine skyrmions on well-defined tracks with widths on the order of a few nanometers. Combined with the possibility to write and delete the skyrmions by a local electric field~\cite{hsu_electric_2016}, this could be of great interest in view of future racetrack-based spintronic devices~\cite{parkin_magnetic_2008, fert_skyrmions_2013, wiesendanger_nanoscale_2016}. \begin{acknowledgments} Financial support by the European Union via the Horizon 2020 research and innovation programme under grant agreement No. 665095, by the Deutsche Forschungsgemeinschaft via SFB668-A8, and by the Hamburgische Stiftung f\"{u}r Wissenschaften, Entwicklung und Kultur Helmut und Hannelore Greve is gratefully acknowledged. \end{acknowledgments}
Te Maeva Nui cost quadruples Monday 2 July 2018 | Published in Culture The $3 million cost of the upcoming Te Maeva Nui celebrations is more than four times the amount set out in the 2017/2018 Cook Islands Government Budget Estimates document released in June last year. According to that document, the cost of this year's Te Maeva Nui was originally estimated at $722,500, but a recent media release from the Office of the Prime Minister puts the current planned amount at $3 million. This amount was only revealed in the context of discussions held between the current caretaker government and the opposition Democratic Party regarding expenditure authority for continued spending until a new government is formed. "The Democratic Party has provided broad support to the government's planning for continued expenditure from 1 July 2018, prior to the formation of a government," the release read. "But (they) expressed concern with the costs of the Te Maeva Nui celebration, believing the event could have been delivered at a significantly lower cost than the planned $3 million." Responding to the release, Democratic Party finance spokesperson James Beer suggested that the higher-than-usual cost was neither necessary nor properly budgeted for. "We fully support Te Maeva Nui, and our policies sufficiently underscore the value of such an event, both for social and economic reasons," said Beer. "But when the overall cost is more than four times the estimate contained in the 2017/2018 appropriation, we really have to ask why. "It is understandable that unprecedented events such as the 50th anniversary of self-government would attract that level of expenditure, or the 75th or the 100th – but the 53rd? No."
Explaining the high cost, finance minister Mark Brown said that "the majority cost is on the charter of the two ferry vessels that will be used to transport our northern groups to Raro and also airfares for other southern-group participants". "I believe the Demos were of the view that they could have chartered cheaper vessels – I'm not sure from where – and also they thought to cut the number of participants from the outer islands. "However, these celebrations have been in the pipeline for a year and islands have already prepared for their contingents. To cut numbers at this late stage was not an option." Beer disputes Brown's assertion that this level of participation in Te Maeva Nui was envisaged over a year ago, however. "If that were so, the details of that would at least be reflected in the Appropriations Act," said Beer. "It is neither there nor is it mentioned in the Pre-election Economic and Fiscal Update, published only a few weeks ago." "When one examines the details supplied to us and then takes into consideration where the majority of the expenditure is focussed and the 'unlimited' free cargo returning to the North after Te Maeva Nui and during an election period, it raises more questions," he said. Asked what the Democratic Party would consider a more acceptable figure for the cost of Te Maeva Nui this year, Beer said it was "hard to say". "Because of the absence of information and a reluctance on the part of government to be clear and forthcoming with all expenditure provisions in the Constitution, it is hard to say, other than that the estimates for 2018/2019 (and 2017/2018) set out $722,500 and ordinarily, we would have remained within those confines. "Anything beyond that would need to be analysed carefully against other pressing priorities."
Happy holidays from the MTV News music staff! As we approach 22 years without Kurt Cobain, it's time to leave him alone already. Was Justin Bieber Channeling Kurt Cobain At His Seattle Concert? Justin Bieber honored Seattle's musical history with a few fashion nods to Nirvana's Kurt Cobain. Kurt Cobain's widow, Courtney Love, shared a pair of heartbreaking, nostalgia-inducing throwback photos on Instagram. A new clip from "Kurt Cobain: Montage Of Heck" has come to light. After photos of her dad playing with Nirvana went viral, Maggie Poukkula spoke to MTV News about her dad. This Snow Shovel Sounds JUST Like Nirvana -- Huh? This snow shovel sounds like Nirvana. That is all. MTV News caught up with photographer Kirk Weddle, who has a new show at Austin, Texas', Modern Rocks Gallery of his amazing outtakes from a promo shoot for Nirvana's "Nevermind" album. Watch Nirvana playing the Reading Festival in 1991 and talk to MTV News for the first time. Frances Bean grew up right in front of our eyes. Check out this throwback Seattle interview with Nirvana. MTV dug up this 1993 clip in which Kurt Cobain talks about his dream device, The Dreamer. MTV dug up this uncut Kurt Cobain interview from a 1994 rockumentary about Nirvana. MTV News spoke with director Brett Morgen about his Kurt Cobain biopic "Montage of Heck." Here's another clip of 'Montage of Heck' that you'll want to watch. Frances Bean Cobain spoke candidly about having a legend for a dad -- and he's not really that different from our own. A previously unheard Kurt Cobain song has hit the Web, thanks to "Kurt Cobain: Montage of Heck." We've got our first look at "Kurt Cobain: Montage of Heck," the forthcoming documentary about the life and death of the Nirvana frontman. A Milwaukee radio station has banned all bands from Seattle until after Sunday's NFL NFC championship game between the Green Bay Packers and Seattle Seahawks.
If you're looking to travel more deeply into the mind of ex-Nirvana frontman Kurt Cobain, you're in luck: an '88 mixtape created by the "Lithium" singer hit the Web recently.
On May 12, 2018, the OCAC Board of Trustees, President and Faculty of Oregon College of Art and Craft presented the graduating class of 2018 at the commencement ceremony at First Unitarian Church. The ceremony recognized the academic achievements of 41 students from OCAC's Master of Fine Arts in Craft: Practice and Innovation, Bachelor of Fine Arts in Craft, and Post-Baccalaureate Certificate in Craft. This year's event featured a commencement address given by Jordan D. Schnitzer, President of Harsch Investment Properties, President of Jordan Schnitzer Family Foundation, and Director of the Harold & Arlene Schnitzer CARE Foundation. Schnitzer's speech emphasized the importance of an arts education and investing in your community. Click HERE to see all of the photos from this year's celebration!
La Misma Gente is a Venezuelan rock band that was founded in 1977 in San Antonio de los Altos and was still active thirty years later. In Colombia there was another band of the same name, which played Latin American dance music and existed for over 25 years. The Venezuelan band La Misma Gente was founded by Pedro Vicente Lizardo and his brother Humberto Enrique Lizardo. They are sons of the poet Pedro Francisco Lizardo. Pedro Vicente Lizardo took on the role of singer, lyricist and guitarist in the group; his brother that of bassist. They were joined by Victor González on drums, Pedro Galindo as composer, singer and saxophonist, Mario Bresanutti on piano and as singer and composer, and Ricardo Ramírez on flute. The Lizardo brothers had previously played with Los Barracudas in the 1960s and with Apocalipsis in the 1970s. The drummer came from Sky White Meditation. Discography 1983: Por Fin 1984: Luz y Fuerza 1986: Tres 1992: A la Calle 1996: La Misma Gente
This monolithic extended bolt release is made to fit high-performance upper receivers. It has no backing plates or screws, giving the operator the ability to reload more quickly, clear jams more efficiently and eliminate unnecessary hand movements. It is constructed to be lightweight and super strong. Made in the USA.
# The DUT-OMRON Image Dataset

## What's New

<2018-01-22 Mon> We have removed four images (img_561715398.jpg, sun_bqfgwnmzhjwuphdi.jpg, sun_bswlalhygmnrhvuw.jpg, sun_buaqzytduodbufcb.jpg) from the dataset due to the lack of corresponding pixel-wise ground truth. If you have already downloaded our dataset, please delete the four images from your image dataset, eye-fixation and bounding box ground-truth folders, or you can re-download the new data from DUT-OMRON Dataset Images, Bounding Box Ground-truth and Eye Fixation Ground-truth. We thank Xianghuan Duan, who reported the bug.

## Introduction

Recent years have witnessed significant improvements in saliency detection methods 1-13, 17-19. The experimental results on the existing datasets 8, 14 have reached a very high level that is hard to surpass for subsequent research, whereas the images in those datasets are much simpler than real natural images. Thus we introduce a new database, DUT-OMRON, with natural images for the research of more applicable and robust methods in both salient object detection and eye-fixation prediction.

The proposed database consists of 5,168 high-quality images manually selected from more than 140,000 images. We resize the images to 400×x or x×400 pixels, where x is less than 400. Images in our database have one or more salient objects and a relatively complex background. We had 25 participants in all for collecting the ground truth, with five participants' labels for each image. All of them have normal or corrected-to-normal vision and are aware of the goal of our experiment. We construct pixel-wise ground truth, bounding box ground truth and eye-fixation ground truth for the proposed database.

Our dataset is the only dataset which has eye fixations, bounding boxes and pixel-wise ground truth at such a large scale. Compared with the ASD and MSRA datasets and some other eye-fixation datasets (i.e., the MIT 15 and NUSEF 16 datasets), images in the proposed dataset are more difficult, thus more challenging, and provide more room for improvement for related research in saliency detection.

### Contact information

This website is maintained by Xiang Ruan (ruanxiang at tiwaki dot com); if you find a bug or have a concern about this website, shoot me an email.

| Name | Email | Affiliation |
|------|-------|-------------|
| Huchuan Lu | lhchuan at dlut dot edu dot cn | Dalian University of Technology |
| Xiang Ruan | ruanxiang at tiwaki dot com | tiwaki Co., Ltd. |

## Dataset

### Pixel-wise data

We provide pixel-wise ground truth for all the images in the dataset, which is more accurate than bounding box ground truth for the evaluation of saliency models. The third row of Figure 1 shows some examples of pixel-wise ground truth.

#### Experimental results based on pixel-wise ground truth

We evaluate 17 methods 1-13, 17-19 on our dataset using the pixel-wise ground truth. Figure 2 shows the P-R curves and F-measure results of the 17 methods.

### Bounding box data

For each image, the participants are asked to draw several rectangles which enclose the most salient objects in the image, while the number of rectangles is decided by their own understanding of the image. We obtain five participants' annotations and at least five rectangles for each image. In order to get a consistent mask among the five participants, first we utilize each person's annotations to get a binary mask. Then the final ground truth is generated based on the average of the five binary masks. The fourth row of Figure 1 shows some examples of gray-level bounding box ground truth.

#### Experimental results based on bounding box ground truth

We evaluate 17 methods 1-13, 17-19 on our dataset to test both the methods and the dataset, using precision-recall (P-R) curves and F-measure as the evaluation criteria. The results are shown in Figure 3 (please also check the saliency maps of the 14 methods and samples of saliency maps). The evaluation results on our dataset are basically consistent with the results on other datasets, which further indicates the reliability and feasibility of the DUT-OMRON dataset.

### Eye-fixation data

We use an eye-tracking apparatus, a Tobii X1 Light eye tracker, to record eye fixations while the participant focuses on the input image shown on the monitor. Each image is displayed for 2 seconds with no interval between successive images. Five viewers' eye-tracking data are recorded for every image in the dataset.

We take the following four steps to exclude outliers from the original data:

- Firstly, the first fixation on an image from each participant is discarded to avoid the influence of center bias.
- Secondly, using the human-labeled bounding box ground truth, the outliers outside the labeled rectangles are removed, which means any fixation within any viewer's rectangle is retained. Most of the outliers are removed in this step.
- Thirdly, the K-means algorithm is used to classify the points into three clusters, since we have more than one object in most of the images.
- Lastly, we retain 90% of the fixations based on Euclidean distance from the cluster center.

After removing outliers, 153 reliable eye fixations on average are retained for each image, and over 95% of the images have more than 50 eye fixations. Figure 4 shows some examples of the eye-fixation ground truth before versus after removing outliers.

#### Analysis of the eye-fixation ground truth

We convolve a Gaussian filter across each fixation location and compute the sum of all the maps to generate a continuous saliency map after normalizing to [0,1], as shown in Figure 5. We use a similar method as 15 to threshold this map to get a binary map. A disadvantage is found in 15 (taking the top n percent of the image as salient): when the salient region is smaller than n% of the image, the corners will be identified as salient, which is meaningless. Therefore we take the gray level as the threshold. Figure 5 shows some samples of the binary maps when the threshold n is set to 0.1.

We can see that this measure only formulates a rough fixation density map based on fixation locations, not accurate enough for quantitative performance evaluation. These density maps afford a representation whose similarity to a saliency map may be judged at a glance by the human eye in qualitative evaluation.

- Center bias. We compute the sum of all the continuous saliency maps of all 5172 images and normalize it within [0,1]. The average saliency map is shown in Figure 6. It demonstrates that our dataset has a bias for human fixations to be near the center of the image.
- Consistency of the five participants. We utilize an ROC metric 15 to evaluate the consistency of the five subjects. The saliency map from one subject's fixation locations is treated as a binary classifier on every pixel in the image, and fixation points from all five subjects are used as ground truth. Figure 7 shows the ROC curves of the five subjects. The results demonstrate that the eye-fixation data have a high consistency among subjects.

#### Experimental results based on eye-fixation ground truth

We introduce an ROC method 15 for the evaluation of eye-fixation prediction models. First, the saliency maps are thresholded at the top n percent of the image to obtain binary masks according to the gray level, where n = 1, 3, 5, 10, 15, 20, 25, 30. Then for each binary mask, we compute the percentage of human fixations within the salient area of the map as the performance (vertical coordinate), and the threshold is set as the horizontal coordinate. By defining the threshold using another method (i.e., taking the gray level as the threshold), we can also evaluate other models.

It is not fair to compare eye-fixation prediction models with saliency detection models. Thus we only evaluate three eye-fixation prediction algorithms, the GB 11, CA 10 and IT98 9 methods, using this evaluation criterion.

Figure 8 shows the evaluation results, where the human model is the Gauss-filtered saliency maps based on the eye fixations of one participant in our experiment. The results indicate a big space for eye-fixation prediction models to improve.

### Please cite our paper if you use our dataset in your research

Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, Ming-Hsuan Yang, "Saliency Detection via Graph-Based Manifold Ranking", CVPR 2013, pp. 3166-3173

or by BibTeX:

    @inproceedings{yang2013saliency,
      title={Saliency detection via graph-based manifold ranking},
      author={Yang, Chuan and Zhang, Lihe and Lu, Huchuan and Ruan, Xiang and Yang, Ming-Hsuan},
      booktitle={Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on},
      pages={3166--3173},
      year={2013},
      organization={IEEE}
    }

You can download the source code of manifold ranking saliency from GitHub.

### For a detailed introduction of the dataset, refer to our FCV2014 paper

Xiang Ruan, Na Tong, Huchuan Lu, "How far we away from a perfect visual saliency detection - DUT-OMRON: a new benchmark dataset", FCV2014

## References

1. H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, "Automatic salient object segmentation based on context and shape prior," in Proceedings of British Machine Vision Conference, 2011.
2. Y. Xie, H. Lu, and M. Yang, "Bayesian saliency via low and mid level cues," IEEE Transactions on Image Processing, vol. 22, no. 5, 2013, pp. 1689-1698.
3. Y. Xie and H. Lu, "Visual saliency detection based on Bayesian model," in Proceedings of IEEE International Conference on Image Processing, 2011, pp. 653-656.
4. M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 409-416.
5. E. Rahtu, J. Kannala, M. Salo, and J. Heikkilä, "Segmenting salient objects from images and videos," in Proceedings of European Conference on Computer Vision, 2010, pp. 366-379.
6. K. Chang, T. Liu, H. Chen, and S. Lai, "Fusing generic objectness and visual saliency for salient object detection," in Proceedings of IEEE International Conference on Computer Vision, 2011, pp. 914-921.
7. X. Shen and Y. Wu, "A unified approach to salient object detection via low rank matrix recovery," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 853-860.
8. R. Achanta, S. S. Hemami, F. J. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1597-1604.
9. L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 1254-1259, 1998.
10. S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 2376-2383.
11. J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in Advances in Neural Information Processing Systems, 2006, pp. 545-552.
12. X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
13. Y. Zhai and M. Shah, "Visual attention detection in video sequences using spatiotemporal cues," in Proceedings of ACM International Conference on Multimedia and Expo, 2006, pp. 815-824.
14. T. Liu, J. Sun, N.-N. Zheng, X. Tang, and H.-Y. Shum, "Learning to detect a salient object," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
15. T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look," in 12th International Conference on Computer Vision, 2009, pp. 2106-2113.
16. S. Ramanathan, H. Katti, N. Sebe, M. Kankanhalli, and T.-S. Chua, "An eye fixation database for saliency detection in images," in Proceedings of European Conference on Computer Vision, 2010, pp. 30-43.
17. B. Jiang, L. Zhang, H. Lu, C. Yang, and M.-H. Yang, "Saliency detection via absorbing Markov chain," in Proceedings of IEEE International Conference on Computer Vision, 2013.
18. X. Li, H. Lu, L. Zhang, X. Ruan, and M.-H. Yang, "Saliency detection via dense and sparse reconstruction," in Proceedings of IEEE International Conference on Computer Vision, 2013.
19. C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, "Saliency detection via graph-based manifold ranking," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2013.

Created: 2019-01-07 Mon 22:48
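The top-n% thresholding step of the fixation-based ROC metric described above can be sketched as follows. This is our schematic reimplementation from the text (the toy map and fixation coordinates are invented for illustration), not the authors' released evaluation code.

```python
import numpy as np

def fixation_hit_rate(saliency, fixations, top_percent):
    """Binarize a saliency map at its top-n% gray levels and return the
    fraction of human fixation points that land inside the salient mask."""
    # Gray-level threshold such that roughly top_percent% of pixels survive.
    thresh = np.percentile(saliency, 100 - top_percent)
    mask = saliency >= thresh
    hits = sum(bool(mask[r, c]) for r, c in fixations)
    return hits / len(fixations)

# Toy example: a 2x2 bright "object" in a 10x10 map, three fixations.
sal = np.zeros((10, 10))
sal[4:6, 4:6] = 1.0                 # the salient object
fix = [(4, 4), (5, 5), (0, 0)]      # two on the object, one in the corner
print(fixation_hit_rate(sal, fix, top_percent=4))  # 2 of 3 fixations inside
```

Sweeping `top_percent` over the values n = 1, 3, 5, 10, 15, 20, 25, 30 listed above and plotting the hit rate against the threshold yields the ROC-style curves used in Figure 8.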
null
null
A photographer with an impressive talent who is interested in not only solo shootings, but also group ones, which are much more difficult to handle. Obviously, due to his easy temper, HARMUT manages to win his models' favor and work with several girls at once. He likes to experiment and make play with different themes in his photo and video sets, which always turn out stylish and memorable.
{ "redpajama_set_name": "RedPajamaC4" }
2,954
We specialize in Panels and automated equipment, Drives, PLCs, HMI, Database, Web, network and custom components. Radio Frequency Identification (RFID) is an emerging technology that allows you to identify and locate objects using radio waves. This technology has two parts: The RFID Tag and the RFID Reader. The tags are small and low cost, and can be applied directly to product packaging, name badges, tools, or other objects. As the tags move through your process environment, RFID Readers read the information stored on the tags, which identifies the item. Readers are typically dispersed through the plant or venue, and communicate over a standard Ethernet network. This technology is being used successfully in many different applications where objects (including people) need to be identified and tracked in a given environment. When RFID technology is combined with a database such as SQL Server, the application possibilities become endless. We can use it to find lost or misplaced items, manage inventory, track students at a training event or conference center, or identify all objects on a given pallet in your warehouse. Using RFID solutions from Siemens and other vendors, iQuest has successfully deployed numerous RFID applications using HMI and database technology. We are currently developing our Event Management solution that uses RFID to track attendance at large conventions and training events. Seriously, I think your support was absolutely essential to the success of our application. iQagent entered in GSMA's 18th Global Mobile Awards. iQagent Awarded Finalist in Control Engineer's 2013 Engineering Excellence Award. © 2013 iquestcorp.com, inc. All rights reserved.
{ "redpajama_set_name": "RedPajamaC4" }
5,070
\section{Introduction} Right-angled Artin groups have yielded considerable applications to geometric group theory (two examples are the connection with special cube complexes \cite{HW} and Bestvina--Brady groups \cite{BB}). Another recent area of progress has been the extension of rigidity properties held by irreducible lattices in semisimple Lie groups to mapping class groups and automorphism groups of free groups \cite{FM,bridson-farb,BW2010}. In this paper we examine to what extent this phenomenon extends to the automorphism group of a right-angled Artin group $A_\Gamma$, where $\Gamma$ is the defining graph. If $\Gamma$ is discrete then $A_\Gamma$ is the free group $F_n$, and if $\Gamma$ is complete then $A_\Gamma$ is the free abelian group $\mathbb{Z}^n$. One therefore expects traits shared by both $\mathbb{Z}^n$ and $F_n$ to be shared by an arbitrary RAAG. Similarly, one optimistically hopes that properties shared by both ${\rm{GL}}_n(\mathbb{Z})$ and ${\rm{Out}}(F_n)$ will also be shared by ${\rm{Out}}(A_\Gamma)$ for an arbitrary right-angled Artin group. For instance, there is a Nielsen-type generating set of ${\rm{Out}}(A_\Gamma)$ given by the work of Laurence \cite{MR1356145} and Servatius \cite{MR1023285}, and ${\rm{Out}}(A_\Gamma)$ has finite virtual cohomological dimension \cite{CV2009}. ${\rm{Out}}(A_\Gamma)$ is residually finite \cite{CV2010, Minasyan2009}, and for a large class of graphs, ${\rm{Out}}(A_\Gamma)$ satisfies the Tits alternative \cite{CV2010}. Bridson and the author recently showed that any homomorphism from an irreducible lattice $\Lambda$ in a higher-rank semisimple Lie group to ${\rm{Out}}(F_n)$ has finite image \cite{BW2010}. A direct translation of this result cannot hold for an arbitrary RAAG, as $\mathbb{Z}^n$ is a RAAG and $\out(\mathbb{Z}^n)={\rm{GL}}_n(\mathbb{Z})$.
However, Margulis' superrigidity implies that if $\Lambda \to {\rm{GL}}_n(\mathbb{Z})$ is a map with infinite image, then the real rank of the Lie group containing $\Lambda$ must be less than or equal to $n-1$, the real rank of $\SL_n(\mathbb{R})$. The aim of this paper is to show that we may effectively bound the rank of an irreducible lattice acting on $A_\Gamma$, and that this bound is determined by obvious copies of ${\rm{SL}}_m(\mathbb{Z})$ in ${\rm{Out}}(A_\Gamma)$. We will describe these copies now. Suppose that $\Gamma'$ is a subgraph of $\Gamma$ which is a clique (i.e. any two vertices of $\Gamma'$ are connected by an edge). The existence of a free abelian subgroup of rank $m$ in $A_\Gamma$ does not imply there exists a copy of ${\rm{SL}}_m(\mathbb{Z})$ in ${\rm{Out}}(A_\Gamma)$; however, if the star $st(v)$ (the vertex $v$ together with its adjacent vertices) is independent of $v \in \Gamma'$, then the natural embedding $\mathbb{Z}^{|V(\Gamma')|} \to A_\Gamma$ induces an injection $\SL_{|V(\Gamma')|}(\mathbb{Z})\to {\rm{Out}}(A_\Gamma)$. Define the $\SL$--dimension of ${\rm{Out}}(A_\Gamma)$, written $d_{SL}({\rm{Out}}(A_\Gamma))$, to be the number of vertices in the largest such graph $\Gamma'$. \begin{reptheorem}{t:lr} Let $G$ be a real semisimple Lie group with finite centre, no compact factors, and ${\rm{rank}}_\mathbb{R} G \geq 2$. Let $\Lambda$ be an irreducible lattice in $G$. If ${\rm{rank}}_\mathbb{R} G \geq d_{SL}({\rm{Out}}(A_\Gamma))$, then every homomorphism $f:\Lambda \to {\rm{Out}}(A_\Gamma)$ has finite image. \end{reptheorem} Note that $d_{SL}({\rm{GL}}_n(\mathbb{Z}))=n$ and $d_{SL}({\rm{Out}}(F_n))=1$. Our previous observation that ${\rm{Out}}(A_\Gamma)$ contains a copy of $SL_m(\mathbb{Z})$ for $m=d_{SL}({\rm{Out}}(A_\Gamma))$ tells us that the bound on ${\rm{rank}}_\mathbb{R} G$ given in Theorem \ref{t:lr} is the best that one can provide.
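To illustrate the definition, consider two extreme cases. If $\Gamma$ is the complete graph on $n$ vertices then we may take $\Gamma'=\Gamma$: every vertex has star equal to the whole of $\Gamma$, so $d_{SL}({\rm{Out}}(A_\Gamma))=n$, recovering the copy of ${\rm{SL}}_n(\mathbb{Z})$ inside $\out(\mathbb{Z}^n)={\rm{GL}}_n(\mathbb{Z})$. On the other hand, if $\Gamma$ is the path with vertices $v_1,v_2,v_3$ and edges $\{v_1,v_2\}$ and $\{v_2,v_3\}$, then the only cliques with more than one vertex are the two edges, and the stars $st(v_1)=\{v_1,v_2\}$ and $st(v_2)=\{v_1,v_2,v_3\}$ differ (similarly for the other edge), so $d_{SL}({\rm{Out}}(A_\Gamma))=1$ even though $A_\Gamma$ contains copies of $\mathbb{Z}^2$.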
The above theorem is deduced from the previously mentioned results for ${\rm{Out}}(F_n)$ and ${\rm{SL}}_n(\mathbb{Z})$, combined with the following general algebraic criterion: \begin{reptheorem}{t:main} Suppose that $d_{SL} ({\rm{Out}}(A_\Gamma)) = m$, and let $F(\Gamma)$ be the size of a maximal, discrete, full subgraph of $\Gamma$. Let $\Lambda$ be a group. Suppose that for each finite index subgroup $\Lambda'\leq\Lambda$, we have: \begin{itemize} \item Every homomorphism $\Lambda' \to {\rm{SL}}_m(\mathbb{Z})$ has finite image, \item For all $N \leq F(\Gamma)$, every homomorphism $\Lambda' \to \out(F_N)$ has finite image. \end{itemize} Then every homomorphism $f:\Lambda \to {\rm{Out}}(A_\Gamma)$ has finite image. \end{reptheorem} The constant $F(\Gamma)$ can be viewed as the maximal rank of a free subgroup of $A_\Gamma$ that arises from a subgraph of $\Gamma$. In the free group case, the rigidity result for lattices was deduced from a more general result showing that $\mathbb{Z}$--averse groups, that is, groups for which no normal subgroup maps onto $\mathbb{Z}$, have no interesting maps to ${\rm{Out}}(F_n)$. To do this, one requires some deep geometric results on the behaviour of \emph{fully irreducible} elements of ${\rm{Out}}(F_n)$ \cite{HM,DGO,BF}, and algebraic results about the structure of the Torelli subgroup of ${\rm{Out}}(F_n)$ (the subgroup consisting of automorphisms that act trivially on $H_1(F_n)$). The results for ${\rm{Out}}(A_\Gamma)$ in \cite{CV2009,CV2010} come from looking at \emph{projection homomorphisms} which, when $\Gamma$ is connected, allow us to understand ${\rm{Out}}(A_\Gamma)$ in terms of automorphisms of smaller RAAGs. This allows for inductive arguments. Our approach is to combine the projection homomorphisms alluded to above with algebraic results concerning the structure of the Torelli subgroup of ${\rm{Out}}(A_\Gamma)$. After the background material in the following section, the paper is organised as follows.
Section~\ref{s:lie theory} uses Lie methods pioneered by Magnus \cite{MR0207802} to study the lower central series of $A_\Gamma$; in particular we study the consecutive quotients $L_c=\gamma_c(A_\Gamma)/\gamma_{c+1}(A_\Gamma)$, and the Lie $\mathbb{Z}$--algebra $L$ formed by taking the direct sum $L=\oplus_{c=1}^{\infty}L_c$, where the Lie bracket is induced by taking commutators in $A_\Gamma$. Let $Z(\_)$ denote the centre of a group or Lie algebra, $(L)_p$ the tensor product of $L$ with $\mathbb{Z}/p\mathbb{Z}$, and $L^c=L/\oplus_{i>c}L_i$. The main technical result concerning $L$ is: \begin{reptheorem}{c:centres} If $Z(A_\Gamma)=\{1\}$ then $Z(L)=Z((L)_p)=0$ and $Z(L^c)$ is the image of $L_c$ under the quotient map $L\to L^c$. \end{reptheorem} We obtain this by looking at the enveloping algebra of $L$, which we call $U(A_\Gamma)$. This has a particularly nice description as a free $\mathbb{Z}$--module with a basis consisting of positive elements of $A_\Gamma$. This allows us to study $Z(L)$ by looking at centralisers of elements in $A_\Gamma$, which are well understood. In Section \ref{s:johnson} we describe a central filtration $\mathcal{T}(A_\Gamma)=G_1 \trianglerighteq G_2 \trianglerighteq G_3 \ldots$ of $\mathcal{T}(A_\Gamma)$ analogous to the Andreadakis--Johnson filtration of $\mathcal{T}(F_n)$. We have Johnson homomorphisms $$\tau_c\colon\thinspace G_c \to {\rm{Hom}}(L_1,L_{c+1})$$ for which $\ker\tau_c=G_{c+1}$. Day \cite{Day09} has shown that $\mathcal{T}(A_\Gamma)$ is finitely generated by a set $\mathcal{M}_{\Gamma}$, which we describe in Section~\ref{s:torelli intro}. We show that $\tau_1$ maps the elements of $\mathcal{M}_\Gamma$ to linearly independent elements in ${\rm{Hom}}(L_1,L_2)$, therefore $H_1(\mathcal{T}(A_\Gamma))$ is a free abelian group with a basis given by the image of $\mathcal{M}_\Gamma$. This answers Question 5.4 of \cite{Day09}. In particular, $\mathcal{M}_\Gamma$ is a minimal generating set of $\mathcal{T}(A_\Gamma)$. 
The above filtration is separating, so that $\cap_{c=1}^{\infty}G_c=\{1\}$, and the groups $L_c$ are free abelian, so each consecutive quotient $G_c/G_{c+1}$ is free abelian. In the final part of Section \ref{s:johnson} we combine Theorem \ref{c:centres} with results of Minasyan \cite{Minasyan2009} and Toinet \cite{Toinet10} to show that the image $H_1,H_2,H_3,\ldots$ of the series $G_1,G_2,G_3, \ldots$ in ${\rm{Out}}(A_\Gamma)$ satisfies the same properties. (This roughly follows methods designed by Bass and Lubotzky \cite{BL} for studying central series.) In particular, we obtain the following result: \begin{reptheorem}{t:tfn} For any graph $\Gamma$, the group $\overline{\mathcal{T}}(A_\Gamma)$ is residually torsion-free nilpotent. \end{reptheorem} This was also discovered independently by Toinet \cite{Toinet10}. This is a key part in the proof of Theorem~\ref{t:main}; in particular we use it to deal with the inductive step when the underlying graph $\Gamma$ is disconnected. One problem that arises is that projection homomorphisms are not in general surjective; consequently they may increase $\SL$--dimension. This is confronted in Section \ref{s:sldim}, where we study the image of ${\rm{Out}}(A_\Gamma)$ under a projection map. As such subgroups appear naturally when working with projections, we feel that the methods here may be of independent interest. In particular, we extend the definition of $\SL$--dimension to arbitrary subgroups of ${\rm{Out}}(A_\Gamma)$, and show that if we only study the image of ${\rm{Out}}(A_\Gamma)$ under a projection map, the $\SL$--dimension does not increase. This prepares us to complete the proof of Theorem \ref{t:main}. We describe applications in Section \ref{s:consequences}. The author would like to thank his supervisor Martin Bridson for his encouragement and advice, and Andrew Sale for a series of enthusiastic and helpful conversations.
\section{Background} \label{s:background1} Most of this section is standard, however there are some ideas here that are new. In Section \ref{s:generators} we extend the usual generating set of ${\rm{Out}}(A_\Gamma)$ to include \emph{extended partial conjugations}, which are products of partial conjugations that conjugate by the same element of $A_\Gamma$. We also order the vertices of $\Gamma$ in a useful way in Sections \ref{s:relation} and \ref{s:gordering}, which will be used later in the paper to give a block decomposition of the action of subgroups of ${\rm{Out}}(A_\Gamma)$ on $H_1(A_\Gamma)$. We should first define $A_\Gamma$: Let $\Gamma$ be a graph with vertex and edge sets $V(\Gamma)$ and $E(\Gamma)$ respectively. If $\iota, \tau:E(\Gamma) \to V(\Gamma)$ are the maps that take an edge to its initial and terminal vertices, then $A_\Gamma$ has the presentation:$$ A_\Gamma= \langle v:v \in V(\Gamma)|[\iota(e),\tau(e)]:e \in E(\Gamma) \rangle. $$ Throughout we assume that $\Gamma$ has $n$ vertices labelled $\{v_1,\ldots,v_n\}$. \subsection{A generating set of ${\rm{Aut}}(A_\Gamma)$} \label{s:generators} Laurence \cite{MR1356145} proved a conjecture of Servatius \cite{MR1023285} that ${\rm{Aut}}(A_\Gamma)$ has a finite generating set consisting of the following automorphisms: \begin{description} \item[Graph symmetries] If a permutation of the vertices comes from a self-isomorphism of the graph, then this permutation induces an automorphism of $A_\Gamma$. These automorphisms form a finite subgroup of ${\rm{Aut}}(A_\Gamma)$ called ${\rm{Sym}}(A_\Gamma)$. \item[Inversions] These are automorphisms that come from inverting one of the generators of $A_\Gamma$, so that: $$s_i(v_k)=\begin{cases} v_i^{-1} & i=k \\ v_k & i \neq k. \end{cases}$$ \item[Partial conjugations] Suppose $[v_i,v_j] \neq 1$.
Let $\Gamma_{ij}$ be the connected component of $\Gamma - st(v_j)$ containing $v_i.$ Then the partial conjugation $K_{ij}$ conjugates every vertex of $\Gamma_{ij}$ by $v_j$, and fixes the remaining vertices, so that: $$K_{ij}(v_k)=\begin{cases} v_jv_kv_j^{-1} & v_k \in \Gamma_{ij} \\ v_k & v_k \not \in \Gamma_{ij}. \end{cases} $$ Note that if $lk(v_i) \subset st(v_j)$ then $\Gamma_{ij}=\{v_i\}$, so in this case $K_{ij}$ fixes every basis element except $v_i$. \item[Transvections] If $lk(v_i) \subset st(v_j)$, then there is an automorphism $\rho_{ij}$ which acts on the generators of $A_\Gamma$ as follows: $$\rho_{ij}(v_k)=\begin{cases} v_iv_j & i=k \\ v_k & i \neq k. \end{cases} $$ \end{description} There are two important finite index normal subgroups of ${\rm{Aut}}(A_\Gamma)$ that we obtain from this classification. The first is the subgroup generated by inversions, partial conjugations, and transvections and is denoted ${\rm{Aut^0}}(A_\Gamma)$. The second is the smaller subgroup generated by only partial conjugations and transvections. Denote this group ${\rm{SAut^0}}(A_\Gamma)$. In some cases we will need to look at groups generated by (outer) automorphisms that conjugate more than one component of $\Gamma - st(v_j)$ by $v_j$. Let $T$ be a set of vertices of $\Gamma - st(v_j)$ such that no two vertices of $T$ lie in the same connected component of $\Gamma - st(v_j)$. We define an \emph{extended partial conjugation} to be an automorphism of the form $\prod_{t \in T}K_{tj}$. We will abuse notation by describing the images of the above elements in ${\rm{Out}}(A_\Gamma)$ by the same names, so that the groups ${\rm{Out}}^0(A_\Gamma)$ and ${\rm{SOut}}^0(A_\Gamma)$ are defined in the same manner. If $\phi \in {\rm{Aut}}(A_\Gamma)$, we use $[\phi]$ to denote the equivalence class of $\phi$ in ${\rm{Out}}(A_\Gamma)$. Let $\mathcal{A}_\Gamma$ be the enlarged generating set of ${\rm{Out}}(A_\Gamma)$ given by graph symmetries, inversions, extended partial conjugations, and transvections.
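For example, suppose that $\Gamma$ is the path with vertices $v_1,\ldots,v_5$ and edges $\{v_i,v_{i+1}\}$ for $1 \leq i \leq 4$. Then $\Gamma - st(v_3)$ has the two connected components $\{v_1\}$ and $\{v_5\}$, and taking $T=\{v_1,v_5\}$ gives the extended partial conjugation $K_{13}K_{53}$, which sends $v_1 \mapsto v_3v_1v_3^{-1}$ and $v_5 \mapsto v_3v_5v_3^{-1}$ and fixes $v_2$, $v_3$ and $v_4$. As $v_2$ and $v_4$ commute with $v_3$, this automorphism is precisely conjugation by $v_3$, so it represents the trivial element of ${\rm{Out}}(A_\Gamma)$, even though $[K_{13}]$ and $[K_{53}]$ are both nontrivial in ${\rm{Out}}(A_\Gamma)$.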
We shall be studying subgroups of ${\rm{Out}}(A_\Gamma)$ generated by subsets of $\mathcal{A}_{\Gamma}$, however some of these groups are not generated by subsets of the standard generating set. This is because under the restriction, exclusion and projection maps defined in Section \ref{s:rep}, partial conjugations are not always mapped to partial conjugations, but are always mapped to extended partial conjugations. Throughout this paper, we will assume ${\rm{Aut}}(A_\Gamma)$ and ${\rm{Out}}(A_\Gamma)$ act on $A_\Gamma$ on the left. \subsection{Ordering the vertices of $\Gamma$} \label{s:relation} Given a vertex $v$, the \emph{link of} $v$ is the set of vertices of $\Gamma$ adjacent to $v$. The \emph{star of} $v$ is the union of $\{v\}$ and the link of $v$. We write $lk(v)$ for the link of $v$, and $st(v)$ for the star of $v$. Extending the definition of the link and star of a vertex, given any full subgraph $\Gamma'$ of $\Gamma$, the subgraph $lk(\Gamma')$ is defined to be the intersection of the links of the vertices of $\Gamma'$, and we define $st(\Gamma')=\Gamma' \cup lk(\Gamma').$ Given any full subgraph $\Gamma'$, the right-angled Artin group $A_{\Gamma'}$ injects into $A_\Gamma,$ so can be viewed as a subgroup. We may define a binary relation on the vertices by letting $u \leq v$ if $lk(u) \subset st(v).$ This was introduced by Charney and Vogtmann in \cite{CV2009}, who showed that the relation is transitive; as it is clearly reflexive, it defines a preorder on the vertices. This induces an equivalence relation by letting $u \sim v$ if $u \leq v$ and $v \leq u$. Let $[v]$ denote the equivalence class of the vertex $v.$ We will abuse notation by also using $[v]$ to denote the full subgraph of $\Gamma$ spanned by the vertices in this equivalence class. The preorder descends to a partial order of the equivalence classes. We say that $[v]$ is \emph{maximal} if it is maximal with respect to this ordering.
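For example, if $\Gamma$ is the path with vertices $v_1,v_2,v_3$ and edges $\{v_1,v_2\}$, $\{v_2,v_3\}$, then $lk(v_1)=\{v_2\}\subset st(v_2)$, so $v_1 \leq v_2$, whereas $lk(v_2)=\{v_1,v_3\}\not\subset st(v_1)$, so $v_2 \not\leq v_1$. Moreover $lk(v_1)=\{v_2\}\subset st(v_3)$ and $lk(v_3)=\{v_2\}\subset st(v_1)$, so $v_1 \sim v_3$. The equivalence classes are therefore $[v_1]=\{v_1,v_3\}$, with $A_{[v_1]}$ free of rank two, and $[v_2]=\{v_2\}$, and $[v_2]$ is the unique maximal class.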
The vertices of $[v]$ are all either at distance one from each other in $\Gamma$, or pairwise non-adjacent, therefore $A_{[v]}$ is either a free abelian, or a free non-abelian subgroup of $A_\Gamma.$ We say that the equivalence class $[v]$ is \emph{abelian} or \emph{non-abelian} respectively. Suppose that there are $r$ equivalence classes of vertices in $\Gamma$. We may choose an enumeration of the vertices so that there exists $1=m_1<m_2<\ldots<m_r \leq n$ such that the equivalence classes are the sets $\{v_1=v_{m_1},\ldots,v_{m_2-1}\},\ldots,\{v_{m_r},\ldots,v_n\}.$ With further rearrangement we may assume that $v_{m_i} \leq v_{m_j}$ only if $i \leq j$. We formally define $m_{r+1}=n+1$ so that for all $i$, the equivalence class of $[v_{m_i}]$ contains $m_{i+1}-m_i$ vertices. \subsection{G--ordering vertices}\label{s:gordering} Given a subgroup $G \leq {\rm{Out}}(A_\Gamma)$, we may define the relation $\leq_G$ on the vertices of $\Gamma$, by letting $v_i \leq_G v_j$ if either $i=j$ or the element $[\rho_{ij}]$ lies in $G$. Note that this is a subset of the previous relation $\leq$ defined on the vertices. Also, $\leq_G$ is reflexive by definition and transitive as $\rho_{il}=\rho_{jl}^{-1}\rho_{ij}^{-1}\rho_{jl}\rho_{ij}$, so $\leq_G$ is a preorder, induces an equivalence relation $\sim_G$ on the vertices, and induces a partial ordering of the equivalence classes of $\sim_G$. Let $[v_i]_G$ be the equivalence class of the vertex $v_i$. Each equivalence class $[v_i]_G$ is a subset of the equivalence class $[v_i]$. In particular the subgroup $A_{[v_i]_G}$ is either free abelian or free and non-abelian, so $[v_i]_G$ may also be described as \emph{abelian} or \emph{non-abelian}. Suppose that there are $r' \geq r$ equivalence classes of vertices in $\sim_G$.
We may further refine the enumeration of the vertices given previously so that there exists $1=l_1 < l_2 < \ldots < l_{r'} \leq n$ such that the equivalence classes of $\sim_G$ are the sets $\{v_1=v_{l_1},\ldots,v_{l_2-1}\},\ldots,\{v_{l_{r'}},\ldots,v_n\},$ and $v_{l_i} \leq_G v_{l_j}$ only if $i \leq j$. Define $l_{r'+1}=n+1$ so that for all $i$, the equivalence class of $[v_{l_i}]_G$ contains $l_{i+1}-l_{i}$ vertices. \subsection{The Torelli subgroups of ${\rm{Aut}}(A_\Gamma)$ and ${\rm{Out}}(A_\Gamma)$}\label{s:torelli intro} The abelianisation of $A_{\Gamma}$, denoted $A_\Gamma^{ab}=A_\Gamma/[A_\Gamma,A_\Gamma]$ or $H_1(A_\Gamma)$, is a free abelian group generated by the images of the vertices under the abelianisation map $A_\Gamma \to A_\Gamma/[A_\Gamma,A_\Gamma].$ This induces a natural map $$\Phi:{\rm{Aut}}(A_\Gamma) \to \aut(A_\Gamma^{ab}).$$ Once and for all we fix the basis of $A_\Gamma^{ab}$ to be the image of $V(\Gamma)$ under the abelianisation map. This allows us to identify $\aut(A_\Gamma^{ab})$ with ${\rm{GL}}_n(\mathbb{Z})$. This is the viewpoint which we will take for the rest of the paper. We say that $\ker\Phi=\mathcal{T}(A_\Gamma)$ is the \emph{Torelli subgroup of} ${\rm{Aut}}(A_\Gamma)$. Every partial conjugation lies in $\mathcal{T}(A_\Gamma)$, and if $i,j,k$ are distinct, $v_i \leq v_j,v_k$, and $[v_j,v_k]\neq1,$ then the automorphism $$K_{ijk}(v_l)=\begin{cases} v_i[v_j,v_k] & l=i \\ v_l & l \neq i, \end{cases} $$ lies in $\mathcal{T}(A_\Gamma)$. The conditions on $i,j,k$ ensure that $K_{ijk}$ is well-defined and nontrivial. In \cite{Day09}, Day proves the following theorem:\begin{theorem}The set of partial conjugations and automorphisms of the form $K_{ijk}$ (where $i,j,k$ are distinct, $v_i \leq v_j,v_k$, and $[v_j,v_k]\neq1$) generates $\mathcal{T}(A_\Gamma)$. \end{theorem} As $K_{ijk}=K_{ikj}^{-1}$, we may restrict the generating set to contain $K_{ijk}$ only when $j<k$.
We call this generating set $\mathcal{M}_\Gamma.$ As $\Phi$ sends $\text{Inn}(A_\Gamma)$ to the identity, we can factor out $\text{Inn}(A_\Gamma)$ to obtain a map $$\overline{\Phi}:{\rm{Out}}(A_\Gamma) \to {\rm{GL}}_n(\mathbb{Z}).$$ We define $\overline{\mathcal{T}}(A_\Gamma)=\ker\overline{\Phi},$ and call this group the \emph{Torelli subgroup of} ${\rm{Out}}(A_\Gamma)$. \subsection{Words in $A_\Gamma$.}\label{s:words} At times we will need to discuss words and word length in $A_\Gamma$. The support, $supp(w)$, of a word $w$ on $V(\Gamma)\cup V(\Gamma)^{-1}$ is the full subgraph of $\Gamma$ spanned by the generators (or their inverses) occurring in $w$. We say that a word $w$ representing $g \in A_\Gamma$ is \emph{reduced} if there is no subword of the form $v^{\pm1} w'v^{\mp1}$ with $supp(w')\subset st(v)$. We may pass between two reduced words by repeatedly switching consecutive letters that commute (see \cite{MR2322545}) --- it follows that we may define $supp(g)$ to be the support of any reduced word representing $g$. Similarly the \emph{length} of $g$ is the length of any reduced word representing $g$. We say that $g$ is \emph{positive} if it has a representative that is a positive word in $V(\Gamma)$. \subsection{Restriction, exclusion, and projection homomorphisms.} \label{s:rep} Suppose that $\Gamma'$ is a full subgraph of $\Gamma$, so that $A_{\Gamma'}$ can be viewed as a subgroup of $A_\Gamma$ in the natural way. Suppose that the conjugacy class of $A_{\Gamma'}$ is preserved by a subgroup $G < {\rm{Out}}(A_\Gamma)$. Then there is a natural \emph{restriction map} $R_{\Gamma'}\colon\thinspace G \to \out(A_{\Gamma'})$ obtained by taking a representative of an element $[\phi] \in G$ that preserves $A_{\Gamma'}$. This is well defined as the normaliser of $A_{\Gamma'}$ is of the form $C_{A_\Gamma}(A_{\Gamma'}).A_{\Gamma'}$. Hence if $gA_{\Gamma'}g^{-1}=A_{\Gamma'}$, the action of $g$ on $A_{\Gamma'}$ can be realised by an inner automorphism of $A_{\Gamma'}$.
Similarly, if the normal subgroup of $A_\Gamma$ generated by $A_{\Gamma'}$ is preserved by some $G< {\rm{Out}}(A_\Gamma)$, then there is a natural \emph{exclusion map} $E_{\Gamma'}\colon\thinspace G \to \out(A_\Gamma/\langle \langle A_{\Gamma'}\rangle \rangle)\cong\out(A_{\Gamma-\Gamma'})$. There are two key examples: \begin{example}\label{e:maps} If $\Gamma$ is connected and $v$ is a maximal vertex then the conjugacy classes of $A_{[v]}$ and $A_{st[v]}$ are preserved by ${\rm{Out}}^0(A_\Gamma)$ (\cite{CV2009}, Proposition 3.2). Therefore there is a restriction map $$R_v:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{st[v]}),$$ an exclusion map $$E_v:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{\Gamma - [v]}),$$ and a projection map $$P_v:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{lk[v]})$$ obtained by combining the restriction and exclusion maps. We can take the direct sum of these projection maps over all maximal equivalence classes $[v]$ to obtain the \emph{amalgamated projection homomorphism}: $$P:{\rm{Out}}^0(A_\Gamma) \to \bigoplus_{\text{$[v]$ maximal}}\out^0(A_{lk[v]})$$ \end{example} \begin{example}\label{e:disconnected} If $\Gamma$ is not connected, then there exists a finite set $\{\Gamma_i\}_{i=1}^k$ of connected graphs containing at least two vertices and an integer $N$ such that $A_\Gamma \cong F_N \ast_{i=1}^k A_{\Gamma_i} $. By looking at the action of the generating set of ${\rm{Out}}^0(A_\Gamma)$, we find that the conjugacy class of $A_{\Gamma_i}$ is fixed by ${\rm{Out}}^0(A_\Gamma)$, therefore for each $i$ we obtain a restriction map $$R_i:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{\Gamma_i}),$$ and as the normal subgroup generated by $\ast_{i=1}^k A_{\Gamma_i}$ is preserved by ${\rm{Out}}(A_\Gamma)$ there is an exclusion map $$E: {\rm{Out}}(A_\Gamma) \to \out(F_N).$$ \end{example} Charney and Vogtmann have shown that when $\Gamma$ is connected, the maps in Example \ref{e:maps} describe ${\rm{Out}}(A_\Gamma)$ almost completely.
There are two cases: when the centre of $A_\Gamma$, which we write as $Z(A_\Gamma)$, is trivial, and when $Z(A_\Gamma)$ is nontrivial. In the first case, they show the following: \begin{theorem}[\cite{CV2009}, Theorem 4.2] \label{t:projections} If $\Gamma$ is connected and $Z(A_\Gamma)$ is trivial, then $\ker P$ is a finitely generated free abelian group. \end{theorem} In \cite{CV2010}, Theorem \ref{t:projections} is extended by giving an explicit generating set of $\ker P,$ however we will not need this description in the work that follows. When $Z(A_\Gamma)$ is nontrivial there is a unique maximal abelian equivalence class $[v]$ consisting of the vertices in $Z(A_\Gamma)$, and we are in the following situation: \begin{proposition}[\cite{CV2009}, Proposition 4.4]\label{p:projections2} If $Z(A_\Gamma)=A_{[v]}$ is nontrivial, then $${\rm{Out}}(A_\Gamma) \cong \text{Tr} \rtimes (\text{GL}(A_{[v]}) \times \out(A_{lk[v]})),$$ where $\text{Tr}$ is the free abelian group generated by the transvections $[\rho_{ij}]$ such that $v_i \in lk[v]$ and $v_j \in [v]$. The map to $\text{GL}(A_{[v]})$ is given by the restriction map $R_v$, and the map to $\out(A_{lk[v]})$ is given by the projection map $P_v$. The subgroup $\text{Tr}$ is the kernel of the product map $R_v \times P_v.$ \end{proposition} In the above proposition we do not need to restrict $R_v$ and $P_v$ to ${\rm{Out}}^0(A_\Gamma)$, as every automorphism of $A_\Gamma$ preserves $Z(A_\Gamma)=A_{[v]}$. When $\Gamma$ is disconnected, the restriction and exclusion maps of Example \ref{e:disconnected} give us less information. 
As above, we may amalgamate the restriction maps $R_i$ and the exclusion map $E$, however in this situation the kernel of the amalgamated map is not abelian --- it is the semidirect product of a subgroup of $\overline{\mathcal{T}}(A_\Gamma)$ and the abelian group generated by transvections $[\rho_{ij}]$, where $v_i$ is an isolated vertex of $\Gamma$, and $v_j$ is contained in a nontrivial connected component of $\Gamma$. \section{The lower central series of $A_\Gamma$} \label{s:lie theory} In this section we shall gather some results on the lower central series of $A_\Gamma$ and its associated Lie algebra that we require in the rest of the paper. In Proposition~\ref{l2basis} we give a basis for the free abelian group $\gamma_2(A_\Gamma)/\gamma_3(A_\Gamma)$ and in Theorem~\ref{c:centres} we give information about the structure of the Lie algebra $L=\oplus_{i=1}^\infty\gamma_i(A_\Gamma)/\gamma_{i+1}(A_\Gamma).$ From Magnus' work, most of the results are well known in the free group case (see, for example, Chapter 5 of \cite{MR0207802} or Chapter 2 of \cite{MR979493}), however a little care is required generalising the results to all right-angled Artin groups. \subsection{The Lie algebra $L$ and its enveloping algebra $U(A_\Gamma)$} \label{s:background2} Many facts about the lower central series of the free group were generalised to right-angled Artin groups by Duchamp and Krob \cite{MR1176154,MR1179860}; below we summarise some key points from their work. In the above papers right-angled Artin groups are described as \emph{free partially commutative groups}. 
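Two extreme cases help to fix ideas. If $\Gamma$ is complete then $A_\Gamma \cong \mathbb{Z}^n$, so $\gamma_2(A_\Gamma)=\{1\}$ and $L=L_1\cong\mathbb{Z}^n$ with trivial bracket. If $\Gamma$ is discrete then $A_\Gamma \cong F_n$ and $L$ is the free Lie ring on $n$ generators, whose homogeneous pieces $L_c$ have ranks given by Witt's formula. For a general graph $\Gamma$, the Lie algebra $L$ interpolates between these two cases.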
Note that the algebra $U(A_\Gamma)$ defined below coincides with the \emph{free partially commutative $\mathbb{Z}$--algebra} used in Duchamp and Krob's work, as we can get between two positive words in $V(\Gamma)$ representing the same element of $A_\Gamma$ with a sequence of positive words where consecutive elements differ only by permuting a pair of commuting letters (this says that the \emph{free partially commutative monoid} embeds in $A_\Gamma$ in a natural way.) Let $\gamma_c(A_\Gamma)$ be the $c$th term in the lower central series of $A_\Gamma$, so that $\gamma_1(A_\Gamma)=A_\Gamma$ and $\gamma_{c+1}(A_\Gamma)=[\gamma_c(A_\Gamma),A_\Gamma]$. Let $L_c=\gamma_c(A_\Gamma)/\gamma_{c+1}(A_\Gamma)$. For all $\Gamma$ and $c$, the group $L_c$ is free abelian. Let $L= \oplus_{i=1}^\infty L_i$. As $[\gamma_c(A_\Gamma),\gamma_d(A_\Gamma)]\subset\gamma_{c+d}(A_\Gamma)$, the $\mathbb{Z}$--module $L$ inherits a graded $\mathbb{Z}$--Lie algebra structure by taking commutators in $A_\Gamma$, and is generated by the images of $v_1,\ldots,v_n$ in $L_1$. Furthermore $L$ is a free $\mathbb{Z}$--module, and admits a basis consisting of elements of the form \begin{equation}\label{mono}\beta(v_{i_1},v_{i_2},\cdots,v_{i_k},v_{i_{k+1}}),\end{equation} where $\beta$ is a bracket of degree $k$. (For example, if $k=2$ then we have a basis consisting of elements either of the form $[x,[y,z]]$ or of the form $[[x,y],z]$.) \begin{definition} Let $U(A_\Gamma)$ be the free $\mathbb{Z}$--module with a basis given by positive elements of $A_\Gamma.$ Let $U_i(A_\Gamma)$ be the submodule of $U(A_\Gamma)$ spanned by positive elements of $A_\Gamma$ of length $i$. Then $U(A_\Gamma)=\oplus_{i=0}^{\infty}U_i(A_\Gamma)$, and multiplication in $A_\Gamma$ gives $U(A_\Gamma)$ the structure of a graded $\mathbb{Z}$--algebra. \end{definition} We will distinguish elements of $U(A_\G)$ from $A_\Gamma$ by writing positive words in $\{\mathbf{v_1,\ldots,v_n}\}$ rather than $\{v_1,\ldots,v_n\}$. 
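For example, if $\Gamma$ is discrete then $U(A_\Gamma)$ is the free associative $\mathbb{Z}$--algebra $\mathbb{Z}\langle \mathbf{v_1},\ldots,\mathbf{v_n}\rangle$, while if $\Gamma$ is complete then $U(A_\Gamma)$ is the polynomial ring $\mathbb{Z}[\mathbf{v_1},\ldots,\mathbf{v_n}]$. In general, $U(A_\Gamma)$ is the quotient of the free associative $\mathbb{Z}$--algebra on $\mathbf{v_1},\ldots,\mathbf{v_n}$ by the two-sided ideal generated by the elements $\mathbf{v_iv_j}-\mathbf{v_jv_i}$ for which $v_i$ and $v_j$ are adjacent in $\Gamma$.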
Let $\mathcal{L}(U(A_\G))$ be $U(A_\G)$ endowed with the Lie bracket defined by $[u_1,u_2]=u_1.u_2-u_2.u_1.$ Then $U(A_\Gamma)$ and $L$ are related by the following theorem: \begin{theorem}[\cite{MR1176154} Corollary I.2. and \cite{MR1179860} Theorem 2.1] The map $L_1 \to U_1(A_\Gamma)$ given by $v_i\gamma_2(A_\Gamma) \mapsto \mathbf{v}_i$ induces an injective Lie algebra homomorphism $$\epsilon:L \to \mathcal{L}(U(A_\G)).$$ Furthermore, this map is graded, so that $L_i \to U_i(A_\Gamma)$, and has the universal property exhibiting the fact that $U(A_\G)$ is the \emph{enveloping algebra} of $L$. \end{theorem} Let $U^{\infty}(A_\G)$ be the algebra extending $U(A_\G)$ by allowing infinitely many coefficients of a sequence of positive words to be non-zero. Any element of $U^{\infty}(A_\G)$ can be written uniquely as a power series $a=\sum_{i=0}^{\infty}a_i,$ where $a_i$ is an element of $U_i(A_\Gamma).$ We say that $a_i$ is the \emph{homogeneous part} of $a$ of degree $i$. Each $a_i$ is a linear sum of positive elements of length $i$, so is of the form $a_i=\sum_{k=1}^m\lambda_k g_k$, where $g_k$ is a positive element in $A_\Gamma$ of length $i$ and $\lambda_k \in \mathbb{Z} \setminus \{0\}.$ Define $supp(a_i)=\cup_{k=1}^m supp(g_k)$. Let $U^*(A_\G)$ be the group of units in $U^{\infty}(A_\G)$. If $a$ is of the form $a=1 + \sum_{i=1}^{\infty}a_i,$ then $a \in U^*(A_\G)$ and $$a^{-1}=1-(a_1+a_2+\cdots)+(a_1+a_2+\cdots)^2-\ldots=1 + \sum_{i=1}^{\infty}c_i.$$ Here, with the convention $c_0=1$, we have $c_1=-a_1$ and, recursively, $c_j=-\sum_{i=0}^{j-1}c_ia_{j-i}$. We have abused notation slightly in the above by writing $1$ as the leading coefficient rather than $1.1_{A_\Gamma}.$ We may use $U^{\infty}(A_\G)$ to study $A_\Gamma$ via the following proposition: \begin{proposition}The mapping $v_i \mapsto 1 + \mathbf{v_i}$ induces a homomorphism $\mu:A_\Gamma \to U^*(A_\G).$ \end{proposition} \begin{proof} The mapping $v_i \mapsto 1 +\mathbf{v_i}$ induces a homomorphism $\overline{\mu}:F(V) \to U^*(A_\G)$.
The relations in the standard presentation of $A_\Gamma$ are sent to the identity in $U^*(A_\G)$, so this descends to a homomorphism $\mu:A_\Gamma \to U^*(A_\G).$\end{proof} $\mu$ is sometimes called the \emph{Magnus map} or \emph{Magnus morphism}. It is important because of the following result of Droms, Duchamp and Krob \cite[Theorem 1.2]{MR1179860}. Its proof follows the method used by Magnus in the case of the free group, which is covered on page 310 of \cite{MR0207802}. \begin{proposition} The homomorphism $\mu:A_\Gamma \to U^*(A_\G)$ is injective. \end{proposition} There is a central series $M_1 \geq M_2 \geq M_3 \ldots$ of $U^*(A_\G)$ defined by letting $a \in M_c$ if and only if $a_i=0$ when $0 < i < c.$ It is related to the lower central series of $A_\Gamma$ by the following theorem: \begin{theorem}[\cite{MR1179860}, Theorem 2.2] \label{mu is nice} For all $c$ we have $\mu^{-1}(M_c)=\gamma_c(A_\Gamma).$ \end{theorem} As $\cap_{c=1}^{\infty}M_c=\{1\}$, it follows that $\cap_{c=1}^{\infty} \gamma_c(A_\Gamma)= \{1\}.$ By induction on $c$ one can show that for $a,b \in U^*(A_\G),$ the element $ab^{-1}$ lies in $M_c$ if and only if $a_i=b_i$ for $i < c.$ This gives one an effective way of studying the lower central series of ${A_\G}.$ \subsection{More information on the structure of $L$} We use $\mu$ to find free generating sets for the groups $L_c$: \begin{proposition} \label{l2basis} The free abelian group $L_2=\gamma_2(A_\Gamma)/\gamma_3(A_\Gamma)$ has a basis given by the set $S=\{[v_i,v_j]\gamma_3(A_\Gamma):i < j, [v_i,v_j]\neq 0\}.$ \end{proposition} \begin{proof} As $\gamma_2(A_\Gamma)=[A_\Gamma,A_\Gamma]$ is the normal closure of $S$, any element $g \in \gamma_2(A_\Gamma)$ is of the form $g=g_1s_1^{e_1}g_1^{-1}\cdots g_k s_k^{e_k} g_k^{-1}$, where $s_i \in S$ and $g_i \in A_\Gamma$.
However, $[g_i,s_i] \in \gamma_3(A_\Gamma)$ for all $i$, therefore the image of $g$ in $\gamma_2(A_\Gamma)/\gamma_3(A_\Gamma)$ is equal to $s_1^{e_1}\cdots s_k^{e_k}$, so $L_2$ is generated by $S$. As $\mu$ is injective and $\mu^{-1}(M_c)=\gamma_c(A_\Gamma)$ for all $c$, the map $\mu$ induces an injection $\bar{\mu}\colon\thinspace \gamma_2(A_\Gamma)/\gamma_3(A_\Gamma) \to M_2 / M_3,$ and this can be composed further with the homomorphism $f\colon\thinspace M_2 / M_3 \to U_2(A_\Gamma)$ given by $a + M_3 \mapsto a_2.$ The free abelian group (or free $\mathbb{Z}$--module) $U_2(A_\Gamma)$ has a basis consisting of elements of length $2$ in $A_\Gamma$, the set: $$\{\mathbf{v_i^2}:v_i \in V(\Gamma)\}\cup\{\mathbf{v_iv_j}:v_i,v_j\in V(\Gamma) \text{ and $i <j$ or $[v_i,v_j]\neq0$}\}.$$Then $f\bar{\mu}([v_i,v_j]\gamma_3(A_\Gamma))=\mathbf{v_iv_j-v_jv_i}$, so the images of the elements of $S$ are linearly independent in $U_2(A_\Gamma)$; hence $S$ is a basis of $L_2$. \end{proof} We shall use Proposition \ref{l2basis} in Section \ref{s:johnson} to describe the abelianisation of $\mathcal{T}(A_\Gamma)$. The last thing we need is to use this associative algebra to give us information about the structure of $L$. \begin{proposition} \label{p:centres} Let $v \in V(\Gamma)$. Let $a=\sum_{i=1}^{\infty}a_i$ be an element of $U(A_\G)$, and let $b=[\mathbf{v},a]=\sum_{i=1}^{\infty}b_i$. Then $b_{i+1}=0$ only if $supp(a_i) \subset st(v)$. \end{proposition} \begin{proof} Suppose that $b_{i+1}=0$. Let $a_i=\sum_{k=1}^m\lambda_kg_k$ be a decomposition of $a_i$ into a sum of distinct positive elements $g_k$, where each $\lambda_k \in \mathbb{Z}\smallsetminus\{0\}$. As $b_{i+1}=\mathbf{v}a_i-a_i\mathbf{v}=0$, we have $\mathbf{v}a_i=a_i\mathbf{v}$.
We have two injective maps $g_k \mapsto \mathbf{v}g_k$ and $g_k \mapsto g_k\mathbf{v}$ from the set of positive elements of length $i$ to the set of positive elements of length $i+1,$ therefore as $\mathbf{v}a_i=a_i\mathbf{v}$ we have $\{g_k\mathbf{v}\}_{k=1}^m=\{\mathbf{v}g_k\}_{k=1}^m.$ Suppose that each $g_k$ has a reduced representative where the first $j$ letters commute with $v$ (this holds vacuously when $j=0$). For each $g_k$ there exists $l$ such that $g_k\mathbf{v}=\mathbf{v}g_l,$ so that $g_k=vg_lv^{-1}$ in $A_\Gamma.$ There is a reduced representative $w_l$ of $g_l$ where the first $j$ letters commute with $v$ and we can find a reduced representative of $g_k$ by a sequence of swaps of consecutive commuting letters and one cancellation in $vw_lv^{-1}$. If $v^{-1}$ is cancelled with a letter $v$ from the last $i-j$ letters of $w_l$ then certainly the first $j+1$ letters in this representative of $g_k$ commute with $v$. If $v^{-1}$ is cancelled by the initial letter $v$ or one of the first $j$ letters of $w_l$, then $v^{-1}$ must commute with the last $i-j$ letters of $w_l$, so every letter in this reduced representative of $g_k$ commutes with $v$. In either case there is a representative of $g_k$ where the first $j+1$ letters commute with $v$. By induction this must hold for all $j \leq i$, hence $supp(g_k)\subset st(v)$ for all $k$ and $supp(a_i) \subset st(v)$. \end{proof} The above lets us control the centre of $L$ and other associated structures. Let $p$ be a positive integer and let $(L)_p$ be the Lie algebra obtained by taking the tensor product of $\mathbb{Z}/p\mathbb{Z}$ with $L$. Let $L^c$ be the quotient algebra $L/\oplus_{i>c}L_i.$ We use $Z(\_)$ to denote the centre of a Lie algebra. (cf. \cite{MR979493}, Chapter 2, Exercise 3.3) \begin{theorem} \label{c:centres} If $Z(A_\Gamma)=\{1\}$ then $Z(L)=Z((L)_p)=0$ and $Z(L^c)$ is the image of $L_c$ under the quotient map $L\to L^c$.
\end{theorem} \begin{proof} Elements of $L$ of the form in Equation \eqref{mono} map to homogeneous polynomials in $U(A_\Gamma)$ under the map $\epsilon\colon\thinspace L \to \mathcal{L}(U(A_\G))$, where the coefficient of each element of $A_\Gamma$ is $\pm 1$. As such elements form a basis of $L$, it follows that tensoring $L$ with $\mathbb{Z}/p\mathbb{Z}$ corresponds to taking the image of $L$ under the map $\epsilon$ and tensoring $U(A_\G)$ with $\mathbb{Z}/p\mathbb{Z}$. Also, as $\epsilon$ maps $L_i \to U_i(A_\Gamma)$, the algebra $L^c$ is isomorphic to the image of $L$ in $U^c=U(A_\Gamma)/\oplus_{i>c}U_i(A_\Gamma)$. If we take a nonzero element $a \in U(A_\G)$ then as $Z(A_\Gamma)=\{1\}$, for each nonzero $a_i$ there exists $v \in V(\Gamma)$ such that $supp(a_i) \not \subset st(v)$. Then $[\mathbf{v},a]\neq0$ by Proposition~\ref{p:centres}, and if $a$ is nonzero in $\mathbb{Z}/p\mathbb{Z} \otimes U(A_\G)$, then so is $[\mathbf{v},a]$ by looking at coefficients. Similarly if $a$ is nonzero in $U^c$, there exists $i\leq c$ with $a_i \neq 0$. By picking $v$ such that $supp(a_i) \not \subset st(v)$, the $(i+1)$st homogeneous part of $[\mathbf{v},a]$ is nonzero by Proposition~\ref{p:centres}. The proof then follows by noting that if $a \in \im \epsilon$ then as $\mathbf{v}\in \im \epsilon$, so is $[\mathbf{v},a]$ for all $v \in V(\Gamma).$ \end{proof} \section{The Andreadakis--Johnson Filtration of $\mathcal{T}(A_\Gamma)$}\label{s:johnson} In this section we follow the methods of Bass and Lubotsky \cite{BL} to extend the notion of \emph{higher Johnson homomorphisms} from the free group setting to general right-angled Artin groups. Coupled with the work in the previous sections of the paper, these allow us to describe the abelianisation of $\mathcal{T}(A_\Gamma)$, and show that $\mathcal{T}(A_\Gamma)$ has a separating central series $G_1,G_2,G_3,\ldots$ where each quotient $G_i/G_{i+1}$ is a finitely generated free abelian group.
This was first studied in the case of free groups by Andreadakis. We show that the image of this series in ${\rm{Out}}(A_\Gamma)$ enjoys the same properties. \subsection{Definition and application to $H_1(\mathcal{T}(A_\Gamma))$} As $\gamma_c(A_\Gamma)$ is characteristic, there is a natural map ${\rm{Aut}}(A_\Gamma) \to \aut(A_\Gamma/\gamma_c(A_\Gamma))$. Let $G_{c-1}$ be the kernel of this map. Then $G_0={\rm{Aut}}(A_\Gamma)$ and $G_1=\mathcal{T}(A_\Gamma)$. The following proposition is proved in the same way as Proposition 2.2 of \cite{BW2010}. \begin{proposition} Let $\phi \in G_c$, where $c \geq 1$. The mapping $\bar{g} \mapsto \phi(g)g^{-1}.\gamma_{c+2}(A_\Gamma)$ induces a homomorphism $$ \tau_c\colon\thinspace G_c \to {\rm{Hom}}(L_1,L_{c+1}) $$ such that $\ker(\tau_c)=G_{c+1}$. \end{proposition} As $\cap_{c=1}^{\infty} \gamma_c(A_\Gamma)=\{1\}$, it follows that $\cap_{c=1}^{\infty}G_c=\{1\}.$ As $L_1$ and $L_{c+1}$ are free abelian, ${\rm{Hom}}(L_1,L_{c+1})$ is free abelian, and therefore $G_c/G_{c+1}$ is free abelian. One can check that $[G_c,G_d] \subset G_{c+d}$, so that $G_1,G_2,G_3,\ldots$ is a central series of $\mathcal{T}(A_\Gamma)$. The rank of each $L_c$ has been calculated in \cite{MR1179860}, although more work is needed to calculate the ranks of the quotients $G_c/G_{c+1}$. In the free group case $G_1/G_2$, $G_2/G_3$ and $G_3/G_4$ are known \cite{Pettet05,Satoh06} but as yet there is no general formula. In this paper we restrict ourselves to studying the abelianisation of $\mathcal{T}(A_\Gamma)$, using the generating set $\mathcal{M}_\Gamma$ defined in Section \ref{s:torelli intro}. \begin{theorem} The first Johnson homomorphism $\tau_1$ maps $\mathcal{M}_\Gamma$ to a free generating set of a subgroup of ${\rm{Hom}}(L_1,L_2)$. The abelianisation of $\mathcal{T}(A_\Gamma)$ is isomorphic to the free abelian group on the set $\mathcal{M}_\Gamma$, and $G_2$ is the commutator subgroup of $\mathcal{T}(A_\Gamma)$.
\end{theorem} \begin{proof} By Proposition \ref{l2basis}, the free abelian group $L_2$ has a basis $S=\{[v_i,v_j]\gamma_3(A_\Gamma):i < j, [v_i,v_j]\neq 0\}.$ This allows us to obtain an explicit description of the images of elements of $\mathcal{M}_\Gamma$: \begin{align*} \tau_1(K_{ij})(v_l)&= \begin{cases} 1\gamma_3(A_\Gamma) & \text{if $v_l \not \in \Gamma_{ij}$} \\ [v_j,v_l]\gamma_3(A_\Gamma) & \text{if $v_l \in \Gamma_{ij}$} \end{cases} \\ \tau_1(K_{ijk})(v_l)&= \begin{cases} 1\gamma_3(A_\Gamma) & \text{if $l \neq i$} \\ [v_j,v_k]\gamma_3(A_\Gamma) & \text{if $l = i$} \end{cases} \end{align*} These elements are linearly independent in ${\rm{Hom}}(L_1,L_2)$. The second statement follows immediately, and the third follows as $G_2=\ker(\tau_1)$. \end{proof} \begin{cor} $\mathcal{M}_{\Gamma}$ is a minimal generating set of $\mathcal{T}(A_\Gamma)$. \end{cor} \subsection{Example: the pentagon} Suppose that $\Gamma$ is the pentagon shown in Figure~\ref{fig:pentagon}. \begin{figure} [ht] \includegraphics{pentagon} \centering \caption{The pentagon.} \label{fig:pentagon} \end{figure} In this case, if $v_i \leq v_j$ then $v_i = v_j$, therefore no elements of the form $K_{ijk}$ exist in $\mathcal{T}(A_\Gamma)$. Removing $st(v_i)$ from $\Gamma$ leaves exactly one connected component consisting of the two vertices opposite $v_i$, therefore $$\mathcal{M}_\Gamma=\{K_{13},K_{24},K_{35},K_{41},K_{52}\}.$$ Hence $H_1(\mathcal{T}(A_\Gamma))=G_1/G_2 \cong \mathbb{Z}^5$. Also, the images of $[v_1,v_3],[v_1,v_4],[v_2,v_4],[v_2,v_5],[v_3,v_5]$ form a basis of $L_2=\gamma_2(A_\Gamma)/\gamma_3(A_\Gamma) \cong \mathbb{Z}^5$, therefore ${\rm{Hom}}(L_1,L_2) \cong \mathbb{Z}^{25}.$ In particular $\tau_1$ is not surjective (in contrast to the free group situation --- see \cite{Pettet05}). \subsection{Extension to ${\rm{Out}}(A_\Gamma)$} We now turn our attention to the images of $G_1,G_2,\ldots$ in ${\rm{Out}}(A_\Gamma)$, which we will label $H_1,H_2, \ldots$ respectively.
Let $\pi$ be the natural projection ${\rm{Aut}}(A_\Gamma) \to {\rm{Out}}(A_\Gamma)$. The action of an element of $A_\Gamma$ on itself by conjugation induces a homomorphism $ad\colon\thinspace A_\Gamma \to {\rm{Aut}}(A_\Gamma)$. For each $g \in A_\Gamma \setminus \{1\}$ there exists a unique integer $d$ such that $g \in \gamma_d(A_\Gamma)$ and $g \not \in \gamma_{d+1}(A_\Gamma)$. In the language of Section \ref{s:lie theory}, we identify $g$ with the element $g\gamma_{d+1}(A_\Gamma)$ in the submodule $L_d$ of the Lie algebra $L$. We use this to study the map $ad$ in the following lemma: \begin{lemma}\label{l:e1} If $Z(A_\Gamma)=\{1\}$, then $g \in \gamma_c(A_\Gamma)$ if and only if $ad(g) \in G_c$. \end{lemma} \begin{proof} If $g \in \gamma_c(A_\Gamma)$ then $ghg^{-1}=h \mod \gamma_{c+1}(A_\Gamma)$, for all $h \in A_\Gamma$. Hence $ad(g) \in G_c$. Conversely, suppose that $g \not \in \gamma_c(A_\Gamma)$. Then $g \in L_d$ for some $d <c$, and by Theorem \ref{c:centres}, the image of $g$ under the quotient map $L \to L^c$ is not central. As $L$ is generated by $v_1,\ldots,v_n$, there exists $v_i$ such that the image of $[g,v_i]$ is nonzero in $L^c$. Hence $[g,v_i] \neq 1 \mod \gamma_{c+1}(A_\Gamma)$, so $gv_ig^{-1} \neq v_i \mod \gamma_{c+1}(A_\Gamma)$, and $ad(g) \not \in G_c$. \end{proof} By the above, $ad(\gamma_c(A_\Gamma)) \subset G_c$, and the induced map $\gamma_c(A_\Gamma)/\gamma_{c+1}(A_\Gamma) \to G_c /G_{c+1}$ is injective. Furthermore, it is clear that $\pi(ad(A_\Gamma))=\{1\}$, and if $\phi \in G_c$ but $[\phi] \in H_{c+1}$, then there exists $g \in A_\Gamma$ such that $\phi G_{c+1}=ad(g) G_{c+1}$. As $\phi \in G_c$, by Lemma \ref{l:e1} we have $g \in \gamma_c(A_\Gamma)$, hence $\phi G_{c+1}$ is in the image of $\gamma_c(A_\Gamma)/\gamma_{c+1}(A_\Gamma)$. 
Therefore, when $Z(A_\Gamma)=\{1\}$ we have an exact sequence of abelian groups: $$ 0 \to \gamma_{c}(A_\Gamma)/\gamma_{c+1}(A_\Gamma) \xrightarrow{\alpha} G_c/G_{c+1} \xrightarrow{\beta} H_c/H_{c+1} \to 0,$$ where $\alpha$ and $\beta$ are induced by $ad$ and $\pi$ respectively. \begin{theorem} If $Z(A_\Gamma)=\{1\}$ and $c \geq 1$ then the group $H_c /H_{c+1}$ is free abelian. \end{theorem} \begin{proof} By the exact sequence above, $$H_c/H_{c+1}\cong (G_c/G_{c+1})/(ad(\gamma_c(A_\Gamma))G_{c+1}/G_{c+1}).$$ As $G_c/G_{c+1}$ is free abelian, it is sufficient to show that if there exists $\phi \in G_c$, $g \in A_\Gamma$, and a prime $p$ such that $ad(g) G_{c+1}=\phi^p G_{c+1}$, then there exists $h \in A_\Gamma$ such that $ad(h) G_{c+1}=\phi G_{c+1}$. Suppose that $\phi$, $g$, and $p$ exist as above. As $\phi \in G_c$, for every $x \in A_\Gamma$, there exists $w_x \in \gamma_{c+1}(A_\Gamma)$ such that $\phi(x)=xw_x$. One can check using commutator calculus that $$\phi^p(x) = x w_x^p \mod \gamma_{c+2}(A_\Gamma),$$ therefore $$gxg^{-1} = x w_x^p \mod \gamma_{c+2}(A_\Gamma).$$ It follows that the image of $[g,x]$ is zero in $(L)_p$, for all $x \in A_\Gamma$, hence the image of $g$ lies in $Z((L)_p)$. By Theorem \ref{c:centres}, $g$ must lie in the kernel of the map $L \to (L)_p$, so there exists $h \in \gamma_c(A_\Gamma)$ such that $g=h^p \mod \gamma_{c+1}(A_\Gamma)$. Then $[g,x]=[h^p,x] \mod \gamma_{c+2}(A_\Gamma)$ for all $x \in A_\Gamma$, so $ad(h)^p G_{c+1}=ad(g) G_{c+1}= \phi^p G_{c+1}$. As $G_c /G_{c+1}$ is torsion-free it has unique roots. Hence $ad(h) G_{c+1} = \phi G_{c+1}$. \end{proof} We now adapt a well-known fact about ${\rm{Out}}(F_n)$ to ${\rm{Out}}(A_\Gamma)$. \begin{proposition} The intersection $\cap_{c=1}^{\infty}H_c$ is trivial. \end{proposition} \begin{proof} Let $\phi \in {\rm{Aut}}(A_\Gamma)$, and suppose that $[\phi] \in H_c$ for all $c$.
Then for every element $g \in A_\Gamma$, we know that $\phi(g)$ is conjugate to $g$ in $A_\Gamma/\gamma_c(A_\Gamma)$ for all $c$. Toinet has shown that RAAGs are conjugacy separable in finite $p$--group (and therefore nilpotent) quotients \cite{Toinet10}; this tells us that $\phi(g)$ is conjugate to $g$. Furthermore, Minasyan (\cite{Minasyan2009}, Proposition 6.9) has shown that if $\phi$ takes every element of $A_\Gamma$ to a conjugate, then $\phi$ itself is an inner automorphism. Hence $\cap_{c=1}^{\infty}H_c=\{1\}$. \end{proof} Therefore if $Z(A_\Gamma)$ is trivial then $H_1,H_2,H_3,\ldots$ is a central series of $\overline{\mathcal{T}}(A_\Gamma)$, with trivial intersection, such that the consecutive quotients $H_c/H_{c+1}$ are free abelian. \begin{cor} \label{c:tor} If $Z(A_\Gamma)$ is trivial then $\overline{\mathcal{T}}(A_\Gamma)$ is residually torsion-free nilpotent. \end{cor} \subsection{The situation when $Z(A_\Gamma)$ is nontrivial} Suppose that $Z(A_\Gamma)$ is nontrivial. Let $[v]$ be the unique maximal equivalence class of vertices in $\Gamma$. By Proposition~\ref{p:projections2}, there is a restriction map $R_v$ and a projection map $P_v$ as follows: \begin{align*} R_v&\colon\thinspace {\rm{Out}}(A_\Gamma) \to \gl(Z(A_\Gamma)) \cong \gl(A_{[v]}) \\ P_v&\colon\thinspace {\rm{Out}}(A_\Gamma) \to \out(A_\Gamma/Z(A_\Gamma)) \cong \out(A_{lk[v]}). \end{align*} The kernel of the map $R_v\times P_v$ is the free abelian subgroup $\text{Tr}$ generated by the transvections $[\rho_{ij}]$ such that $v_i \in lk[v]$ and $v_j \in [v]$. Nontrivial elements of $\text{Tr}$ act nontrivially on $A_\Gamma^{ab}$, as do elements whose image under $R_v$ is nontrivial. It follows that $\overline{\mathcal{T}}(A_\Gamma)$ is mapped isomorphically under $P_v$ onto $\overline{\mathcal{T}}(A_{lk[v]})$.
As the centre of $A_{lk[v]}$ is trivial, this lets us promote the above work to any right-angled Artin group: \begin{theorem} \label{t:tfn} For any graph $\Gamma$, the group $\overline{\mathcal{T}}(A_\Gamma)$ is residually torsion-free nilpotent. \end{theorem} \section{$\SL$--dimension and projection homomorphisms for subgroups of ${\rm{Out}}(A_\Gamma)$.} \label{s:sldim} In Section \ref{s:main} we will restrict the actions of certain groups on RAAGs of bounded $\SL$--dimension by using an induction argument based on the number of vertices in $\Gamma$, combined with projection homomorphisms. Unfortunately our definition of $\SL$--dimension described in the introduction can behave badly under projection, restriction, and exclusion homomorphisms. In particular, if $v$ is a maximal vertex in a connected graph $\Gamma$, then it is not always true that $d_{SL}(\out(A_{lk[v]})) \leq d_{SL}({\rm{Out}}(A_\Gamma))$. To get round this problem, we will extend the definition of $\SL$--dimension to arbitrary subgroups of ${\rm{Out}}(A_\Gamma)$, and show that if instead we look at the image of ${\rm{Out}}(A_\Gamma)$ under such homomorphisms, then the $\SL$--dimension will not increase (for instance, it will always be the case that $d_{SL}(P_v({\rm{Out}}(A_\Gamma))) \leq d_{SL}({\rm{Out}}(A_\Gamma))).$ For the remainder of this section we fix a subgroup $G$ of ${\rm{Out}}(A_\Gamma)$ generated by a subset $T \subset \mathcal{A}_\Gamma$. We shall assume that $T$ is maximal, so that $T=\mathcal{A}_\Gamma \cap G$. We are going to study the action of $G$ on $H_1(A_\Gamma)$, so to simplify matters we will often want to ignore graph symmetries and inversions. \begin{proposition} Let $G^0$ be the subgroup of $G$ generated by the extended partial conjugations, inversions, and transvections in $T$ and let $SG^{0}$ be the subgroup of $G$ generated solely by the extended partial conjugations and transvections in $T$. Then $G^0$ and $SG^0$ are finite index normal subgroups of $G$.
\end{proposition} \begin{proof} Suppose $\alpha$ is a graph symmetry that moves the vertices according to the permutation $\sigma$. We find that $\alpha K_{ij}\alpha^{-1}=K_{\sigma(i)\sigma(j)}$, $\alpha \rho_{ij} \alpha^{-1}=\rho_{\sigma(i)\sigma(j)}$ and $\alpha s_i \alpha^{-1}=s_{\sigma(i)}$. As we assumed that $T$ is maximal, if $[\phi]$ and $[\alpha]$ belong to $T$, then so does $[\alpha \phi \alpha^{-1}]$. Therefore if $W$ is a word in $T \cup T^{-1}$ one may shuffle graph symmetries along so that they all occur at the beginning of $W$. As the group of graph symmetries is finite, this shows that $G^0$ is finite index in $G$, and the above computations verify that $G^0$ is normal in $G$. Similarly, with inversions one verifies that: $$ \rho_{kl}s_i=\begin{cases} s_i\rho_{kl} & i \neq k,l \\ s_i\rho_{ki}^{-1} & i=l \\ s_iK_{il}^{-1}\rho_{il}^{-1} & i=k,[v_i,v_l]\neq0 \\ s_i\rho_{il}^{-1} & i=k,[v_i,v_l]=0\end{cases} \text{and} \quad K_{kl}s_i=\begin{cases} s_iK_{kl} & i \neq l \\ s_iK_{kl}^{-1} & i=l \end{cases}.$$ These, together with the computations for graph symmetries above, show that $SG^0$ is normal in $G$, and we may write any element of $G^0$ in the form $[s_1^{\epsilon_1}\ldots s_n^{\epsilon_n}\phi']$, where $\epsilon_i \in \{0,1\}$ and $\phi'$ is a product of extended partial conjugations and transvections. Therefore $SG^0$ is of index at most $2^n$ in $G^0$. \end{proof} Let us now look at the generators of ${\rm{Aut}}(A_\Gamma)$ (respectively ${\rm{Out}}(A_\Gamma)$) under the map $\Phi: {\rm{Aut}}(A_\Gamma) \to {\rm{GL}}_n(\mathbb{Z})$ (respectively $\overline{\Phi}:{\rm{Out}}(A_\Gamma) \to {\rm{GL}}_n(\mathbb{Z})$). If $\alpha$ is a graph symmetry, then $\Phi(\alpha)$ is the appropriate permutation matrix corresponding to the permutation $\alpha$ induces on the vertices.
For a partial conjugation $K_{ij}$ we see that $\Phi(K_{ij})$ is the identity matrix $I$; $\Phi$ sends the inversion $s_i$ to the matrix $S_i$ which has $1$ everywhere on the diagonal except for $-1$ at the $(i,i)$th position, and zeroes everywhere else, and $\Phi$ sends the transvection $\rho_{ij}$ to the matrix $T_{ji}=I + E_{ji}$, where $E_{ji}$ is the elementary matrix with $1$ in the $(j,i)$th position, and zeroes everywhere else. The swapping between $\rho_{ij}$ and $T_{ji}$ may seem a little unnatural, but occurs as a choice of having ${\rm{Aut}}(A_\Gamma)$ act on the left. It follows that the image of ${\rm{SAut^0}}(A_\Gamma)$ under $\Phi$ is the subgroup of ${\rm{GL}}_n(\mathbb{Z})$ generated by matrices of the form $T_{ij}$, where $v_j \leq v_i$. After restricting our attention to $G^0$ and $SG^0$, our main tool for studying the image of ${\rm{Out}}(A_\Gamma)$ and its subgroups under $\overline{\Phi}$ will be to order the vertices of $\Gamma$ in the manner described in Sections \ref{s:relation} and \ref{s:gordering}. If we order the vertices as in Section \ref{s:relation}, then $[\rho_{ij}] \in {\rm{Out}}(A_\Gamma)$ only if either $v_i$ and $v_j$ are in the same equivalence class of vertices, or $i \leq j$. It follows that a matrix in the image of $\overline{\Phi}|_{{\rm{Out}}^0(A_\Gamma)}$ has a block decomposition of the form: $$ M=\begin{pmatrix}M_1 & 0 & \dots & 0 \\ * & M_2 & \dots & 0 \\ \hdotsfor{4} \\ * & * & \dots & M_{r} \end{pmatrix},$$ where the $*$ in the $(i,j)$th entry in the block decomposition may be nonzero if $[v_{m_j}]\leq[v_{m_i}]$, but zero otherwise.
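To make these images concrete, consider the smallest case in which a transvection exists; the graph below is our own illustration, chosen only for brevity.

```latex
% Our illustration: \Gamma is a single edge joining v_1 and v_2, so
% that A_\Gamma \cong \mathbb{Z}^2 and \rho_{12}: v_1 -> v_1v_2 exists.
% In the coordinates on H_1(A_\Gamma) given by the images of v_1, v_2:
\[
\Phi(\rho_{12}) \;=\; T_{21} \;=\; I + E_{21} \;=\;
\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\qquad
\Phi(s_1) \;=\; S_1 \;=\;
\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
% The first column of T_{21} records the fact that \rho_{12} sends the
% image of v_1 to the sum of the images of v_1 and v_2, while fixing
% the image of v_2; this is the left action responsible for the index
% swap between \rho_{ij} and T_{ji}.
```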
Similarly, given a subgroup $G\leq {\rm{Out}}(A_\Gamma)$ generated by a subset of $\mathcal{A}_\Gamma$, if we order the vertices by the method given in Section~\ref{s:gordering}, a matrix in the image of $\overline{\Phi}|_{G^0}$ has a block decomposition of the form: $$ M=\begin{pmatrix}N_1 & 0 & \dots & 0 \\ * & N_2 & \dots & 0 \\ \hdotsfor{4} \\ * & * & \dots & N_{r'} \end{pmatrix},$$ where the $*$ in the $(i,j)$th entry in the block decomposition may be nonzero if $[v_{l_j}]_G\leq[v_{l_i}]_G$, but zero otherwise. If $[v]_G$ is an abelian equivalence class of vertices, then $G$ contains a copy of $\SL(A_{[v]_G})$ generated by the $[\rho_{ij}]$ with $v_i,v_j \in [v]_G$. This fact lends itself to the following definition: \begin{definition} For any subgroup $G \leq {\rm{Out}}(A_\Gamma)$, the $\SL$--dimension of $G$, $d_{SL}(G)$, is defined to be the size of the largest abelian equivalence class under $\sim_G$. \end{definition} Roughly speaking, $d_{SL}(G)$ is the largest integer such that $G$ contains an obvious copy of $\SL_{d_{SL}(G)}(\mathbb{Z})$. Note that $d_{SL}({\rm{Out}}(A_\Gamma))$ is simply the size of the largest abelian equivalence class under the relation $\leq$ defined on the vertices. As each abelian equivalence class of vertices is a clique in $\Gamma$, the $\SL$--dimension of ${\rm{Out}}(A_\Gamma)$ is less than or equal to the size of a maximal clique in $\Gamma$ (this is known as the \emph{dimension of} $A_\Gamma$). We can now look at how $G$ and its $\SL$--dimension behave under restriction, exclusion, and projection maps. \begin{lemma}\label{l:dimr} Suppose that $\Gamma'$ is a full subgraph of $\Gamma$ and the conjugacy class of $A_{\Gamma'}$ in $A_\Gamma$ is preserved by $G$. Then under the restriction map $R_{\Gamma'}$, the group $R_{\Gamma'}(G)$ is generated by a subset of $\mathcal{A}_{\Gamma'}$, and $d_{SL}(R_{\Gamma'}(G))\leq d_{SL}(G)$.
\end{lemma} \begin{proof} One first checks that for an element $[\phi] \in T$, either $R_{\Gamma'}([\phi])$ is trivial or $R_{\Gamma'}([\phi]) \in \mathcal{A}_{\Gamma'}$. This is obvious in the case of graph symmetries, inversions, and transvections. In the case of partial conjugations, if $v_j$ is not in $\Gamma'$, or if $\Gamma_{ij}\cap\Gamma'=\emptyset$, then $R_{\Gamma'}([K_{ij}])$ is trivial. Otherwise, $\Gamma_{ij} \cap \Gamma'$ is a union of connected components of $\Gamma' - st(v_j)$, so that $R_{\Gamma'}([K_{ij}])$ is an extended partial conjugation of $A_{\Gamma'}$. This proves the first part of the lemma. To prove the second part of the lemma, we first give an alternative definition of $d_{SL}(G)$. Elements in the image of $G^0$ under $\overline{\Phi}$ are of the form: \begin{equation} \label{matrix} M=\begin{pmatrix}N_1 & 0 & \dots & 0 \\ * & N_2 & \dots & 0 \\ \hdotsfor{4} \\ * & * & \dots & N_{r'} \end{pmatrix}, \end{equation} where each $N_i$ is an invertible matrix of size $l_{i+1}-l_i.$ Each of the blocks is associated to either an abelian or non-abelian equivalence class in $\sim_G$, so $d_{SL}(G)$ is the size of the largest diagonal block in this decomposition associated to an abelian equivalence class. Abelian equivalence classes of $\sim_G$ are mapped to abelian equivalence classes of $\sim_{R_{\Gamma'}(G)}$, and the action of $R_{\Gamma'}(G)^0$ on $A_{\Gamma'}^{ab}$ is obtained by removing rows and columns from the decomposition given in Equation \eqref{matrix}. Therefore, the largest diagonal block in the action of $R_{\Gamma'}(G)^0$ on $A_{\Gamma'}^{ab}$ associated to an abelian equivalence class will be of size less than or equal to $d_{SL}(G)$. Hence $d_{SL}(R_{\Gamma'}(G))\leq d_{SL}(G)$. \end{proof} The following lemma is shown in the same way: \begin{lemma}\label{l:dime} Let $\Gamma'$ be a full subgraph of $\Gamma$. Suppose that the normal subgroup generated by $A_{\Gamma'}$ in $A_\Gamma$ is preserved by $G$.
Then under the exclusion map $E_{\Gamma'}$, the group $E_{\Gamma'}(G)$ is generated by a subset of $\mathcal{A}_{\Gamma'}$, and $d_{SL}(E_{\Gamma'}(G))\leq d_{SL}(G)$. \end{lemma} As projection maps are obtained by the concatenation of a restriction and an exclusion map, combining the previous two lemmas gives the following: \begin{proposition}\label{p:dimp} Suppose that $\Gamma$ is connected and $v$ is a maximal vertex of $\Gamma$. Under the projection homomorphism $P_v$ of Example \ref{e:maps}, the group $P_v(G^0)$ is generated by a subset of $\mathcal{A}_{lk[v]}$ and $d_{SL}(P_v(G^0))\leq d_{SL}(G)=d_{SL}(G^0)$. \end{proposition} \section{Proof of Theorem \ref{t:main}}\label{s:main} Suppose that a group $\Lambda$ admits no surjective homomorphisms to $\mathbb{Z}$, so that ${\rm{Hom}}(\Lambda,\mathbb{Z})=0$. If a group $H$ satisfies the property $$\text{\emph{every nontrivial subgroup $H'\leq H$ admits a surjective homomorphism $H'\to\mathbb{Z}$,}}$$ then ${\rm{Hom}}(\Lambda,H)=0$ as well. For instance, a simple induction argument on nilpotency class gives the following: \begin{proposition}\label{p:nilpotent} Suppose that ${\rm{Hom}}(\Lambda,\mathbb{Z})=0$ and $G$ is a finitely generated torsion-free nilpotent group. Then every homomorphism $f:\Lambda \to G$ is trivial. \end{proposition} By Theorem \ref{t:tfn} we know that $\overline{\mathcal{T}}(A_\Gamma)$ is residually torsion-free nilpotent. Combining this with Proposition \ref{p:nilpotent} gives: \begin{proposition}\label{p:tor} Suppose that ${\rm{Hom}}(\Lambda,\mathbb{Z})=0$ and $f:\Lambda \to \overline{\mathcal{T}}(A_\Gamma)$ is a homomorphism. Then $f$ is trivial. \end{proposition} The overriding theme here is that we may build homomorphism rigidity results from weaker criteria by carefully studying a group's subgroups and quotients.
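The induction behind Proposition \ref{p:nilpotent} is short; one way to organise it (our reconstruction, using only standard facts about nilpotent groups) is the following.

```latex
% Sketch of Proposition p:nilpotent (our reconstruction of the argument).
Let $G$ be a finitely generated torsion-free nilpotent group and let
$f\colon\thinspace \Lambda \to G$ be a homomorphism. Finitely generated
nilpotent groups satisfy the maximal condition on subgroups, so
$f(\Lambda)$ is again finitely generated, torsion-free, and nilpotent.
If $f(\Lambda)$ were nontrivial its abelianisation would be infinite
(by induction on the nilpotency class, a finitely generated nilpotent
group with finite abelianisation is finite), and an infinite finitely
generated abelian group surjects onto $\mathbb{Z}$. This would give a
surjection $\Lambda \to \mathbb{Z}$, contradicting
${\rm{Hom}}(\Lambda,\mathbb{Z})=0$. Hence $f$ is trivial.
```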
This is also the flavour of our main theorem: \begin{theorem}\label{t:main} Suppose that $G$ is a subgroup of ${\rm{Out}}(A_\Gamma)$ generated by a subset $T \subset \mathcal{A}_\Gamma$ and $d_{SL}(G) \leq m$. Let $F(\Gamma)$ be the size of a maximal, discrete, full subgraph of $\Gamma$. Let $\Lambda$ be a group. Suppose that for each finite index subgroup $\Lambda'\leq\Lambda$, we have: \begin{itemize} \item Every homomorphism $\Lambda' \to {\rm{SL}}_m(\mathbb{Z})$ has finite image, \item For all $N \leq F(\Gamma)$, every homomorphism $\Lambda' \to \out(F_N)$ has finite image, \item ${\rm{Hom}}(\Lambda',\mathbb{Z})=0$. \end{itemize} Then every homomorphism $f:\Lambda \to G$ has finite image.\end{theorem} The remainder of this section will be dedicated to a proof of Theorem \ref{t:main}. We proceed by induction on the number of vertices in $\Gamma$. If $\Gamma$ contains only one vertex, then ${\rm{Out}}(A_\Gamma) \cong \mathbb{Z}/2\mathbb{Z}$, so there is no work to do. As the conditions on $\Lambda$ are also satisfied by finite index subgroups, we shall allow ourselves to pass to such subgroups without further comment. \begin{rem}If either $m \geq 2$ or $F(\Gamma) \geq 2$, then as there exist no homomorphisms from $\Lambda'$ to ${\rm{SL}}_m(\mathbb{Z})$ or $\out(F_{F(\Gamma)})$ with infinite image, it follows that ${\rm{Hom}}(\Lambda',\mathbb{Z})=0$ also. This is always the case when $G={\rm{Out}}(A_\Gamma)$ and $|V(\Gamma)| \geq 2$. Hence the above statement of Theorem \ref{t:main} is a strengthening of the version given in the introduction. \end{rem} Let $f:\Lambda \to G$ be a homomorphism satisfying the hypotheses of the theorem. There are three cases to consider: \subsection{$\Gamma$ is disconnected.} In this case $A_\Gamma \cong F_N \ast_{i=1}^k A_{\Gamma_i}$, where each $\Gamma_i$ is a connected graph containing at least two vertices.
Let $\Lambda'=f^{-1}({\rm{Out}}^0(A_\Gamma)).$ As ${\rm{Out}}^0(A_\Gamma)$ is finite index in ${\rm{Out}}(A_\Gamma)$, this means $\Lambda'$ is finite index in $\Lambda$. As in Example \ref{e:disconnected}, for each $\Gamma_i$ there is a restriction homomorphism:$$R_i:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{\Gamma_i}).$$ By Lemma \ref{l:dimr}, $R_i(G)$ is generated by a subset $T_i \subset \mathcal{A}_{\Gamma_i}$, and $d_{SL}(R_i(G)) \leq d_{SL}(G)$. As $\Gamma_i$ is a proper subgraph of $\Gamma$, we have $F(\Gamma_i)\leq F(\Gamma)$ and $|V(\Gamma_i)| < |V(\Gamma)|$. Hence, by induction $R_if(\Lambda')$ is finite for each $i$, and there exists a finite index subgroup $\Lambda_i$ of $\Lambda'$ such that $R_if(\Lambda_i)$ is trivial. We may also consider the exclusion homomorphism: $$E:{\rm{Out}}(A_\Gamma) \to \out(F_N).$$ As $N \leq F(\Gamma)$, the group $\ker(Ef)$ is a finite index subgroup of $\Lambda$. Let $$\Lambda''=\cap_{i=1}^k\Lambda_i\cap\ker(Ef).$$ As $\Lambda''$ is the intersection of a finite number of finite index subgroups, it is also finite index in $\Lambda$. We now study the action of $\Lambda''$ on $H_1(A_\Gamma)$. The transvection $[\rho_{ij}]$ belongs to ${\rm{Out}}^0(A_\Gamma)$ only if either $v_i$ and $v_j$ belong to the same connected component of $\Gamma$, or if $v_i$ is an isolated vertex of $\Gamma$. Therefore the action of ${\rm{Out}}^0(A_\Gamma)$ on $H_1(A_\Gamma)$ has a block decomposition of the following form: $$\begin{pmatrix}M_1 & 0 & \dots & 0 & * \\ 0 & M_2 & \dots & 0 & * \\ \hdotsfor{5} \\ 0 & 0 & \dots & M_k & * \\ 0 & 0 & \dots & 0 & M_{k+1} \end{pmatrix},$$ where $M_i$ corresponds to the action on $A_{\Gamma_i}$, and $M_{k+1}$ corresponds to the action on $F_N$. 
However, as $R_if(\Lambda'')$ is trivial for each $i$, and $Ef(\Lambda'')$ is trivial, the action of $\Lambda''$ on $H_1(A_\Gamma)$ is of the form: $$\begin{pmatrix}I & 0 & \dots & 0 & * \\ 0 & I & \dots & 0 & * \\ \hdotsfor{5} \\ 0 & 0 & \dots & I & * \\ 0 & 0 & \dots & 0 & I \end{pmatrix}.$$ This means there is a homomorphism from $\Lambda''$ to an abelian subgroup of ${\rm{GL}}_n(\mathbb{Z})$. As ${\rm{Hom}}(\Lambda'',\mathbb{Z})=0$, this homomorphism must be trivial. Hence $f(\Lambda'') \subset \overline{\mathcal{T}}(A_\Gamma).$ By Proposition \ref{p:tor}, this shows that $f(\Lambda'')$ is trivial. Hence $f(\Lambda)$ is finite. \subsection{$\Gamma$ is connected and $Z(A_\Gamma)$ is trivial.} Let $\Lambda'=f^{-1}({\rm{Out}}^0(A_\Gamma))=f^{-1}(G^0)$. For each maximal vertex $v$ of $\Gamma$ we have a projection homomorphism:$$P_v:{\rm{Out}}^0(A_\Gamma) \to \out^0(A_{lk[v]}).$$ By Proposition~\ref{p:dimp}, $P_v(G^0)$ is generated by a subset $T_v \subset \mathcal{A}_{lk[v]}$ and $d_{SL}(P_v(G^0))\leq d_{SL}(G^0)=d_{SL}(G)$. As $lk[v]$ is a proper subgraph of $\Gamma$, we have $F(lk[v])\leq F(\Gamma)$ and $|V(lk[v])| < |V(\Gamma)|$. Therefore by induction $P_vf(\Lambda')$ is finite. Let $$\Lambda''=\bigcap_{[v] \text{ max.}}\ker (P_vf).$$ Then $\Lambda''$ is a finite index subgroup of $\Lambda$ and $f(\Lambda'')$ lies in the kernel of the amalgamated projection homomorphism: $$P:{\rm{Out}}^0(A_\Gamma) \to \bigoplus_{[v] \text{ max.}}\out^0(A_{lk[v]}).$$ By Theorem \ref{t:projections}, $\ker P$ is a finitely generated free abelian group. As ${\rm{Hom}}(\Lambda'',\mathbb{Z})=0$, a homomorphism from $\Lambda''$ to $\ker P$ must be trivial. Therefore $f(\Lambda'')$ is trivial and $f(\Lambda)$ is finite. \subsection{$\Gamma$ is connected and $Z(A_\Gamma)$ is nontrivial.} Suppose that $Z(A_\Gamma)$ is nontrivial. Let $[v]$ be the unique maximal equivalence class in $\Gamma$. Let $R_v$ and $P_v$ be the restriction and projection maps given in Proposition~\ref{p:projections2}.
If $[v]$ is not equal to the whole of $\Gamma$ then by induction $P_vf(\Lambda)$ and $R_vf(\Lambda)$ are both finite, so there exists a finite index subgroup $\Lambda'$ of $\Lambda$ such that $f(\Lambda')$ is contained in the kernel of $P_v \times R_v$. This kernel is the free abelian group $\text{Tr}$, and as ${\rm{Hom}}(\Lambda',\mathbb{Z})=0$, the image of $\Lambda'$ in $\text{Tr}$ is trivial. Hence $f(\Lambda)$ is finite. Therefore we may assume that $\Gamma=[v]$. We now look at the $\sim_G$ equivalence classes in $\Gamma$. As $A_\Gamma$ is free abelian, each $[v_i]_G \subset [v]$ is abelian, and as $d_{SL}(G) \leq m$, every such $[v_i]_G$ contains at most $m$ vertices. Therefore matrices in the image of $G^0$ under $\overline{\Phi}$ are of the form: $$ M=\begin{pmatrix}N_1 & 0 & \dots & 0 \\ * & N_2 & \dots & 0 \\ \hdotsfor{4} \\ * & * & \dots & N_{r'} \end{pmatrix},$$ where the $*$ in the $(i,j)$th block is possibly nonzero if $[v_{l_j}]\leq[v_{l_i}]$. For each $i$, we can look at the projection $M \mapsto N_{i}$ to obtain a homomorphism $g_i:SG^0 \to \text{SL}_{l_{i+1}-l_i}(\mathbb{Z})$. As $l_{i+1}-l_i \leq m$, our hypothesis on $\Lambda$ implies that $g_if(f^{-1}(SG^0))$ is finite for all $i$. Let $\Lambda_i$ be the kernel of each map $g_if$ restricted to $f^{-1}(SG^0)$. Each $\Lambda_i$ is finite index in $\Lambda$. Let $\Lambda'= \cap_{i=1}^{r'} \Lambda_i.$ Then matrices in the image of $\Lambda'$ under $f$ are of the form: $$ M=\begin{pmatrix}I & 0 & \dots & 0 \\ * & I & \dots & 0 \\ \hdotsfor{4} \\ * & * & \dots & I \end{pmatrix},$$ therefore $f(\Lambda')$ is a torsion-free nilpotent group. By Proposition \ref{p:nilpotent}, $f(\Lambda')$ is trivial. Hence $f(\Lambda)$ is finite, and this finishes the final case of the theorem. \section{Consequences of Theorem \ref{t:main}} \label{s:consequences} We say that a group $\Lambda$ is $\mathbb{Z}$--\emph{averse} if no finite index subgroup of $\Lambda$ contains a normal subgroup that maps surjectively to $\mathbb{Z}$.
This restriction gives a large class of groups to which $\Lambda$ cannot map. For instance, in \cite{BW2010} Bridson and the author prove the following theorem: \begin{theorem}\label{t:free} Suppose that $\Lambda$ is $\mathbb{Z}$--averse and $f:\Lambda \to {\rm{Out}}(F_n)$ is a homomorphism. Then $f(\Lambda)$ is finite. \end{theorem} Note that if $\Lambda$ is $\mathbb{Z}$--averse, then every finite index subgroup of $\Lambda$ is also $\mathbb{Z}$--averse. As there are no homomorphisms from a $\mathbb{Z}$--averse group to $\SL_2(\mathbb{Z})$ with infinite image (as $\SL_2(\mathbb{Z})$ is virtually free), combining the above with Theorem \ref{t:main} we obtain: \begin{cor} If $\Lambda$ is a $\mathbb{Z}$--averse group, and $\Gamma$ is a finite graph that satisfies ${d_{SL}({\rm{Out}}(A_\Gamma))\leq 2},$ then every homomorphism $f:\Lambda \to {\rm{Out}}(A_\Gamma)$ has finite image. \end{cor} We would like to apply Theorem \ref{t:main} to higher-rank lattices in Lie groups. For the remainder of this section $\Lambda$ will be an irreducible lattice in a semisimple real Lie group $G$ with real rank ${\rm{rank}}_\mathbb{R} G \geq 2$, finite centre, and no compact factors. Such lattices are $\mathbb{Z}$-averse by Margulis' normal subgroup theorem, which states that if $\Lambda'$ is a normal subgroup of $\Lambda$ then either $\Lambda/\Lambda'$ is finite, or $\Lambda'\subset Z(G)$, so $\Lambda'$ is finite. The work of Margulis also lets us restrict the linear representations of such lattices: \begin{proposition} \label{p:SLrigidity} If ${\rm{rank}}_\mathbb{R} G \geq k$ then every homomorphism $f:\Lambda \to \SL_k(\mathbb{Z})$ has finite image. \end{proposition} To prove this we appeal to Margulis superrigidity. The following two theorems follow from \cite{Margulis91}, Chapter IX, Theorems 6.15 and 6.16 and the remarks in 6.17: \begin{theorem}\label{t:ss} Let H be a real algebraic group and $f:\Lambda \to H$ a homomorphism. 
The Zariski closure of the image of $f$, denoted $\overline{f(\Lambda)}$, is semisimple. \end{theorem} \begin{theorem}[Margulis' Superrigidity Theorem]\label{t:sr} Let $H$ be a connected, semisimple, real algebraic group and $f:\Lambda \to H$ a homomorphism. If \begin{itemize} \item H is adjoint (equivalently $Z(H)=1$) and has no compact factors, and \item $f(\Lambda)$ is Zariski dense in H, \end{itemize} then $f$ extends uniquely to a continuous homomorphism $\tilde{f}:G \to H$. Furthermore, if $Z(G)=1$ and $f(\Lambda)$ is nontrivial and discrete, then $\tilde{f}$ is an isomorphism. \end{theorem} We may combine these to prove Proposition \ref{p:SLrigidity}: \begin{proof}[Proof of Proposition \ref{p:SLrigidity}] Let $f:\Lambda \to \SL_k(\mathbb{Z})$ be a homomorphism. By Theorem~\ref{t:ss}, the Zariski closure of the image $\overline{f(\Lambda)} \subset \SL_k(\mathbb{R})$ is semisimple. Also, $\overline{f(\Lambda)}$ has finitely many connected components --- let $\overline{f(\Lambda)}_0$ be the connected component containing the identity. Decompose $\overline{f(\Lambda)}_0=H_1 \times K$, where $K$ is a maximal compact factor. Then $H_1$ is a connected semisimple real algebraic group with no compact factors. We look at the finite index subgroup $\Lambda_1=f^{-1}(H_1)$ of $\Lambda$, so that $\overline{f(\Lambda_1)}=H_1$. As the centre of a subgroup of an algebraic group is contained in the centre of its Zariski closure, $f(Z(\Lambda_1)) \subset Z(H_1)$. This allows us to factor out centres in the groups involved. Let $G_2=G/Z(G)$, $\Lambda_2=\Lambda_1/Z(\Lambda_1)=\Lambda_1/(\Lambda_1 \cap Z(G))$ and $H_2=H_1/Z(H_1)$. Then there is an induced map $f_2:\Lambda_2 \to H_2$ satisfying the conditions of Theorem \ref{t:sr}. Therefore if $f_2(\Lambda_2)\neq1$ there is an isomorphism $\tilde{f}_2:G_2 \to H_2$. 
However \begin{align*}{\rm{rank}}_\mathbb{R} G_2={\rm{rank}}_\mathbb{R} G&\geq k \\ {\rm{rank}}_\mathbb{R} H_2 = {\rm{rank}}_\mathbb{R} H_1 \leq {\rm{rank}}_\mathbb{R} \SL_k(\mathbb{R}) &= k-1.\end{align*} This contradicts the isomorphism between $H_2$ and $G_2$. Therefore $f_2(\Lambda_2)=1$. As $Z(\Lambda_1)$ is finite, and $\Lambda_1$ is finite index in $\Lambda$, this shows that the image of $\Lambda$ under $f$ is finite. \end{proof} Combining Proposition \ref{p:SLrigidity} with Theorems \ref{t:free} and \ref{t:main} gives: \begin{theorem}\label{t:lr} Let $G$ be a real semisimple Lie group with finite centre, no compact factors, and ${\rm{rank}}_\mathbb{R} G \geq 2$. Let $\Lambda$ be an irreducible lattice in $G$. If ${\rm{rank}}_\mathbb{R} G \geq d_{SL}({\rm{Out}}(A_\Gamma))$, then every homomorphism $f:\Lambda \to {\rm{Out}}(A_\Gamma)$ has finite image. \end{theorem} The following corollary justifies our definition of $\SL$--dimension, and shows that one cannot hide any larger copies of $\SL_n$ inside ${\rm{Out}}(A_\Gamma)$: \begin{cor} ${\rm{Out}}(A_\Gamma)$ contains a subgroup isomorphic to $\SL_k(\mathbb{Z})$ if and only if $k \leq d_{SL}({\rm{Out}}(A_\Gamma))$. \end{cor} \bibliographystyle{plain}
On February 14, 2017, the Court of Appeals of the State of Washington Division Three ("Appellate Court") affirmed the trial court granting summary judgment to the medical malpractice defendants, finding that the affidavit of the plaintiff's medical expert that was submitted in opposition to the defendants' summary judgment motions had failed to provide competent expert testimony on the issues of standard of care, causation, and damages. The decedent was treated for chest pains in April 2010. Medical testing and imaging reports as well as laboratory results showed positive tuberculosis cultures from the decedent's sputum sample, and additional sputum testing was positive for tuberculosis. A physician then prescribed the decedent medications for the treatment of tuberculosis, including Isoniazid, also known as isonicotinylhydrazide (INH). The plaintiff and her medical expert alleged that the decedent did not suffer from tuberculosis but the decedent took the tuberculosis medications nonetheless. The defendant public health clinic where the decedent was treated alleged that it sought to monitor the decedent's liver function but he failed to show for testing. After ingesting the prescribed drugs, the decedent suffered from nausea, vomiting, dizziness, lack of energy, and a loss of appetite, and his skin also changed to a reddish-yellow shade. The plaintiff alleged that in June 2010, the decedent expressed a desire to discontinue taking the tuberculosis medications because of severe discomfort but a physician insisted that he continue taking the medications, under threat of incarceration if he failed to do so. The decedent continued to receive medical care for tuberculosis in July and August 2010. In July 2010, the decedent's condition worsened: he became unable to walk, drive, or eat; he experienced body shakes, hand tremors, and confusion; and, his abdomen swelled, all of which he reported to his medical providers. 
The medical providers then discovered serious deviations in his laboratory results. The decedent died of liver failure on August 6, 2010. The plaintiff's Washington medical malpractice lawsuit against the public health clinic and one of its physicians alleged medical negligence and wrongful death. The defendants moved for summary judgment, contending that the plaintiff lacked expert medical testimony to support her claim of medical malpractice. In response to the summary judgment motions, the plaintiff filed a declaration by her medical expert, who is a licensed physician in the State of Washington and owns an internal medicine clinic in Yakima. The plaintiff's expert specializes in the areas of complex medical patients with chronic pain symptoms, geriatric patients, and internal medicine patients. The trial court concluded that the detailed declaration of the plaintiff's medical expert nonetheless failed to provide competent expert testimony on the issues of standard of care, causation, and damages, and therefore granted the defendants' motions. The plaintiff appealed. If you or a loved one were injured due to medical negligence in Washington State or in another U.S. state, you should promptly find a Washington medical malpractice attorney, or a medical malpractice attorney in your U.S. state, who may investigate your medical malpractice claim for you and represent you in a medical malpractice case, if appropriate. This entry was posted on Wednesday, February 22nd, 2017 at 5:15 am.
'Abbas Hardens Position Regarding September U.N. Gambit Palestinian Authority head Mahmoud 'Abbas told the PLO Executive Committee on Wednesday that he plans to seek recognition of the state of Palestine in the U.N. General Assembly in September even if Israel and the Palestinians agree to resume negotiations before then. 'Abbas' remarks indicate a hardening of his position, which until now has been that the U.N. gambit would be halted if Israel agreed to his conditions for returning to the negotiating table. It remains uncertain exactly what the Palestinians will be able to accomplish at Turtle Bay since officials there ruled that the General Assembly cannot deal with issues of membership unless the Security Council first passes an enabling resolution, which the U.S. has promised to veto. Palestinian Prime Minister Salam Fayyad has opined publicly that declaring statehood at the U.N. without Israeli cooperation would result in an empty "symbolic declaration." Palestinian officials in Ramallah have expressed conflicting opinions to The Media Line, many believing that the U.N. option will fizzle. Others believe a statehood declaration that is not followed up by movement on the ground will be a source of tension that could manifest in violent demonstrations. Some also expressed concern at what they are calling 'Abbas' confrontational approach toward the United States which, they fear, could jeopardize vital foreign aid from Washington. 'Abbas also called for mass demonstrations — a la "Arab Spring" — against Israel. He said, "I insist on popular resistance and I insist that it be unarmed popular resistance so that nobody misunderstands us. We are now inspired by the protests of the Arab Spring, all of which cry out 'peaceful', 'peaceful.'" URL to article: https://themedialine.org/mideast-daily-news/abbas-hardens-position-regarding-september-u-n-gambit/
Guide - Soldier of Fortune 2 Wiki Guide - IGN.

Final Fantasy VI is the sixth main installment in the Final Fantasy series, developed and published by Squaresoft. It was released in April 1994 for the Super Nintendo Entertainment System in Japan, and released as Final Fantasy III in North America in October 1994 (with alterations made due to Nintendo of America's guidelines at the time). Later, it was re-released worldwide as Final Fantasy VI.

Soldier of Fortune is a first-person action shooter title from Raven Software. You play a soldier-for-hire who must travel to some of the world's most dangerous political hotspots in order to carry out your task. Take part in over 30 missions, with tasks ranging from locating nukes on a speeding freight train to searching out an Iraqi oil refinery. You'll be utilizing the world's most lethal.

Soldier of Fortune II: Double Helix (Gold Edition) is a video game published in 2003 on Windows by Activision Publishing, Inc. It's an action game in the shooter genre, a licensed title with contemporary themes.

Does everyone deserve a second chance? Find out which gaming comebacks worked out and which should have retired for good. As Rambo prepares to return to the big screen after a twenty-year absence, videogames.yahoo.com looks back over video game series and companies that have tried to pull off similar comebacks. Which games vanished for years, only to return out of nowhere? Which companies.

PPC Games. From OSx86.
Game Name: Soldier Of Fortune 2
Working: Yes
Playable: Not Very
Tested Resolutions: 640x480, 800x600
Graphics Card: GMA900
Audio: Yes
Networking: No noticeable problems
OS X Version: 10.4.4
Game Version: 1.0, 1.03 (Gold) and Gold with OSP patch
Multiplayer Notes: tested on a 2.8 Celeron 336, 512 RAM, using xbench

As a PC game, Soldier of Fortune has much of that same appeal mixed with an over-the-top budget blockbuster theme.
It looks and plays like an action movie, made to feel more so with levels that involve hijacking a running freight train or defending your sidekick while he disarms a time bomb. The game is divided into ten missions that take you to famous war-torn hotspots like Iraq and Russia.

I am a computer noob, so sorry if this is a stupid question. I am trying to use my PS3 controller with it. I downloaded some software that is supposed to allow it to work as a PC gamepad. Doom will.

Call of Duty: WWII is a first-person shooter video game developed by Sledgehammer Games for the Xbox One, PlayStation 4 and Windows. Leaked concept art for the game was initially found in late March 2017, a month before the official reveal on April 26, 2017. The game was released on November 3, 2017. Call of Duty: WWII is the fourteenth game in the Call of Duty franchise and Sledgehammer Games.

Soldier of Fortune II still the goriest game of. - GameSpot.

Walkthrough for The Witcher 3: Wild Hunt, following our recommended Game Progress Route. The game is divided into a prologue and 3 acts. The main quest progresses the act, and certain side quests become available as this progression happens. Other side quests and.
Hello again and welcome to another edition of Johnny's Game Profile. Today, I'm going to talk about a strange PC game called The Chaos Engine. The story of the game goes like this: a team of scientists has discovered a way to manipulate human DNA and make super mutants. But the experiment soon went horribly wrong; it wasn't long at all before the mutants created their very own.

Fallout: Brotherhood of Steel, commonly abbreviated as Fallout: BOS, FOBOS, or simply BOS, is an action role-playing game developed and produced by Interplay Entertainment for the PlayStation 2 and Xbox game consoles. Released on January 13, 2004, Fallout: Brotherhood of Steel was the fourth video game to be set in the Fallout universe and the first to be made for video game consoles.

Soldier of Fortune: Payback is the third installment of the Soldier of Fortune game series. Unlike the previous two Soldier of Fortune games, which were developed by Raven Software utilizing the Quake 2 and Quake 3 engines, Payback was developed by Cauldron HQ with Cauldron's in-house CloakNT engine, used in their previous first-person shooter game, Chaser.

Soldier of Fortune II: Double Helix is a first-person shooter video game developed by Raven Software, the sequel to Soldier of Fortune. It was developed using the id Tech 3 engine as opposed to the original's id Tech 2, and published in 2002. Once again, Raven hired John Mullins to act as a consultant on the game. Based on criticisms of the original game, Raven Software developed Soldier of Fortune II.

Soldier of Fortune 2: Double Helix: The stakes are even higher in this heart-pounding sequel to the FPS hit!
Soldier of Fortune: Payback: Payback is hell!
Space Trader:
Star Trek: Voyager - Elite Force: Set phasers to frag.
Star Trek: Voyager - Elite Force Expansion Pack: Set phasers to frag with the all-new Expansion Pack!
Star Trek: Elite Force 2: The alien invaders show no mercy, and.
Soldier of Fortune II follows up where the original installment finished off. Boasting one of the finest arsenals of weapons and various different usages in-game, such as the always enjoyable akimbo styles, this game has quickly become a favourite among the more trigger-happy out there. Another positive is that generally you are fighting humans, adding to a more enjoyable gaming experience. With.

It delivers the most realistic covert-operative themed shooter ever created. Like a blockbuster action-thriller, Soldier of Fortune plunges players into the secret and deadly world of the modern-day gun-for-hire via dozens of real-to-life missions spanning five continents and innovative multiplayer modes. Features: 30 deadly missions.

It's funny and complicated and completely nonsensical in the overblown fashion that only Valve could create. Basically, you're on RED or BLU (whose uniforms are red and blue, respectively). You've been hired to shoot the people who dress differently.

Arctic Warfare Caitlyn (re-released for purchase in the store, released first in September 2011 as a gift skin for buyers of the November 2011 issue of PC Gaming); the following skins were also released with this patch, but were not available for purchase until Monday, January 2.
The French Anglo-Arab is a breed of warmblood horse developed in France by crossing the English Thoroughbred with the Arabian.

Breeding history

In the first half of the 19th century, breeders in England began refining their excellent native Thoroughbred by adding the qualities of Arabian horses. Soon afterwards, breeders in France, Germany, Poland and also Bohemia tried similar crosses. In France, systematic breeding of Anglo-Arabs began in 1836 at the Pompadour stud. Its then director, E. Gayot, had the Arabian stallions Massoud and Aslan Turk put to the Thoroughbred mares Dair, Common and Selim. The resulting offspring were of very high quality, carrying half Arabian and half English blood. The stud succeeded in stabilizing the breed, that is, in avoiding further crossing with other breeds. Later, other studs took up the breeding as well, and several lines of the French Anglo-Arab arose.

Appearance

It is a tall, slender horse. Its head tends to be finer than that of the English Thoroughbred, with a straight profile and large eyes. The neck is longer than in Arabian horses, and the tail is set high. The legs are long and slim. All colours are permitted. At the Olympic Games in Berlin, this type of horse was declared the most beautiful horse of the games.

Characteristics

It is not as excitable as Thoroughbreds, nor as fast, but it has excellent gaits and is hardy and enduring. Each stud produces slightly different types, so some horses are lighter and faster.

Uses

It is used chiefly for sport: racing, dressage and other competitions. The Pompadour horses served in the army and were used as light carriage horses.

References External links Horse breeds originating in France
Wright, Bates: Similar views, different directions

By MEG LANDERS, Mail Tribune

Senate District — Democratic candidate Alan Bates and Republican opponent Jim Wright share similar stands on the issues but say their approaches and experience set them apart in the Nov. 2 race.

Both list education as the state's No. 1 funding priority, followed by public safety and health care and human services. They support caps on medical malpractice suits and oppose compensation for land-use laws.

But when it comes to campaigning, the agreements between Bates, a 54-year-old Ashland physician, and Wright, a 64-year-old Medford businessman, come to an abrupt halt.

"The campaign turned negative and he's continued to be negative," said Bates, referring to a flier released in September by Wright's campaign that altered one of Bates' quotes about a sales tax. "(If) the public doesn't know who to believe, they won't believe anything," he said, declining further comment about his opponent.

Wright, who in a recent flier blamed Bates and other politicians for dragging on the last legislative session for 227 days, said he's just questioning Bates' record and priorities. "There's been absolutely nothing personal," said Wright.

Bates said being a physician and knowing how to read a $12 billion budget are two assets he'll bring to the Senate if elected. He is currently the representative for House District 5. "The other thing I'm noted for up there, I suppose, is being bipartisan," he said. He said experience is essential at a crucial time for Oregon.

Wright said his background is also valuable experience. "I come from more of the business side of things and working within the means we have," he said. "I believe that there are ways of incorporating business tools."

Senate District — includes Ashland, Talent, Phoenix, Jacksonville and a majority of Medford.

Bates has served in the Oregon House since 2001. He grew up in the Pacific Northwest, served in the U.S.
Army in Vietnam and has lived in Southern Oregon for the past 20 years. He was chief of medicine at both Rogue Valley and Providence Medford medical centers and a board member for a group of primary care physicians. He served three terms on the Eagle Point School Board and an appointed term on the governor's task force on Quality in Education.

Wright is a 1958 graduate of Ashland Senior High School and a 1963 graduate of what is now Southern Oregon University. He is former president and current vice chairman of LTM Inc., a Medford paving and aggregate company. He is a member of several community organizations, including the Medford chamber and RVMC Foundation board. He has also served as chairman of the Oregon Economic Development Commission and Oregon State Finance Committee. When he was 11 or 12 years old, he was diagnosed with retinitis pigmentosa, a progressive eye disease. He stopped driving 17 years ago but said he can read with the help of technological aids.

Wright said education funding needs to be the first commitment in the Legislature's budget so schools know early on how much money they'll have to work with in the coming biennium. He said funding could come from cuts that would weed out government inefficiencies. "I come from the less government side rather than more," he said.

Bates agreed that education must take top priority at the Legislature. "We have to fund that at a level that's adequate," he said, adding that putting 30 or more students in one classroom is not adequate funding. "That's not teaching, that's crowd control," he said.

Bates said he is strongly in favor of Measure 35, which would place a $500,000 cap on non-economic damages in medical malpractice suits. He said Southern Oregon, which already has trouble recruiting physicians, is going to lose more doctors without caps on malpractice suits. Wright agreed, adding that physicians are retiring early because malpractice insurance is too costly.

Another area where they agree is Measure 37.
If passed Nov. 2, it would force government agencies to either compensate some landowners for restrictive zoning laws or allow them to develop their properties.

"I'm extremely worried about 37," said Bates, adding that the state's land-use laws may need tweaking but Measure 37 goes too far.

"I'd say I'm opposing it," said Wright, adding that Oregonians clearly want changes. "We do need to do more than a little bit of modification to our land-use laws; they're trying to send us a message," he said. "I think we have to listen."

Alan Bates
Ashland. Physician; Oregon House District 5 representative, 2001 to present.
Party affiliation: Democrat.
Eagle Point School Board, Governor's Committee for Excellence in Education, Oregon Health Services Commission.
Doctor of osteopathy, College of Osteopathic Medicine, Kansas City, Mo.; bachelor's, Central Washington State University.
Campaign contributions: $171,343 (including a beginning balance of $49,160).
Largest contributions: $10,000 each from Oregon Nurse political action committee, Oregon AFSCME Council 75, Doctors for Healthy Communities PAC; and $5,042 from Oregon Education Association.
Expenditures: $84,121 so far, primarily on surveys, brochures, fund-raising events and advertising.

Jim Wright
Medford. Vice chairman and former president of LTM Inc., a Medford paving and aggregate company.
Party affiliation: Republican.
Past chairman, Oregon Economic Development Commission and Oregon State Finance Committee; past member, Ashland planning council.
Bachelor's, Southern Oregon State College.
Largest contributions: $30,000 from the Leadership Fund, $29,829 from the Oregon Victory Committee, $10,000 from the Oregon Restaurant Association PAC, $10,000 from the AGC Committee for Action.
Expenditures: $166,379 so far, primarily on brochures, fund-raising events, advertising and commercials.

Reach reporter Meg Landers at 776-4481 or e-mail .
Guided by the conviction that facts matter, Principled Technologies (PT) provides industry-leading technology assessment, fact-based marketing, and learning & development services. We provide the engaging, compelling marketing and learning content you need to win in the attention economy.

I am Roger Moore and I have ridden with CBC since the team's early days (sporting the original white jerseys!); I have been a Team CBC sponsor for about a dozen years. I love riding with the team and sharing the camaraderie of hanging out after rides, as I am sure you all do. When I am not hammering pedals I am a clinical psychologist and I have a practice called "The Center for Psychological Wellness, PA" (www.psychwellness.net), which is located in Cary (across from Western Wake Medical Center). I provide treatment for adults, adolescents, couples and families. I have extensive experience in helping people with anxiety that presents in a variety of ways, such as panic, phobias, post-traumatic stress, obsessive/compulsive disorder, and generalized anxiety. I am adept at helping folks address issues such as anger, chronic pain, weight control, depression and ADHD, and I work quite a bit with people who are struggling with relationship issues and personality disorders. I also work with couples and families who are facing various challenges, including relationship issues, parent/child struggles, separation and divorce dynamics, co-parenting and integrating blended families. If you find that you are facing some of life's challenges that can't quite be fixed with long rides on your bike, or know of someone else who may be in need of mental health assistance, I would be happy to see if there are ways I can be of help. Roger B. Moore, Jr., Ph.D.

Our experienced legal team will give you and your family the care and attention you deserve. We handle injury and disability cases throughout NC, and we handle catastrophic injury, defective drug and defective product cases nationally.
At Henson Fuerst, we're a law firm that is known for handling complicated cases.
Heslington Brain

Human brain (Mass "A"): 70 millimetres (2.8 in) × 60 millimetres (2.4 in) × 30 millimetres (1.2 in)
Dated: 673–482 BC
Discovered: by York Archaeological Trust in August 2008 at Heslington, Yorkshire

The Heslington Brain is a 2,600-year-old human brain found inside a skull buried in a pit in Heslington, Yorkshire, in England, by York Archaeological Trust in 2008. It is the oldest preserved brain ever found in Eurasia, and is believed to be the best-preserved ancient brain in the world.[1] The skull was discovered during an archaeological dig commissioned by the University of York on the site of its new campus on the outskirts of the city of York. The area was found to have been the site of well-developed permanent habitation between 2,000 and 3,000 years before the present day. A number of possibly ritualistic objects were found to have been deposited in several pits, including the skull, which had belonged to a man probably in his 30s. He had been hanged before being decapitated with a knife, and his skull appears to have been buried immediately. The rest of the body was missing. Although it is not known why he was killed, it is possible that it may have been a human sacrifice or ritual murder. The brain was found while the skull was being cleaned. It had survived despite the rest of the tissue on the skull having disappeared long ago. After being extracted at York Hospital, the brain was subjected to a range of medical and forensic examinations by York Archaeological Trust, which found that it was remarkably intact, though it had shrunk to only about 20% of its original size. It showed few signs of decay, though most of its original material had been replaced by an as yet unidentified organic compound, due to chemical changes during burial.
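The "2,600-year-old" figure in the lead is straightforward arithmetic on the 673–482 BC range and the 2008 discovery date quoted above; a minimal sketch (the helper function name is ours, and it assumes the usual convention that there is no year zero between 1 BC and AD 1):

```python
# How the "2,600-year-old" headline figure relates to the 673-482 BC range.
# Assumptions: discovery in 2008, and no year zero between 1 BC and AD 1.

def years_before(ce_year, bce_year):
    """Elapsed years from bce_year BC to ce_year AD."""
    return bce_year + ce_year - 1

oldest = years_before(2008, 673)    # 2680 years before discovery
youngest = years_before(2008, 482)  # 2489 years before discovery
midpoint = (oldest + youngest) / 2  # 2584.5 years

# Rounded to the nearest century, the midpoint gives the quoted figure.
assert round(midpoint, -2) == 2600
```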
According to the archaeologists and scientists who have examined it, the brain has a "resilient, tofu-like texture". It is not clear why the Heslington brain survived, although the presence of a wet, anoxic environment underground seems to have been an essential factor, and research is still ongoing to shed light on how the local soil conditions may have contributed to its preservation.

Discovery

The site where the brain was discovered is about 3 kilometres (1.9 mi) south-east of York city centre on the eastern edge of Heslington village. It is situated partly on the ridge of an ancient glacial moraine and partly in the basin of the Vale of York.[2] Until the construction of the Heslington East campus of the University of York in 2009, the site was used as agricultural land.[3]

Survey and excavation work, commissioned by the university, was carried out on the site from 2003. It culminated in a full-scale excavation carried out in 2007–08 by York Archaeological Trust in an area of just over 8 hectares (20 acres).[2] The archaeologists found that the landscape had been inhabited and farmed for thousands of years. The remains of prehistoric fields, buildings and trackways were discovered, dating from the Bronze Age through to the middle of the Iron Age,[3][4] with traces of earlier activity as far back as the Mesolithic and Neolithic periods.[2] During the Iron Age, the area appears to have been the site of a permanent settlement. The excavators found a number of circular features, which they interpreted as the remains of roundhouses.[5] The inhabitants seem to have relocated during the Roman period to a site further up the ridge, leaving the area of the Iron Age settlement to revert to fields.[2]

Around a dozen pits were found on the site and possibly ritualistic objects were found in a number of them.
Some pits had been pierced with a single stake, while a number of pits included "burned" cobbles of a local type of stone. The headless body of a red deer had been deposited in a drainage channel, and a red deer antler was found in an Iron Age ditch.[3] In one waterlogged pit, a human skull – with the jawbone and the first two vertebrae still attached – was discovered by archaeologist Jim Williams in August 2008, lying face-down at the bottom of the pit. Williams cleaned it up and recorded it, but had a dentist's appointment that afternoon (for a filling), and so the task of lifting the skull fell to his colleague and friend Rupert Lotherington. The find was seen as unusual, but its true importance was not discovered until after it had been transported, within a block of soil, to the Finds Laboratory of the York Archaeological Trust. As Finds Officer Rachel Cubitt was cleaning it, she noticed that something was loose inside the skull.[2] She peered through its base and saw that it contained a "yellow substance". She said later that it "jogged my memory of a university lecture on the rare survival of ancient brain tissue ... we gave the skull special conservation treatment as a result and sought expert medical opinion."[4] The skull and its contents were put into cold storage and were examined using a variety of medical and autopsy techniques.[3]

Analysis

The excavation at Heslington East, May 2008 (photograph caption).

The skull was found to be that of a man aged between 26 and 45 years old at the time of death, likely in his mid-30s. Radiocarbon dating found that the man had died some time between the 7th and 5th centuries BCE (673–482 BCE). He belonged to the mitochondrial DNA haplogroup J1d, which had not previously been seen in Britain, in either living or dead individuals.
J1d has been identified previously in only a few people from Tuscany and the Middle East; it may have been more widely present in Britain in the past, and lost through genetic drift.[3] He had died from traumatic spondylolisthesis – a complete fracture of his spine that is usually associated with death by hanging. Shortly after death his head and upper vertebrae had been severed from his body, almost surgically, using a thin-bladed knife. Although there is evidence from other sites of "trophy heads" in Iron Age societies, it appears that the head was immediately deposited in the pit and buried in fine-grained wet sediment.[3] The reason for the killing is unclear. The archaeological context suggests that the man may have been killed for ritual or sacrificial purposes.[6]

CT scanning was carried out at York Hospital, where the skull was opened and the contents were removed and forensically examined. The interior of the skull was found to contain several large fragments of brain mixed with sediment. The brain had shrunk to about 20% of its original size, but many anatomical features were still readily identifiable. It was remarkably well preserved and had few signs of decay other than the presence of a few bacterial spores.[3] One of the fragments clearly showed the neural folds of a cerebral lobe.[2] The discovery of such a well-preserved brain is all the more remarkable considering the fragility of all human brains following death. Even when placed in a chilled environment in a mortuary, brains quickly dissolve into liquid.[1] The high fat content of the brain means that it is usually the first major organ to deteriorate.[5] The brains of the crew of the American Civil War submarine H. L.
Hunley were recovered along with their skeletonised bodies in 2000, and brains were found in the wreck of the English warship Mary Rose, but archaeologists found that they "liquefied within a matter of minutes" unless preserved immediately in formalin.[citation needed] Brains found in terrestrial environments have tended to be better preserved, as the surviving material tends to have a higher proportion of hydrophobic (water-repellent) matter than a fresh brain. There appear to be a number of routes by which brains have been preserved, though a common factor appears to be the existence of a wet, anoxic environment.[3] The presence of such an environment is thought to have been responsible for a similar but less complete preservation of brain matter discovered in the 1990s during the construction of a new magistrates' court in Hull.[2]

The preservation of the brain has been attributed to several factors. First, the head was buried in waterlogged, anoxic soil, even though only the brain, and not the rest of the tissue, survived burial: only scanty traces of other tissue remain on the rest of the skull. Second, the brain had undergone unusual chemical changes as a result of being severed and the conditions under which it was buried. In contrast to other brains found, no adipocere – a fatty compound formed through the process of decay – was detected in the Heslington brain, probably because the head was severed from the body before the brain had started to decay. There had also been a major decrease in the amount of proteins and lipids and their replacement by fatty acids and other substances produced as degradation products. Much of the original substance of the brain has been replaced by a high-molecular-weight, long-chain hydrocarbon material that is as yet unidentified.
According to the archaeologists who have examined it, the brain is "odourless", with a "smooth surface" and a "resilient, tofu-like texture".[3] Third, the human body tends to decay from the inside out, consumed by a post-mortem swarm of bacteria from the gut which spread around the body via blood from the alimentary tract. In this particular case, the head was severed from the alimentary tract and drained of blood, so the intestinal bacteria did not have an opportunity to contaminate it. The precise mechanism by which the Heslington brain was preserved is unclear, however; in a bid to shed light on this question, researchers buried a number of pigs' heads in and around the campus to see what happened to them.[7]

In a paper published on 8 January 2020 in the Journal of the Royal Society Interface, Axel Petzold et al. performed molecular studies on a sample of the brain and identified over 800 proteins. Some of these proteins were in good enough condition to elicit an immune response. The team also found that the proteins had folded themselves into tightly packed aggregates which are more stable than those found in live brains. This may partly explain how the Heslington brain has been able to stave off decomposition, in addition to the wet, anoxic environment in which the skull was found, which could have prevented aerobic microorganisms from surviving.[8]

References

^ a b Brice, Makini (17 August 2012). "2,600-Year-Old Brain Found in England, in Remarkably Fresh Condition". Medical Daily. Archived from the original on 20 August 2012. Retrieved 20 August 2012.
^ a b c d e f g "37th Annual Report 2008–2009" (PDF). York Archaeological Trust. 2009. Retrieved 21 August 2012.[permanent dead link]
^ a b c d e f g h i O'Connor, S.; Ali, E.; Al-Sabah, S.; Anwar, D.; Bergström, E.; Brown, K. A.; Buckberry, J.; Buckley, S.; Collins, M.; Denton, J.; Dorling, K. M.; Dowle, A.; Duffey, P.; Edwards, H. G. M.; Faria, E. C.; Gardner, P.; Gledhill, A.; Heaton, K.; Heron, C.; Janaway, R.; Keely, B. J.; King, D.; Masinton, A.; Penkman, K.; Petzold, A.; Pickering, M. D.; Rumsby, M.; Schutkowski, H.; Shackleton, K. A.; Thomas, J. (2011). "Exceptional preservation of a prehistoric human brain from Heslington, Yorkshire, UK". Journal of Archaeological Science. 38 (7): 1641. doi:10.1016/j.jas.2011.02.030.
^ a b "'Oldest human brain' unearthed at Heslington East". Nouse. 13 December 2008. Retrieved 20 August 2012.
^ a b Parry, Wynne (25 March 2011). "2,500-Year-Old Preserved Human Brain Discovered". LiveScience. Retrieved 20 August 2012.
^ Lewis, Simon (1 November 2010). "Was death of Iron Age man at Heslington East a ritual killing?". The Press. York. Retrieved 20 August 2012.
^ Owen, James (6 April 2011). "Ancient "Pickled" Brain Mystery Explained?". National Geographic News. Retrieved 20 August 2012.
^ Yirka, Bob (8 January 2020). "New clues to help explain how a 2600 year old brain survived to modern times". Phys Org. Retrieved 9 January 2020.

Further reading

Preserving Britain's Oldest Brain, by York Archaeological Trust.
O'Connor, S.; et al. (2011). "Exceptional preservation of a prehistoric human brain from Heslington, Yorkshire, UK". Journal of Archaeological Science 38 (7): 1641.
Correia Faria, E.; Dorling, K. M.; O'Connor, S.; Gardner, P. (2010). "Analysis of the Bronze Age Heslington Brain by FTIR Imaging and Hierarchical Cluster Analysis". In: SPEC2010 "Shedding Light on Disease: Optical Diagnosis for the New Millennium", 26 June – 1 July 2010, University of Manchester.
Banner & Witcoff Welcomes Experienced Litigation Attorney to Washington, D.C. Office Banner & Witcoff, Ltd., a national intellectual property law firm that procures, enforces and litigates intellectual property rights throughout the world, welcomes John R. Hutchins, a first-chair patent litigation attorney, as a shareholder in its Washington, D.C. office. "Our clients encounter increasingly complex and multifaceted legal and business challenges, and work closely with us to overcome these challenges," Banner & Witcoff president Charles L. Miller said. "John's addition to our standout team of litigators will strengthen our ability to provide successful results to our clients." Mr. Hutchins was previously a partner in the Washington, D.C. office of Kenyon & Kenyon LLP, which eventually became Hunton Andrews Kurth LLP. He handles litigation in federal district court and the International Trade Commission, appeals before the U.S. Court of Appeals for the Federal Circuit, client counseling, license drafting and negotiations, and patent post-grant proceedings. He has represented clients in significant cases involving consumer electronics, automotive components, medical devices, paper making processes and chemicals, inks, vaccines, pharmaceuticals, e-commerce, and toys and children's products. "Having spent 20 years litigating at an intellectual property firm, I am greatly looking forward to the opportunity to join Banner & Witcoff – a firm with the strongest reputation for handling IP matters and a deep bench of experienced attorneys," Mr. Hutchins said. Early in his career, Mr. Hutchins served as a law clerk to the Honorable Paul V. Gadola, U.S. District Judge for the Eastern District of Michigan. He earned his J.D., magna cum laude, from Harvard Law School in 1995, and his B.S.E., summa cum laude, in electrical engineering from the University of Michigan in 1992. About Banner & Witcoff, Ltd. 
A national intellectual property law firm with more than 100 attorneys and nearly 100 years of practice, Banner & Witcoff, Ltd., provides legal counsel and representation to the world's most innovative companies. Our attorneys are known for having the breadth of experience and insight needed to handle complex patent applications and to resolve difficult disputes and business challenges for clients across all industries and geographic boundaries. For more information, please visit www.bannerwitcoff.com. Please direct all media inquiries to Amanda Robert at (312) 463-5465 or arobert@bannerwitcoff.com.
Villa Castelli is a villa located in Ispra (Province of Varese), on Lake Maggiore.

History

The villa, which takes its name from the Castelli family, was built on the site of the old houses of the locality known as "Pozzolo". The area belonged to the Abbey of San Celso; the property later passed to Ermenegildo Piuri, and from him to Giuseppe Brughera and his brothers. Giuseppe's son Leopoldo built a large house before the middle of the 19th century. To the west loomed a large block of buildings owned by the lawyer Paolo Cattaneo, and on the other side stood the old group of houses of the Pozzolo and the Nicò property. Between the Brughera boundary wall and the Cattaneo farmhouse there was a small unpaved space of disputed ownership, occupied by a wood store belonging to Nicò. This opened a dispute with the municipality that dragged on through several years of litigation. On Leopoldo's death the house was divided among his heirs, his son Achille and his sister-in-law Paola Castelli. In 1876 the Brugheras sold the property to the Castelli brothers: Carlo, Giacomo, Pietro, Antonio, Guglielmo and Giovanni. Guglielmo became the owner of the house and immediately began fitting it out as his own residence. He lived in Milan, but returned regularly to spend his holidays in the village. Giacomo considerably enlarged the property, buying from the Nicolini family the Montebello and upper Montebello plots, with the farmhouse and the vineyard. After 1898, with the municipality's agreement, he demolished the Cattaneo building and extended the garden over that area, moving the hill lane to make the wider loop that can be walked today. Guglielmo's son Franco was also deeply attached to Ispra. Franco and his wife Camilla Sormani had a son, who inherited his grandfather Guglielmo's name. The boy died in Paris on 25 March 1919. His parents chose to commemorate their son with an institution for the young people of the village, financing the construction of the oratory.
The family then died out with Franco's death in February 1942. The villa and its park were sold to the brothers Pietro and Dante Cerane, industrialists from Busto Arsizio who tried to set up a lime-processing plant. The outbuilding was bought by Ottavio Leandri in 1950, and the Montebello farmhouse by the Binda family. The villa stood uninhabited for some years, until Mayor Ranci opened negotiations and in 1971 it was bought by the municipality to become the seat of the town hall. The old gate with the monogram GC (Guglielmo Castelli) now closes the entrance to the primary school.

Architecture

The villa was surrounded by a high wall and hidden by a thick screen of trees that protected the owners' privacy. The park was connected to the villa by a small bridge over the road, and climbed by paths cut into the rock and immersed in the green of the wood. Giuseppe Cairo was the creator of much of the garden. In the villa the owners lived between the ground floor and the first floor, where, next to the bedrooms, there was a veranda, later converted into a chapel. The large verandas, open at the front, were the most welcoming rooms. Across the road stood the farmhouse, which comprised the stable for the horses and the carriage house, the service rooms, the hayloft, the laundry, the cellar and the vat room, and finally the living quarters for the gardeners and caretakers. On the hill stood the small Ortiglio farmhouse, and further up the Montebello farmhouse with the vineyard.
namespace otc {

} // namespace otc

#endif
Q: Highcharts Column Chart

I'm trying to use Highcharts to create a graph similar to what's shown below. As is clear, the 1st column (100%) is a sum of all the others. The approach I thought of was to have invisible series below each of the individual categories to give them their raised look. The only thing that's holding me back is the thought of a situation where I might have to deal with, say, 30+ categories (possible in our application). Is there any simpler way to achieve the same results?

I can't post the image directly as I'm a new user. Sorry.
http://i.stack.imgur.com/RBWIf.png

A: I would suggest taking a look at the "low" property, which is shown here: http://fiddle.jshell.net/8JP8T/

var chart = new Highcharts.Chart({
    chart: {
        renderTo: 'container',
        type: 'column'
    },
    xAxis: {
        categories: ['Jan', 'Feb', 'Mar', 'Apr']
    },
    series: [{
        data: [{
            low: 1,
            y: 2
        }, {
            low: 2,
            y: 4
        }, {
            low: 0,
            y: 4
        }, {
            low: 4,
            y: 5
        }]
    }]
});

You'll still have to preprocess your data to get the correct y value and low value, but it seems doable?

A: For a column chart, for each point you can specify the y value and a corresponding low to create a floating column.

series: [{
    data: [
        {y: 29.9, low: 20},
        {y: 50, low: 20},
        {y: 100, low: 50}
    ]
}]

See this example on jsfiddle.
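As a sketch of the preprocessing step mentioned in the first answer: the helper below (illustrative only — the function name and data shape are assumptions, not part of the Highcharts API) turns raw category values into {low, y} pairs, producing a leading column spanning the full total and then one floating column per category, each starting where the previous one left off.

```javascript
// Hypothetical preprocessing helper: only the {low, y} point shape comes
// from Highcharts; toFloatingColumns itself is an assumed name.
function toFloatingColumns(values) {
  var total = values.reduce(function (a, b) { return a + b; }, 0);
  var points = [{ low: 0, y: total }]; // leading "100%" column
  var top = total;
  values.forEach(function (v) {
    // each category column hangs just below the current running top
    points.push({ low: top - v, y: top });
    top -= v;
  });
  return points;
}

// toFloatingColumns([50, 30, 20]) yields, for a total of 100:
// [{low:0,y:100}, {low:50,y:100}, {low:20,y:50}, {low:0,y:20}]
```

The result could then be passed as series: [{ data: toFloatingColumns(values) }] in the chart configuration, avoiding any invisible padding series even with 30+ categories.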
I know I should be able to do it by somehow manipulating my pipe family and its preferences, but I do not know how. I should add that I am doing sanitary piping using only 80 mm and 100 mm PVC pipes. Is it possible that in my lookup table (Bend - PVC - Sch 40 – DWV) I delete all the 90 deg and 60 deg rows of info? I tried it, but I get a message that something is wrong with my lookup formula. Any recommendation is greatly appreciated.

It's not possible to get Revit to place 2x 45 degree bends with a piece of pipe between them, but it is possible to make a bend family that looks like that. So as far as Revit is concerned it is a single bend, making it easy to move, trim etc., but you can then control the length of the pipe between the two bends with a parameter. If you only want to use specific angles, you can do that with the settings under Mechanical Settings -> Pipes -> Angles. Easier than modifying all your fittings.

Last edited by josephpeel; December 14th, 2015 at 09:01 AM.
https://chat.meta.stackexchange.com/transcript/message/7092793

2:05 PM
@SomewhatMemorableName Honestly, you sometimes have no choice :P I woke up being an RO in my second? third? month on SE :/
okay. 4th month on SE, 2nd in chat.

@Tinkeringbell vaguely the same...
I wonder if its how the system secretly picks mods ;p

@JourneymanGeek Just pick the RO's? :P

@Tinkeringbell maybe?

@Tinkeringbell Of 3 ROs in there, I was the only one who didn't get picked for mod

Though one of the ROs I ROified is a mod now so...

2:07 PM
So not just the RO's
Or I'm uniquely terrible

I dunno. At least my moderatorship didn't come with a message from Feeds

@Magisch You're not. But I recall you were on a break ;)

@Tinkeringbell funny thing that.
I was in india with poor internet when the SU mod election came up

@Tinkeringbell The only person that can answer that would be the person who chose the mods, and I'm pretty sure they wont

2:08 PM
I... quite literally typed out something like "Hi, I'm standing" as my election pitch, then edited it later ;)

@Magisch Well, from the e-mail I got, they looked at general site participation, meta participation etc. at the time, and the timezone I was in... For the rest, you'll have to trust me on the not being terrible ;)
@JourneymanGeek Glad I never had to do an election at the time ;)

hmm. I didn't realise timezones were considered, but I guess it makes sense.
@Tinkeringbell they're fun.
I won in the first round, but STV sometimes has nailbiters

AFAIK, all German.SE mods live in Germany (CET)
(yeah, that's obvious...)

@SomewhatMemorableName well out of people who volunteered
SR.SE had a american, me and a frenchman

looks great :)

2:28 PM
@JourneymanGeek Errr.. the ones I've seen look like they take a lot from the person nominating :)

wrong window ;p

Nice. Now I feel ignored :P

@Tinkeringbell well, sometimes

2:31 PM
stop posting chars that makes me clean my screen or curse: dead pixel!

.
MWAHAHAHAA

lol

@Tinkeringbell well, depends on the site, and the community?
I'd lost 2 elections on other sites ;p
(though it was mostly "geek! We need to make up the numbers!")
and you got T shirts for making it to the primaries in those days

Nice :)
I think I'll stick to IPS for now ;)
Like I said, elections sound like a huge energy drain

naw
They're relatively short
and outside the questionare and chat...

2:34 PM
@Tinkeringbell Depends on the site and the election. SO, maybe. Most other sites, probably not.

what you've done so far matters less than the election period itself
Some folks swear people vote based on reputation, or Reputation.....

@JourneymanGeek I guess they do. ;)

And betime
another early day for me

I still need to eat dinner. I'm hungry, but it's still way too early for dinner :(

Sandwich time then

2:37 PM
Those are for lunch. It's already been lunchtime.
I'll go scavenge, maybe there's some fruit lying around

what happened to the waffles? Since when the staff moved to sandwiches?
@Tinkeringbell just avoid parsley

@Derpy And tomatoes. Is parsley a fruit?
@Derpy It's that big push for healthier cantine food they have all around the world! :P

@Tinkeringbell old (probably urban legend too) teaching from scout years. They said parsley is poisonous to parrots and I never checked if that is true since I had exactly 0 parrots in my whole life.

sandwiches are the canonical meal between lunch and supper

and for @ShadowWizard , yep, for your joy you will never know if those were boy scouts or girl scouts.

2:46 PM
If sandwiches go between lunch and supper, than what do you eat for lunch or supper?
I found berries :D

@HDE226868 I think it depends on the person, too... I've been a candidate in three and I remember spending so much time and effort on writing my nomination, question answers, hanging out in chat... etc...

@Catija That's true, it does.
I only have one case of personal experience here.

@Tinkeringbell more sandwiches, but also chips and soup

Sometimes it's too hot for soup and I prefer salad.

cold soup can be pretty good
when I lived further south, eating a hearty meal on days when temperatures peaked above 110F would get troublesome
a cold tomato soup and chicken salad sandwich went down pretty well
Of course, pasta salad is always good

2:54 PM
I'll stick with bread and hagelslag for breakfast and lunch, and a hot meal for dinner. :)

As long as it's chicken salad and not tuna. :D
But it rarely gets that hot here.

what is hagelslag?

Sprinkles on toast.

like... cinnamon toast?

Like... literal sprinkles.

2:59 PM
oh. Like candy on toast.
well, I suppose that's something of an open-faced sandwich

@Catija We usually just put another slice of bread on top of that much hagelslag...
then it's a proper sandwich :)

@Catija Tuna salad sandwiches are best grilled with hot soup

Canned tuna is... not my thing. I never grew up eating it and even the smell makes me feel ill. One of my former coworkers would bring those tuna and cracker lunch packs for lunch in a small shared room and I had to leave whenever she was eating.

mmm... it's so good
makes my teeth itch just thinking about it
sardines on toast are also a good snack

Well... more for you to enjoy, I guess. Do you like your sardines packed in mustard?

3:04 PM
@Catija maybe she tried to send you a message ... asks on IPS.se

*sees someone asking a question on IPS*
*runs away*

viable

It's always nice to know that the IPS mods tend to run away from posts on their own site. ;)

It's how I deal with interpersonal situations all the time.

Woah~ IPS in the Tavern

3:10 PM
@Catija mmm, yes. Or I just put mustard on top of oil-packed sardines.

@Shog9 Oh yeah, and a bottle of chardonnay, outside a Spanish/Portuguese bar in the sun. Tuna salad made with those huge toms the size of softballs. . I gotta go raid fridge now:(

Softball sized tomatoes would be difficult to bite into on a sandwich.

@Catija lol, you are permitted to slice them:)

Ah. That makes sense. ;) Otherwise getting the correct ratio of tuna salad and tomato might be quite difficult... though, for me a 0:5 ratio would be acceptable. :P

@Catija Oh? Has Austin, TX moved to Germany, then? :)

3:27 PM
@MartinJames Hmmm? What do you mean? I pulled that off of a site called "stuff Dutch people like"... I only know about it because I have lots of Dutch friends. :D

3:48 PM
And muisjes

meises
We use that word instead here

Looks and sounds similar

@MartinJames Now I have a mental image of Martin dunking a giant tomato into a can of tuna, wrapping it all in a couple of slabs of bread, then extending his jaw to some inhuman angle like a horror movie monster and chomping down on resulting mass.

that is an accurate picture I would say

@Catija The Germans put sprinkles on eveything. Possibly not beer, but I would not be surprised at that:)

4:00 PM
I have sprinkles shaped like the Tardis :D

mmmmmmmmmm, steampunk cupcakes
Anyone have experience with / opinions about the Razer BlackWidow Chroma V2 keyboard?

4:15 PM
Disco keyboard ...

Fits, as I frequently listen to "Stayin' Alive" by the BeeGees while using meta. Chroma just adds to the effect.

If you're ever in need of some, I still have tons of the sprinkles sitting around... I'm assuming they have an indefinite shelf life. I think I got 3-4 bottles of each type.
@TimPost It looks really pretty?

@TimPost personal opinion is if you want an RGB board, there are better options out there than the Razer (whose QC went way downhill around 2012), with Ducky being probably your best bet

Is the keyboard cushion removable? I'm not a fan of them.

I don't think Duckys come with a wrist rest... not sure about the Razer

4:23 PM
Are there known issues with Fastly in England: meta.stackoverflow.com/q/370894/578411 ?

The Razer does. I was continuing my first comment. :)

@KevinB thanks, shared that

@Catija yeah it looks like you can take the rest off

4:43 PM
[ SmokeDetector | MS ] Blacklisted user: How to search for available username/nick in Pokemon Go? by resergemsgen on gaming.SE
[ SmokeDetector | MS ] Bad keyword in body, blacklisted website in body, potentially bad keyword in body: PHP malware in the style tag by Andrea Grippi on security.SE

user310756
5:02 PM
hm I've got a problem with a collapsing widget in the info sidebar of chats - collapsing the avatars. It doesn't happen in safari or incognito chrome. Disabling tampermonkey doesn't help, it seems to be when I'm logged in. anyone have any ideas what might be causing that?

Do you have Porkchat installed?

Doesn't happen in incognito is usually a sign that it is an extension.

Porkchat is an add-on, so disabling Tampermonkey wouldn't kill it.

user310756
@Catija yes! thanks. I totally forgot about porkchat

[ SmokeDetector | MS ] Bad keyword in link text in answer, potentially bad keyword in answer, blacklisted user: Display 1080p signal on external monitor while laptop screen is 720p by Jhon Perker on askubuntu.com

5:11 PM
Sure :)

@SmokeDetector k

5:25 PM
I got a meta 'welcoming' answer accepted.... what went wrong? Am I not being hostile enough?

Probably!

Hello! The Rate Limiting Guide mentions that there are no limits for comment deletion, but I got a message today saying that I cannot delete my comments from more than under twenty separate posts
(even though it doesn't say explicitly, I'm assuming that it meant: "further deletes are blocked for today" :P)
...so is this a mistake that needs to be corrected?

I need a post that explains moving comments to chat for regular users :) I know I've seen it somewhere...

5:42 PM
@GaurangTandon I would say the Rate Limiting Guide needs an update but maybe an SE Dev needs to confirm what the exact criteria are.

@Tinkeringbell It's mentioned briefly in the comments FAQ

@SonictheInclusiveHedgehog Thanks!

I mostly curated FAQs back in the anonymous editor days, I still do it from time to time
@GaurangTandon Edited the FAQ. Note that those are community-curated, so if no one bothers to edit them, they will become out-of-date. (This was my main driver to anonymously edit those)

@Magisch i'm deleting my old obsolete comments. Stuff like welcome to chem.se, or here's a mathjax guide, etc.
@SonictheInclusiveHedgehog thanks for the quick fix!
@rene i feel that the people who made such complex rules should also have published official documentation about those rules, it shouldn't really be needed to be done by the community imo... (though the community is great at it, things do slip by sometimes, like this one)

5:56 PM
Got mass declined flags on funny comments. I guess I did annoy some mods...

@GaurangTandon no, SE is an agile shop, the code is the documentation and these rate limiting settings are flexible, they can change at any moment. Documenting them would be a waste of time with little gain. When someone hits a limit it will get documented, as just happened. The process works.
@SomewhatMemorableName the mod tried to be funny

6:17 PM
46

What are the benefits of the ternary number system over the more traditional binary number system? Are there drawbacks to ternary with respect to binary, aside from the obvious of less adoption?

I remember that question...

It's always good for a chuckle.
Maybe in a better universe we would have ended up with ternary machines, but given the current reality... it's quite a stretch to expect them to inevitably become ubiquitous.

6:35 PM
[ SmokeDetector | MS ] Bad keyword with email in answer: How can I search comments on YouTube by Muhammad Steven Gaddafi on webapps.SE

I remember we have really low opinion of Iota. Not a good vouch for ternary machines.

Huh, I feel like some of the SO people could probably come up with some good talks for this: sxsw.com/conference/code-and-development

Needs one more close vote

Amidst all this "be nice" hullabaloo, let's bring back lmgtfy! Sure, it's savage and embarrassing when you're on the receiving end... but it's a beautiful one-time lesson. Sometimes your ego needs to take a hit for you to grow. Thoughts?

@canon If you're already googling, you might as well copy-paste some stuff into an answer and gain some rep....

7:14 PM
undo's minirant of the day: Suppose Stack Overflow is unwelcoming, and suppose we've decided to fix it. Further suppose that the 7% 'unwelcoming' comments stat holds (seems way too high to me, but maybe). Now, the next question you have to answer is this:
Are our policies wrong, or are we failing to enforce the policies?
If we fail to enforce policies for whatever reason, no amount of CoC writing or blog-posting is going to do anything more than appease Twitter.

i mean... even with proper enforcement... unwelcoming messages will still exist.
until said enforcement occurs

Indeed. If we're failing to enforce policies, then all we need is tooling, one way or another
(assuming moderators aren't declining stuff they shouldn't, which I see no evidence of)

@Undo Have had that but not for unwelcoming comment flags
One off mistakes, mostly

Yeah, nothing systemic

7:29 PM
@Undo wait, did the mission changed somewhere into Appease Twitter? Did I miss a memo?
7

[ SmokeDetector | MS ] Link at end of answer: Adding Hindi Fonts to Ubuntu Font Family by janthi on askubuntu.com

isn't that what caused this whole push?

@SmokeDetector k

@πάνταῥεῖ Zurück?
What happened?

@Fabby Nothing to be proud of and tell :3

7:31 PM
Funny thing happened: @JourneymanGeek thougth I was one of your sock puppet accounts...

@Fabby Beyond being german you definitely aren't XD

@πάνταῥεῖ I'm not even German: I just live there...

@Shog9 Happen to have a screenshot of how comments were presented in the Comment Evaluator 5000?

:D

hmm

7:33 PM
why 5000

@Jouney Should know better, that I've sworn off using socks.

@Shog9 because it's better than 4000

it's precisely 300 better than the Comment Evaluator 4700

then why not 6000

because it's not that good

7:33 PM
5000 was so last xpac

@Shog9 Comment evaluator 4242 would be even better...

@Fabby You're ever welcome from my side. I'm not following the current main stream discourse happening right now.

@πάνταῥεῖ Maybe I'm telepathically linked to you and I'm just a puppet without knowing...
@πάνταῥεῖ Define "mainstream discourse"...

That's OSI layer 42 then ...

@πάνταῥεῖ :D

7:36 PM
@Fabby Unwelcoming, as it feels for me.

Awesome, thanks :)

Cool!!!

Presumably with the rows repeated for multi-comment chains

@Shog9 Hey @MarkAmery.. notice the 'neutral' isn't in there :P So I'd have definitely hit that unsure we talked about yesterday ;)

7:37 PM
What does "unsure" mean in this context?

@πάνταῥεῖ lemme see how far away you live from me.

@Shog9 yup:

Does that just mean all neutral reactions get counted as unwelcoming?

i'd assume unsure was mapped to "unwelcoming"

Sweet

7:38 PM
@Shog9 Thank you!

@Magisch Neutral reactions probably fall under 'fine', or that would be the expectation

I'm almost never :) in reaction to a comment
mostly the second
I'd expect only the third to count as unwelcoming then

sure... but if it was that way, how would abusive and unwelcoming be separated.

Looks to me as -1 0 +1

7:39 PM
@Undo suppose Stack Overflow is unwelcoming. The next question we need to answer is... "What does that mean?"
Not, "FIX IT FIX IT FIX IT FIX IT!"

I still think unwelcoming is a useless metric
it varies so wildly per culture

People, Policy, Process, Product

Guy on the phone says "your software is broken and it sucks and I'm very angry" - you don't say, "I'm so sorry, I'll fix it for you, have a good day"

What is unwelcoming for me might be exactly what someone else is looking for

7:40 PM
unless he's paying you

ESPECIALLY if he's paying you.

@Shog9 users do this all the time btw

You're on Windows, right?

You usually have to coax the reason out of them

@Magisch 'wrong number'

7:41 PM
@Magisch I know. I've spent years of my life doing this. And every newb on the line tries that placating line. And every time, they go home burned out and the angry guy calls again tomorrow. If you want to actually fix shit, you gotta learn to talk 'em down and get some real info. Otherwise, you're just another slub being paid to get beat on.

@πάνταῥεῖ Indeed, currently house-sitting in Belgium.

@Shog9 Otherwise you're just wasting your and the user's time efficiently

@Shog9 Applause

What they say vs what actionable feedback you can extract from them is so far apart that it feels like you have to be a magician to know sometimes

Don't get me wrong, you can be sympathetic. Heck, you can be fake sympathetic; it doesn't matter. As long as you can get past the point where they're screaming and to the point where they give you enough info to figure out what problem(s) led them to this state, your time isn't wasted. But all the sympathy in the world doesn't matter if they gotta go back to using broken software tomorrow.

7:43 PM
(I get given the difficult customers because I don't eat their shit: I catch it and throw it back gently point out that what they're asking doesn't make sense)

Just two days ago I had someone call and say their connection was faulty and they were tired of kicking the pc to resolve it. Turns out they turned off caching in their browser and the load times were longer then expected. And then the network adapter chipped from being kicked so often so they got even more angry

@Magisch That person is its own problem...
You all could do with a little bit more love and less hate and more cuddling and less kicking on your planet...
>:-) ;-) >:-)
@πάνταῥεῖ 3h30-HOF
(now a bit more)

Problem solving by percussion can be effective. At ******, I was usually first in, and part of my job was to backheel two 19" racks as I passed on my way to the coffee machine.
Without that effort, the number of test failures due to old, dirty connectors increased dramatically.\n\n@MartinJames Are you hungry?\n\n@Fabby Go on, post whatever dish you have, I can take it..\n\n7:57 PM\n@MartinJames I just wanted to know what Chicken Magmaloo was...\n\n@Fabby Oh - it's like chicken vindaloo, but turbocharged. Sorta phall++\n\n:D :D :D\nLike my death-by-chocolate cake...\n\nChicken in Limoncello sauce probably :3\nin SO Close Vote Reviewers on Stack Overflow Chat, 9 mins ago, by \u03c0\u03ac\u03bd\u03c4\u03b1 \u1fe5\u03b5\u1fd6\n@Olaf Well, in Italy they sell Limoncello\n\n8:15 PM\nGood night all...\n\nNite:)\n\n8:44 PM\nOh dear MSO, if that answer isn't useful, then please write a competing one ....\nI guess I have fans\n\nSure you have!\n\n\\o\/\n@AdamLear did you see this bug report? Not sure if that is related to your earlier push to prod for the sticky preferences. The timing seems to match ...\n\nsigh probably\n\nsorry ...\n\nwow, that's ... mega broken. thanks, Ubuntu\n\n8:57 PM\nWhass up with the game? Do the brits hold?\n\nSE advices IE11 on Windows 7\n\n@\u03c0\u03ac\u03bd\u03c4\u03b1\u1fe5\u03b5\u1fd6 Croatia 2-1\n\nCool lil' southern easterns :3\n\n@Shog9 Well, apparently, \"Stack Overflow is unwelcoming\" may mean \"survey participants were unsure WTF 7% of comments were about when shown them without context.\" Apparently, though, your colleagues have already decided this is \"not good enough for us\".\n\n@MarkAmery That's also kind of a meaningless conclusion.\n\n9:00 PM\n\n@MarkAmery I mean... what do you do with something like that?\n> When in danger or in doubt, run in circles, scream and shout.\n\n@Shog9 With the results from the survey, and nothing more? I'm not sure you can do much with them at all, which is my point. With the criticisms of the survey? 
You can ask all the staff about how they interpreted the three levels, and you can figure out whether the interpretations were consistent, and then either build a better survey with less ambiguous questions or argue the case that we actually did get some meaningful data.\n\nok... Let's take a step back\nEven assuming you attribute \"unwelcoming\" entirely to comments... which, much as I dislike comments, is sketchy...\nDeciding that \"comments are 7% unwelcoming\" doesn't give you any productive approaches\nYou gotta figure out why those comments are seen as unwelcoming\n\nI agree - I just think we're not even that far in our analysis yet\n\nNo, we're not\nLook... we should've been doing this 4 years ago. Straight-up. We knew it, we actually started this same sort of analysis... And then we saw whatever the 2014 equivalent to \"7%\" was and decided, \"meh. not a problem\"\nI mean... You look back at the posts I was writing about the state of moderation tooling back then, and they're all full of embarrassing admissions that comments were fast becoming the single biggest problem. And we didn't know what to do about it.\n\n9:08 PM\nSo, there's a thing that I haven't seen mentioned at all since Jay's first welcoming blog post, that in my personal, totally non-rigorous observation of the site has always seemed like the single biggest problem with how askers get treated by commenters\n\neh, i just feel like the survey was flawed... and that basing any kind of action or discussion from it seems pointless.\n2\n\nAnd it's not capable of being picked up at all by any of the analysis done since Jay's post\n\nScaling sometimes gets into weird directions.\n\nAnd that's inconsistent community moderation standards\nThere's a mass of people with their own ad-hoc ideas of what makes a good question that they'll present as rules\n\nso... that's when you kinda have to admit that this is a multi-faceted problem\nI mean... 
A \"good question\" for Android managed code is not the same as a \"good question\" for R or for C# or for C++ on Windows.\n\n9:11 PM\nLike, you come along as a new user and you ask a question and somebody'll say \"What have you tried? You need to show us your code!\" and they vote to close for no MCVE. And so you paste your code into the question and then I come along and say \"You're mixing a how-to question with a debugging question here, and the result is too broad to be usefully answerable because there's basically two separate questions now\" and I vote to close as Too Broad.\n\nAnd the experience as a new user is terrible, because you've got different people trying to impose their contradictory visions of how questions should be, and the only concrete mechanism the site gives them to do so is to fuck you over by blocking anybody from answering your question\n\nThis is where I kinda think Stack Overflow is too large - has been too large - to really have this notion of one set of specific rules that apply the same to everything.\n\nI think framing these things as \"rules\" is maybe part of the problem in the first place\nThere's lots of ad-hoc community standards that get applied by people choosing the close-vote things that then get described as \"rules\"\n\nYeah. They didn't start out as rules. The problem is, when you need 5 people to all vote to close a question as \"unclear\" you start to need some way to get them to agree.\nBut let's be honest: 5 random people are not all equally-equipped to decide which questions are answerable.\nI mean... Who's equipped to decide which WinAPI questions are answerable? You? Me? 
Or Raymond Chen who has been answering WinAPI questions for the past couple decades?\nBut I get a binding vote, and he's gotta wait for 4 other people.\n\n9:16 PM\nI'm certainly of the view that lots of stuff gets closed for bullshit reasons\nSeveral of my handful of Meta answers have been me jumping in to defend a question's right to exist\n\nOf course. And sadly, the bigger this gets, the more it tends to fall on blind heuristics: folks voting because a question doesn't contain code, or because the title asks something vague that's then specified in detail in the body.\n\narrrrgh I hate those titles\nthey're not a reason for closure, but\nevery time I click through from Google to a \"How do I [do broad, generically applicable task]\" question\n\nit's a reason to edit. I mean, come on - it's trivial to edit those. Not as trivial as voting to close, but easily more trivial than 5 people voting to close.\n\nand actually get \"Here's 200 lines of code I wrote to do something tangentially related to [broad, generically applicable task], why am I getting an EccentricHighlySpecificException on line 66?\"\nI end up wanting to throttle the asker for their title choice\n\nI kinda suspect that as the major tags get more and more \"filled in\"... The reason for folks to participate at all is becoming more and more the niche stuff: specific libraries with lousy docs (because only two people are working on it and they're both fixing bugs); new platforms; legacy stuff that never got properly documented to begin with.\n\n9:20 PM\n@Shog9 I agree, and I edit them when I see them. They upset me most when careless gold badge holders use them as dupe targets for everything to do with [broad, generically applicable task], which happens way too often\n\nBut... 
If we keep fixating on the folks asking their Java debugging questions or whatever, we're not gonna serve the folks for whom Stack Overflow can actually be useful.\nDon't get me wrong: I'd love to have a resource to help folks become better debuggers. But... Pointing out the line where they forgot to initialize a variable 8000 times a day ain't doing that.\n\nthe \"minimal\" part of MCVE isn't enforced enough\nif it were, those questions would either be duplicates of the last time someone didn't initialise a variable, or would be closed for lack of MCVE\n\nit's awful hard to enforce. It's... pretty near impossible to teach.\nI mean... We did our best I think. But you could fill a couple of books with what MCVE is trying to convey in a few paragraphs.\nAnd then no one would read those books.\n\nof course, going back to the point about contradictory standards, while I will close a question for having much more code than is needed to illustrate the problem... other people will do the opposite\nthere's an irritating interpretation of \"practical problem related to software development\", or whatever the Help Center wording is, that implies that if you're using an artificially constructed example of a problem rather than a copy-paste from your real project, then your question is off-topic\npeople will look at the sort of minimal toy example that I want to see and ask \"But what are you really trying to do? Why do you need this?\" and close as Unclear\nand it has seemed to me for a long time that, no matter what standards we settled on, if we could somehow as a community all agree to enforce the same standards we'd be a whole lot less oppressive to new users\nrather than the current status quo where Mother berates you for speaking out of turn and Father berates you for not being assertive enough\nbut nothing your colleagues are doing will capture this point. It's not about tone; it's not about the fine nuances of word choice in comments. 
It's about the actual content that's being expressed.\n\nbrb; sitting in on an interview regarding the proposed code of conduct\n\n9:32 PM\n\ud83d\udc4d\nwhen you come back you can tell me whether I'm gonna get banned once it gets implemented :P\n\n9:50 PM\n@canon My biggest problem with comments complaining that something is Googleable is that they're frequently wrong. I personally have no issue with This is trivially Googleable. The first result for \"frobnicate widget in fooscript\" is www.example.com\/relevant\/docs which tells you to call the .swizzle() method. The trouble is with commenters who stop at the end of the first sentence, and leave the same comment even when there are no relevant results for any obvious search.\n\n@MarkAmery I used to comment 'Copying your exact title into a popular search engine gives 'About 32,000,000 results', with several links to SO answers on the first page that explain your problem'. I no longer bother, given it's either rude or pointless or both. Down\/close vote only, every time now.\n\nWell, take Shog's joke last night... I asked how you get an editor's address and he responded \"&editors\"... which made no sense for me but I would have no way to know the terminology to get any sort of background to understand that joke.\n\nIt was a good joke though :P\n\nI didn't even know what family of languages uses that to start with... let alone how to search for &.\n\nYeah, it wasn't searchable\nI would be cautious about generalising too much from that, though\nMost of programming is not just about understanding what a symbol made out of punctuation characters means. Many things are searchable.\n\n9:58 PM\nI see the same thing all the time on ELL... language learners sometimes don't know how to find what they need because getting the correct definition of \"run\" or \"do\"... words that have so many different definitions... 
it's impossible unless you specifically know the right part of speech you're looking at.","date":"2021-05-07 05:16:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3242020905017853, \"perplexity\": 3995.709751231285}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243988774.96\/warc\/CC-MAIN-20210507025943-20210507055943-00178.warc.gz\"}"}
A report from the Urban Institute shows that U.S. health care spending is growing more slowly than was originally projected when the Affordable Care Act was passed. The study predicts that between 2014 and 2019, the U.S. will spend $2.6 trillion less on health care than was initially anticipated when the ACA was passed in 2010. The report can be accessed at http://www.rwjf.org/content/dam/farm/reports/issue_briefs/2016/rwjf429930.
Study: Minnesota is America's most charitable state

Minnesotans are generous with their time and money. The results are in, and Minnesota is the highest-ranked state when it comes to charitable giving. Yes, Minnesotans are more generous with their time and money than people in any other state, according to an annual study by WalletHub.

WalletHub obtained data looking at two specific areas, volunteering and charitable giving, to determine which states were more generous than others. Within these two categories were 18 different metrics used to determine the rankings, such as the number of volunteer hours per capita, the share of income donated to charity, the proportion of the population that donates money, and the number of food banks per capita.

Minnesota scored particularly highly when it comes to volunteering. The state ranked 1st overall in the "Volunteer and Service" category, and 4th in the "Charitable Giving" ranking. Minnesota had the 3rd-highest volunteer rate, which includes a high rate of volunteering among the millennial population. It also scored highly for its charity regulations and community service requirements.
Q: SharePoint 2010: create and add web parts automatically to a site when deploying a WSP

I'm using SharePoint 2010 and have developed web parts in Visual Studio 2010. I already deploy my web application to the server as a WSP, but I still have to create and add the web parts manually through the SharePoint site. Is there any way to deploy the application and automatically create and add all the web parts to the site?

A: Generally, web parts are added to pages using features. Basically, you create one site-collection-scoped feature that deploys the web part files to the web part gallery, and a web-scoped feature that adds these web parts to pages (for example, the default site page).
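For illustration, a web-scoped feature that provisions a page with a web part already placed in one of its zones typically uses an elements manifest along these lines (the page name, zone ID, and web part definition below are placeholders, not taken from the question):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Provision a page and drop a web part into one of its zones. -->
  <Module Name="Pages" Url="SitePages">
    <File Url="Home.aspx" Type="GhostableInLibrary">
      <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="1">
        <![CDATA[
          <!-- the .webpart / .dwp definition of the deployed web part goes here -->
        ]]>
      </AllUsersWebPart>
    </File>
  </Module>
</Elements>
```

A separate site-collection-scoped feature (a `Module` targeting the `_catalogs/wp` gallery) deploys the `.webpart` files themselves; activating both features during WSP deployment gives you the "no manual steps" behavior asked about.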
Q: wordpress plugin MVC page structure I'm working on a wordpress plugin that takes the whole page and is about planing a trip. The plugin spread out and contains several modules and more than 10 "views" (Booking, Billing, Register, My Profile, My Bookings, My comments, etc.). I have a strong OOP and MVC background but this plugin was originally created on a single template with everything loaded with ajax and not in a organized way :) * *What is good practice for organizing big plugins (semi small sites) in wordpress? *Is there a way to create direct links for the modules view files? something like: domain.com/blog/wp/plugins/my_plugin/profile.php Bounty: I'm looking for guidelines here from people with experience. A: Firstly, it's not easy to do, and it's harder to stick to as development progresses. I'll try to answer your bullet points first, then try to talk about some architecture stuff I try to adhere to. * *I organize them as close to normal as possible. So I usually end up with folders for Models, Controllers, Views. Try to write your application as much as possible in the same manner you would for anything else. *Use plugin_url(). If you're working on a project that won't be distributed (i.e. not a publicly released plugin), you've got some advantages because you can load in outside packages from Composer without worrying about conflicts from other places. So wherever possible, I'd advise offloading stuff to Composer. I'm not a fan of how PHP implemented Namespaces, but I'm a huge fan of using them in conjunction with autoloading. You'll definitely make things easier on yourself if you use some form of Autoloading, even if it's not used with Namespaces. Since WordPress works off of functional hooks, unless you (over?)engineer a lot of stuff, you're always going to end up with a bunch of hooks all over the place. Generally, my advice there is to try and keep those together in a file, and to never put hooks inside classes, especially constructors. 
Keep stuff in logical groups. The trick really is to minimize the number of points where you're actually interacting with WordPress, and everywhere else to essentially write your code as you normally would, with decent design patterns and the like. You'll have to have certain points of contact (like the hooks and such) where you'll probably find yourself making some concessions to WordPress, but even there you can mitigate it by loading object methods as hook callbacks, and using those as jumping-off points to a "normal" application. I've been interested in this problem for a while. I've got a couple ongoing projects in this area. One thing I threw together was for interfacing with GravityForms, and it's on github. It's really not complicated, but it might help explain some of how I've got about solving the problem. I'm running out of specifics to add, but please feel free to drop me a line if you like. As I said, I'm really interested in solving this problem, and I think that if WordPress lasts and continues to be as popular as it is today, we'll have better solutions coming around. I hope this is helpful! EDIT: A more concrete example I'll point some stuff out in the code I shared originally. It's sort of specialized, but you can use the principles for any hook-based functionality. As you can see here, I'm invoking a method of class GravityFormsHooks\Loader to handle hooking into objects. In GravityFormsHooks\Loader, I'm calling another static method on that class to actually execute the hook. This example will take either an action or filter, but it's tailored to Gravityforms specifically, so YMMV. Essentially what this GravityFormsHooks\Loader::hook() method does is instantiate the class we're hooking into, and generate the hook as normal. The class I'm calling from the main plugin file is GravityFormsHooks\Forms\Form. Note that any method you hook onto MUST be declared as public. 
If we're going to shoehorn ourselves into an MVC paradigm, this method here would be your controller. From there, you can jump off to injecting models, a template engine, all manner of cool stuff. As I mentioned in my original post, I try to keep my points of contact with WordPress to an absolute minimum. I don't mean that you should write APIs doing stuff WordPress already has APIs for, just that your hooks should be centralized and minimized. It really is a useful separation of concerns, and as your application grows it'll help you to manage the complexity a lot more easily. The examples I provided should work pretty well as a Hook Controller, with minimum modifications to remove some of the more specialized GravityForms stuff. Let me know if you've got any other questions.
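To make the "centralize your hooks, jump off into normal objects" idea concrete, here is a minimal sketch of that pattern. It is my own simplified version, not the linked GitHub code, and it assumes WordPress's `add_action()` is available, so it won't run outside WordPress:

```php
<?php
// Central hook loader: every WordPress touch point is declared in one place,
// and each callback instantiates a class and calls a public method on it.
class HookLoader
{
    public static function action($hook, $class, $method, $priority = 10, $acceptedArgs = 1)
    {
        add_action($hook, function () use ($class, $method) {
            $instance = new $class();  // controller-style jumping-off point
            return call_user_func_array(array($instance, $method), func_get_args());
        }, $priority, $acceptedArgs);
    }
}

// In the main plugin file -- the only file that talks to WordPress directly:
HookLoader::action('init', 'MyPlugin\\Controllers\\FormController', 'register');
```

From `FormController::register()` onward you are in "normal" application territory and can inject models, a template engine, and so on, exactly as described above.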
# Can we prove the law of total probability for continuous distributions?

If we have a probability space $(\Omega,\mathcal{F},P)$ and $\Omega$ is partitioned into pairwise disjoint subsets $A_{i}$, with $i\in\mathbb{N}$, then the law of total probability says that $P(B)=\sum_{i\in\mathbb{N}}P(B|A_{i})P(A_{i})$. This law can be proved using the following two facts: \begin{align*} P(B|A_{i})&=\frac{P(B\cap A_{i})}{P(A_{i})}\\ P\left(\bigcup_{i\in \mathbb{N}} S_{i}\right)&=\sum_{i\in\mathbb{N}}P(S_{i}) \end{align*} where the $S_{i}$'s are a pairwise disjoint, $\textit{countable}$ family of events in $\mathcal{F}$.

However, if we want to apply the law of total probability to a continuous distribution $f$, we have (like here): $$P(A)=\int_{\Omega}P(A|x)f(x)dx$$ which is the law of total probability but with the summation replaced by an integral, and $P(A_{i})$ replaced with $f(x)dx$. The problem is that we are conditioning on an $\textit{uncountable}$ family. Is there any proof of this statement (if true)?

• Fortunately it is true; otherwise we would also have trouble with conditional expectation for continuous variables. I was not able to find the proof in my measure theory book this morning, skimming through it, but going the long way, you may be convinced by the proof of the law of total expectation together with the relation that $P(A\mid B) = E[1_A\mid B]$. – Therkel Nov 29 '16 at 7:54

Think of it like this: Suppose you have a continuous random variable $X$ with pdf $f(x)$. Then $$P(A)=E(1_{A})=E[E(1_{A}|X)]=\int E(1_{A}|X=x)f(x)dx=\int P(A|X=x)f(x)dx.$$

• The event $\{X = x \}$ has 0 probability for a continuous random variable $X$, so your $P(A | X = x) = P(A, X=x)/P(X = x)$ is not well defined. You can condition on any event $B \in \sigma(X)$, i.e. on any event from the sigma algebra generated by $X$. – baibo Feb 11 at 21:33
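The continuous law of total probability is also easy to check numerically. Below is a small self-contained sketch (my own illustration, not from the thread): take $X$ and $Z$ independent standard normals and $A = \{X + Z > 0\}$, so that $P(A \mid X = x) = \Phi(x)$ and, by symmetry, $P(A) = 1/2$ exactly; the integral $\int \Phi(x)\varphi(x)\,dx$ should reproduce that value.

```python
import math

def normal_pdf(x):
    """Density f(x) of X ~ N(0, 1)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def normal_cdf(x):
    """P(Z <= x) for Z ~ N(0, 1), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def total_probability(cond_prob, pdf, lo=-8.0, hi=8.0, n=100_000):
    """Midpoint-rule approximation of P(A) = integral of P(A|X=x) f(x) dx."""
    h = (hi - lo) / n
    return h * sum(
        cond_prob(lo + (i + 0.5) * h) * pdf(lo + (i + 0.5) * h) for i in range(n)
    )

# A = {X + Z > 0} with X, Z independent N(0,1): P(A | X = x) = P(Z > -x) = Phi(x),
# so the integral should come out at 1/2 (up to quadrature and tail error).
p_a = total_probability(normal_cdf, normal_pdf)
```

The truncation to $[-8, 8]$ and the midpoint rule introduce errors far below $10^{-6}$ here, so `p_a` agrees with $1/2$ to many digits.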
\section{Introduction} Sparse models such as Lasso \citep{tibshirani1996regression} and group Lasso \citep{yuan2006group} have been widely studied in the areas of statistics and machine learning, and are used for various applications such as compressed sensing \citep{donoho2006compressed} and biomarker discovery \citep{climente2019block}, to name a few. Although sparse models can be formulated as a simple convex optimization problem, the computational cost can be large if the numbers of samples and dimensions are extremely large. To tackle this problem, a technique called safe screening has been introduced \citep{ghaoui2010safe} for Lasso problems. Specifically, it eliminates variables that are guaranteed to be zero in the Lasso solution before solving the original Lasso optimization problem. Many safe screening methods have been proposed for various problems \citep{ghaoui2010safe,ogawa2013safe, wang2015lasso,liu2014safe,Xiang2017screening}. These are called sequential screening rules because they require the solution to a more strongly regularized problem. A recent technique used to eliminate variables through an estimated solution in an iterative solver, called dynamic screening, has been proposed \citep{bonnefoy2015dynamic}. In particular, Gap Safe \citep{fercoq2015mind, ndiaye2015gap}, a dynamic screening framework is widely used owing to its generality and efficiency \citep{ndiaye2017gap,shibagaki2016simultaneous,bao2020fast,raj2016screening,ndiaye2020screening}. More specifically, Gap Safe efficiently screens variables by using the dual form of the original problems, where the screening is characterized by properly designing the dual safe region. For Lasso, two simple region-based approaches exist: Gap Safe Sphere and Gap Safe Dome \citep{fercoq2015mind}. In this paper, we propose a dynamic safe screening algorithm that is stronger than either Gap Safe Sphere or Gap Safe Dome for the \emph{Lasso-Like} problem, which includes norm-regularized least squares. 
To this end, we first propose a general screening framework based on the Fenchel-Rockafellar duality and then derive \emph{Dynamic Sasvi}, a strong safe screening rule for \emph{Lasso-like} problems. Our framework can be regarded as a generalization of the Gap Safe framework, and thus we can derive Gap Safe Sphere and Gap Safe Dome simply using our results. Moreover, thanks to this generalization, we can use a strong problem adaptive inequality. Interestingly, the derived screening rule for \emph{Lasso-like} problems can be seen as a dynamic variant of the safe screening with variational inequalities (Sasvi) \citep{liu2014safe}, a sequential screening rule for Lasso. Therefore, we call this dynamic Sasvi. Unlike the original Sasvi, dynamic Sasvi does not require an exact solution to the problem with another hyper-parameter and hence operates safely in practice. Moreover, we propose the use of dynamic enhanced dual polytope projections (EDPP) \citep{wang2015lasso}, which are a relaxation of dynamic Sasvi by introducing a minimum radius sphere. We show both theoretically and experimentally that the screening power and computational costs of Dynamic Sasvi and Dynamic EDPP compare favorably with those of other state-of-the-art Gap Safe methods. \textbf{Contribution:} The contributions of our paper are summarized as follows. \begin{itemize} \item We propose a flexible screening framework based on Fenchel-Rockafellar duality, which is a generalization of the Gap Safe framework \citep{ndiaye2017gap}. \item We propose two novel dynamic screening rules for norm-regularized least squares, which are a dynamic variant of Sasvi \citep{liu2014safe} and a dynamic variant of EDPP. \item We show that Dynamic Sasvi eliminates more features and increases the speed of the solver in comparison to Gap Safe \citep{fercoq2015mind,ndiaye2017gap} both theoretically and experimentally. 
\end{itemize} \section{Preliminary} In this section, we first formulate the problem and introduce the key techniques used in this study. \subsection{Notation} Given $h:\mathbb{R}^m \to [-\infty,\infty]$, the domain of $h$ is defined by \[ \mathrm{dom}(h) := \{ {\boldsymbol{z}}\in\mathbb{R}^m \mid |h({\boldsymbol{z}})|<\infty \} \] and $h^\star:\mathbb{R}^m \to [-\infty,\infty]$, the Fenchel conjugate of $h$, is defined by \[ h^\star({\boldsymbol{v}}) := \sup_{{\boldsymbol{z}}\in\mathbb{R}^d} {\boldsymbol{v}}^\top{\boldsymbol{z}} - h({\boldsymbol{z}}). \] If $h$ is proper, the Fenchel-Young inequality \begin{equation} h({\boldsymbol{z}}) + h^\star({\boldsymbol{v}}) \ge {\boldsymbol{v}}^\top{\boldsymbol{z}} \label{eq:fenchel_young} \end{equation} can be proven directly from the definition of the Fenchel conjugate. The subdifferential of a proper function $h:\mathbb{R}^m \to (-\infty,\infty]$ at ${\boldsymbol{z}}$ is given as \begin{align*} & \partial h({\boldsymbol{z}}) \\ := & \{ {\boldsymbol{v}}\in\mathbb{R}^m \mid \forall{\boldsymbol{w}}\in\mathbb{R}^m \ {\boldsymbol{v}}^\top({\boldsymbol{w}}-{\boldsymbol{z}}) + h({\boldsymbol{z}}) \le h({\boldsymbol{w}}) \}. \end{align*} The next proposition is important for driving Safe-screening algorithms. \begin{prop} \label{prop:subdiff_and_conjugate} Assume that $h:\mathbb{R}^m\to(-\infty,\infty]$ is a proper lower semicontinuous convex function and ${\boldsymbol{z}},{\boldsymbol{v}}\in\mathbb{R}^m$. We then have \begin{align*} {\boldsymbol{v}}\in\partial h({\boldsymbol{z}}) & \iff h({\boldsymbol{z}}) + h^\star({\boldsymbol{v}}) = {\boldsymbol{v}}^\top{\boldsymbol{z}} \\ & \iff {\boldsymbol{z}}\in\partial h^\star({\boldsymbol{v}}). \end{align*} \end{prop} See \citep{bauschke2011convex} Section 16 for the proof. 
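For instance, for the scaled $\ell_1$ norm $h({\boldsymbol{z}}) = \lambda{\|{\boldsymbol{z}}\|}_1$ with $\lambda > 0$, the Fenchel conjugate is the indicator of the $\ell_\infty$ ball of radius $\lambda$:
\[
h^\star({\boldsymbol{v}}) = \sup_{{\boldsymbol{z}}\in\mathbb{R}^m} {\boldsymbol{v}}^\top{\boldsymbol{z}} - \lambda{\|{\boldsymbol{z}}\|}_1 =
\begin{cases}
0 & \text{if } {\|{\boldsymbol{v}}\|}_\infty \le \lambda, \\
\infty & \text{otherwise},
\end{cases}
\]
since each coordinate term $v_i z_i - \lambda |z_i|$ is unbounded above whenever $|v_i| > \lambda$ and is maximized at $z_i = 0$ otherwise.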
For a convex set $C\subset\mathbb{R}^m$, the relative interior of $C$ is defined by \begin{align*} & \mathrm{relint}(C) \\ := & \{ v \in C \mid \forall w \in C\ \exists \epsilon > 0\ \mathrm{s.t.}\ v + \epsilon (v-w) \in C \}. \end{align*} \subsection{Problem Formulation} In this study, we consider an optimization problem formulated as \begin{equation} \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ f({\boldsymbol{X}}{\boldsymbol{\beta}}) + g({\boldsymbol{\beta}}), \label{eq:primal_problem} \end{equation} where ${\boldsymbol{\beta}}\in\mathbb{R}^d$ is the optimization variable, ${\boldsymbol{X}}\in\mathbb{R}^{n\times d}$ is a constant matrix, and $f:\mathbb{R}^n\to(-\infty,\infty]$ and $g:\mathbb{R}^d\to(-\infty,\infty]$ are proper lower semicontinuous convex functions. We assume \[ \exists {\boldsymbol{\beta}} \in \mathrm{relint}(\mathrm{dom}(g)) \ \mathrm{s.t.} \ {\boldsymbol{X}}{\boldsymbol{\beta}} \in \mathrm{relint}(\mathrm{dom}(f)) \] and the existence of the optimal point, i.e., \[ \exists \hat{{\boldsymbol{\beta}}} \in \mathrm{dom}(P) \ \mathrm{s.t.} \ P(\hat{{\boldsymbol{\beta}}}) = \inf_{{\boldsymbol{\beta}}\in\mathbb{R}^d} P({\boldsymbol{\beta}}), \] where $P:\mathbb{R}^d\to\mathbb{R}$ is defined as $P({\boldsymbol{\beta}}) = f({\boldsymbol{X}}{\boldsymbol{\beta}}) + g({\boldsymbol{\beta}})$. Note that we have not assumed the uniqueness of the solution. Moreover, we focus on the cases where $g$ induces sparsity. Although all theorems in this paper hold without this assumption, no variables can be eliminated unless $g$ induces sparsity. This class of optimization problems is broad; its most popular example is Lasso \citep{tibshirani1996regression}: \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ \frac{1}{2}{\|{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\|}_2^2 + \lambda{\|{\boldsymbol{\beta}}\|}_1.
\] Many extensions of Lasso, including Group-Lasso \citep{yuan2006group}, Elastic-Net \citep{zou2005regularization}, and sparse logistic regression \citep{meier2008group}, are in this class. Note that non-convex extensions such as SCAD \citep{fan2001variable}, Bridge \citep{frank1993statistical}, and MCP \citep{zhang2010nearly} do not satisfy this assumption. Another example of the problem in Eq.~\eqref{eq:primal_problem} is the dual problem of a support vector machine (SVM) \citep{cortes1995support}. The dual problem of SVM can be formulated as follows: \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d : {\boldsymbol{0}}\le{\boldsymbol{\beta}}\le{\boldsymbol{1}}}{\mathrm{minimize}}~~ \frac{1}{2}{\|{\boldsymbol{X}}{\boldsymbol{\beta}}\|}_2^2 - {\boldsymbol{1}}^\top{\boldsymbol{\beta}}. \] The dual problem of support vector regression (SVR) \citep{smola2004tutorial} is also a target problem. Note that we cannot eliminate any variables of the primal problem of standard SVM and SVR owing to a lack of sparsity. However, screening methods are available for the primal problem of the feature-sparse variants of SVM and SVR \citep{ghaoui2010safe,shibagaki2016simultaneous}. \subsection{Dual Problem} To derive a safe screening rule for the optimization problem in Eq.~\eqref{eq:primal_problem}, the Fenchel-Rockafellar dual formulation plays an important role. \begin{theo} (Fenchel-Rockafellar Duality) \label{theo:fenchel_rockafellar} If all assumptions for the optimization problem \eqref{eq:primal_problem} are satisfied, we have the following: \begin{equation} \min_{{\boldsymbol{\beta}}\in\mathbb{R}^d} f({\boldsymbol{X}}{\boldsymbol{\beta}}) + g({\boldsymbol{\beta}}) = \max_{{\boldsymbol{\theta}}\in\mathbb{R}^n} - f^\star(-{\boldsymbol{\theta}}) - g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}). \label{eq:dual_problem} \end{equation} \end{theo} The proof of Theorem \ref{theo:fenchel_rockafellar} is given in the Appendix.
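To make the duality concrete, the following sketch (illustrative only; the data, $\lambda$, and the dual-feasible rescaling are our own choices) evaluates both sides of Eq.~\eqref{eq:dual_problem} for Lasso, using the conjugates derived later in Eqs.~\eqref{eq:dual_f_sqloss} and \eqref{eq:dual_g_normlike} specialized to $g=\lambda{\|\cdot\|}_1$, and confirms weak duality:

```python
# Weak-duality check for Lasso:  f(z) = (1/2)||y - z||^2 and
# g(beta) = lam * ||beta||_1 give f*(-theta) = (1/2)||theta||^2 - y^T theta
# and g*(X^T theta) = 0 iff ||X^T theta||_inf <= lam (else +infinity).
# Any dual-feasible theta therefore satisfies D(theta) <= P(beta).
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 20, 5, 0.5
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def primal(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.abs(beta).sum()

def dual(theta):
    # D(theta) = -f*(-theta), valid on the feasible set ||X^T theta||_inf <= lam
    return -0.5 * theta @ theta + y @ theta

beta = rng.standard_normal(d)            # an arbitrary primal point
res = y - X @ beta
# rescale the residual so that it becomes dual feasible
theta = res * min(1.0, lam / np.abs(X.T @ res).max())
```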
Let us denote $- f^\star(-{\boldsymbol{\theta}}) - g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})$ by $D({\boldsymbol{\theta}})$. For primal/dual solutions, we know many conditions that are equivalent to the optimality. Herein, we provide a list of such conditions for convenience. \begin{prop} (Optimal Condition) \label{prop:optimal_condition} If all assumptions for the optimization problem \eqref{eq:primal_problem} are satisfied, the following are equivalent: \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $\hat{{\boldsymbol{\beta}}} \in \mathop{\mathrm{argmin\,}}_{{\boldsymbol{\beta}}\in\mathbb{R}^d} P({\boldsymbol{\beta}}) \land \hat{{\boldsymbol{\theta}}} \in \mathop{\mathrm{argmax\,}}_{{\boldsymbol{\theta}}\in\mathbb{R}^n} D({\boldsymbol{\theta}})$ \item $P(\hat{{\boldsymbol{\beta}}}) = D(\hat{{\boldsymbol{\theta}}})$ \item $f({\boldsymbol{X}}\hat{{\boldsymbol{\beta}}}) + f^\star(-\hat{{\boldsymbol{\theta}}}) = -\hat{{\boldsymbol{\theta}}}^\top{\boldsymbol{X}}\hat{{\boldsymbol{\beta}}} = - g(\hat{{\boldsymbol{\beta}}}) - g^\star({\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}})$ \item $-\hat{{\boldsymbol{\theta}}} \in \partial f({\boldsymbol{X}}\hat{{\boldsymbol{\beta}}}) \land {\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}} \in \partial g(\hat{{\boldsymbol{\beta}}})$ \item ${\boldsymbol{X}}\hat{{\boldsymbol{\beta}}} \in \partial f^\star(-\hat{{\boldsymbol{\theta}}}) \land \hat{{\boldsymbol{\beta}}} \in \partial g^\star({\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}})$ \end{enumerate} \end{prop} \noindent\textbf{(Proof)} (a) $\iff$ (b) is directly derived from the strong duality. (b) $\iff$ (c) is derived from the Fenchel-Young inequality \eqref{eq:fenchel_young}. (c) $\iff$ (d) $\iff$ (e) are derived from Proposition \ref{prop:subdiff_and_conjugate}. 
\hfill$\Box$\vspace{2mm} \subsection{Relationship of Dual Safe Region and Screening} In this section, we show that we can eliminate some features by constructing a simple region that contains $\hat{{\boldsymbol{\theta}}}$. \begin{theo} \label{theo:screening_and_dual_safe} Assume that all assumptions for the optimization problem \eqref{eq:primal_problem} are satisfied. Let $\hat{{\boldsymbol{\beta}}}$ be the primal optimal point. Assume that the dual optimal point $\hat{{\boldsymbol{\theta}}}$ is within the region ${\mathcal{R}}$. Then, \[ \hat{{\boldsymbol{\beta}}} \in \bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}). \] \end{theo} \noindent\textbf{(Proof)} According to Proposition \ref{prop:optimal_condition}, $\hat{{\boldsymbol{\beta}}} \in \partial g^\star({\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}}) \subset \bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})$. \hfill$\Box$\vspace{2mm} Theorem \ref{theo:screening_and_dual_safe} provides a general method for feature screening. A simple example is the following corollary. \begin{coro} Consider an optimization problem, i.e., Eq.~\eqref{eq:primal_problem} with $g({\boldsymbol{\beta}})={\|{\boldsymbol{\beta}}\|}_1$. Assume that $\hat{{\boldsymbol{\theta}}}\in{\mathcal{R}}$. We then have \[ \max_{{\boldsymbol{\theta}}\in{\mathcal{R}}}|{\boldsymbol{x}}_i^\top{\boldsymbol{\theta}}| < 1 \implies \hat{{\boldsymbol{\beta}}}_i=0. \] \end{coro} \noindent\textbf{(Proof)} By definition of $g$, we have $\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}) \subset \{ {\boldsymbol{\beta}} \mid {\boldsymbol{\beta}}_i=0 \} \iff |{\boldsymbol{x}}_i^\top{\boldsymbol{\theta}}| < 1$. 
When $\max_{{\boldsymbol{\theta}}\in{\mathcal{R}}}|{\boldsymbol{x}}_i^\top{\boldsymbol{\theta}}| < 1$, we have $\hat{{\boldsymbol{\beta}}} \in \bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}) \subset \{ {\boldsymbol{\beta}} \mid {\boldsymbol{\beta}}_i=0 \}$ by Theorem \ref{theo:screening_and_dual_safe}. \hfill$\Box$\vspace{2mm} Note that the computational cost of $\bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})$ depends on the simplicity of $g$ and ${\mathcal{R}}$. The key challenge of screening is to construct a simple yet narrow region ${\mathcal{R}}$. Many regions have been proposed for various problems. In the next section, we provide a general framework for constructing a safe region. \section{General Framework for Constructing Safe Region} Herein, we propose a general framework for constructing a dual region that contains the solution of the optimization problem in Eq.~\eqref{eq:dual_problem}. Our framework consists of a general lower bound and a problem-adaptive upper bound of the optimal value. Hence, in certain situations, we can derive a narrower region than frameworks that rely on a generic upper bound. The general lower bound is given in the next theorem. \begin{theo} \label{theo:opt_lower_bound} Consider the optimization problem in Eq.~\eqref{eq:dual_problem} and assume that $f^\star$ is $L$-strongly convex ($L\ge0$). Let $\hat{{\boldsymbol{\theta}}}$ be the solution to \eqref{eq:dual_problem}. Then, for $\forall \tilde{{\boldsymbol{\theta}}} \in \mathbb{R}^n$, we have \begin{equation} l(\hat{{\boldsymbol{\theta}}};\tilde{{\boldsymbol{\theta}}}) \le D(\hat{{\boldsymbol{\theta}}}), \label{eq:opt_lower_bound} \end{equation} where \[ l({\boldsymbol{\theta}};\tilde{{\boldsymbol{\theta}}}) = \frac{L}{2}{\|{\boldsymbol{\theta}}-\tilde{{\boldsymbol{\theta}}}\|}_2^2 + D(\tilde{{\boldsymbol{\theta}}}).
\] \end{theo} \noindent\textbf{(Proof)} According to Proposition \ref{prop:optimal_condition}, ${\boldsymbol{X}}\hat{{\boldsymbol{\beta}}} \in \partial f^\star(-\hat{{\boldsymbol{\theta}}})$ and $\hat{{\boldsymbol{\beta}}} \in \partial g^\star({\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}})$ hold. Because $f^\star$ is $L$-strongly convex and $g^\star$ is convex, for $\forall \tilde{{\boldsymbol{\theta}}} \in \mathbb{R}^n$, we have \begin{align*} f^\star(-\hat{{\boldsymbol{\theta}}}) + {({\boldsymbol{X}}\hat{{\boldsymbol{\beta}}})}^\top(-\tilde{{\boldsymbol{\theta}}}+\hat{{\boldsymbol{\theta}}}) + \frac{L}{2}{\|\tilde{{\boldsymbol{\theta}}}-\hat{{\boldsymbol{\theta}}}\|}_2^2 & \le f^\star(-\tilde{{\boldsymbol{\theta}}}), \\ g^\star({\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}}) + \hat{{\boldsymbol{\beta}}}^\top({\boldsymbol{X}}^\top\tilde{{\boldsymbol{\theta}}}-{\boldsymbol{X}}^\top\hat{{\boldsymbol{\theta}}}) & \le g^\star({\boldsymbol{X}}^\top\tilde{{\boldsymbol{\theta}}}). \end{align*} Adding these two inequalities, we have the inequality \eqref{eq:opt_lower_bound}. \hfill$\Box$\vspace{2mm} This means that $\hat{{\boldsymbol{\theta}}}$ is within the region of $\{ {\boldsymbol{\theta}} \mid l({\boldsymbol{\theta}};\tilde{{\boldsymbol{\theta}}}) \le D({\boldsymbol{\theta}}) \}$. Because this region is too complicated for screening, we use a simple upper bound of $D({\boldsymbol{\theta}})$ to construct a simple safe region. The next theorem can be directly derived from Theorem \ref{theo:opt_lower_bound}. \begin{theo} \label{theo:general_safe_region} Consider the optimization problem in Eq.~\eqref{eq:dual_problem} and assume that $f^\star$ is $L$-strongly convex ($L\ge0$). Let $\hat{{\boldsymbol{\theta}}}$ be the solution to Eq.~\eqref{eq:dual_problem}. Assume $D({\boldsymbol{\theta}})$ is upper bounded by $u({\boldsymbol{\theta}})$, i.e., $\forall {\boldsymbol{\theta}}\in\mathbb{R}^n \ D({\boldsymbol{\theta}}) \le u({\boldsymbol{\theta}})$. 
Then, for $\forall\tilde{{\boldsymbol{\theta}}}\in\mathbb{R}^n$, we have \[ \hat{{\boldsymbol{\theta}}} \in \mathcal{R}(\tilde{{\boldsymbol{\theta}}}, u) = \{ {\boldsymbol{\theta}} \mid l({\boldsymbol{\theta}};\tilde{{\boldsymbol{\theta}}}) \le u({\boldsymbol{\theta}}) \}. \] \end{theo} The complexity of $\mathcal{R}(\tilde{{\boldsymbol{\theta}}}, u)$ depends on the complexity of $u$. For example, if $u$ is linear, then $\mathcal{R}(\tilde{{\boldsymbol{\theta}}}, u)$ is a sphere. We can construct a narrow, simple, and safe region with a tight yet simple upper bound $u$. In fact, the Gap Safe Sphere region \citep{fercoq2015mind,ndiaye2017gap} can be derived easily from this theorem and weak duality. \begin{coro} (Gap Safe Sphere) Consider the optimization problem in Eq.~\eqref{eq:dual_problem} and assume that $f^\star$ is $L$-strongly convex ($L\ge0$). Let $\hat{{\boldsymbol{\theta}}}$ be the solution to Eq.~\eqref{eq:dual_problem}. For $\forall\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\forall\tilde{{\boldsymbol{\theta}}}\in\mathbb{R}^n$, the region of the Gap Safe Sphere is given as \begin{equation} {\mathcal{R}}^{\mathrm{GS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}}) = \{ {\boldsymbol{\theta}} \mid l({\boldsymbol{\theta}};\tilde{{\boldsymbol{\theta}}}) \le P(\tilde{{\boldsymbol{\beta}}}) \}. \end{equation} Then, \[ \hat{{\boldsymbol{\theta}}} \in {\mathcal{R}}^{\mathrm{GS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}}). \] \end{coro} \noindent\textbf{(Proof)} By weak duality, we have $\forall {\boldsymbol{\theta}} \ D({\boldsymbol{\theta}}) \le P(\tilde{{\boldsymbol{\beta}}})$. Using this constant function as an upper bound in Theorem \ref{theo:general_safe_region}, the corollary is derived directly. \hfill$\Box$\vspace{2mm} Hence, our framework can be seen as a generalization of Gap Safe. Owing to this generalization, we can use a stronger problem-adaptive upper bound than the constant one given by weak duality.
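For the squared loss, $f^\star$ is $1$-strongly convex, so ${\mathcal{R}}^{\mathrm{GS}}$ is the sphere centered at $\tilde{{\boldsymbol{\theta}}}$ with radius $\sqrt{2(P(\tilde{{\boldsymbol{\beta}}})-D(\tilde{{\boldsymbol{\theta}}}))}$. The following sketch (illustrative; random data, $g=\lambda{\|\cdot\|}_1$ so that the screening threshold is $\lambda$) computes this sphere and the resulting screening test:

```python
# Gap Safe Sphere for Lasso with g(beta) = lam * ||beta||_1.  Since
# f*(-theta) = (1/2)||theta||^2 - y^T theta is 1-strongly convex (L = 1),
# the sphere is ||theta - theta_t||^2 <= 2 * (P(beta_t) - D(theta_t)).
# Feature j can be safely dropped when |x_j^T theta_t| + r ||x_j|| < lam.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 25, 8, 1.0
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

beta_t = np.zeros(d)                                     # any primal iterate
res = y - X @ beta_t
theta_t = res * min(1.0, lam / np.abs(X.T @ res).max())  # dual feasible

P = 0.5 * res @ res + lam * np.abs(beta_t).sum()
D = -0.5 * theta_t @ theta_t + y @ theta_t
r = np.sqrt(2.0 * (P - D))                               # sphere radius

# upper bound on |x_j^T theta| over the sphere, and the screening mask
screen_bound = np.abs(X.T @ theta_t) + r * np.linalg.norm(X, axis=0)
eliminated = screen_bound < lam
```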
In the next section, we derive specific regions for the \emph{Lasso-like} problem. Some regions for other problems are given in the Appendix. \section{Safe region for Lasso-like problem} In this section, we introduce a strong upper bound for the dual problems of Lasso and similar problems. The dome region derived from it can be seen as a generalization of Sasvi \citep{liu2014safe} and is narrower than Gap Safe Sphere and Gap Safe Dome. \subsection{Norm-regularized least squares problem and its generalization} Norm-regularized least squares is an optimization problem formulated as \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ \frac{1}{2}{\|{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\|}_2^2 + g({\boldsymbol{\beta}}) \] where $g$ is a norm. Clearly, this is a subset of the problems in Eq.~\eqref{eq:primal_problem}. Although this formulation includes Lasso \citep{tibshirani1996regression}, (overlapping) group-Lasso \citep{yuan2006group,jacob2009overlap}, and ordered weighted L1 regression \citep{figueiredo2016ordered}, the non-negative Lasso is not included. To unify them, we define the \emph{Lasso-like} problem as follows: \begin{equation} \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ \frac{1}{2}{\|{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\|}_2^2 + g({\boldsymbol{\beta}}), \label{eq:primal_lassolike} \end{equation} where the problem satisfies all assumptions for Eq.~\eqref{eq:primal_problem} and $g$ satisfies \begin{equation} \forall k \ge 0, {\boldsymbol{\beta}} \in \mathbb{R}^d \ g(k{\boldsymbol{\beta}})=kg({\boldsymbol{\beta}}).
\label{eq:g_normlike} \end{equation} For the Lasso-like problem, the Fenchel conjugate function of $f$ and $g$ are given as \begin{align} f^\star(-{\boldsymbol{\theta}}) = & \frac{1}{2}{\|{\boldsymbol{\theta}}\|}_2^2 - {\boldsymbol{y}}^\top{\boldsymbol{\theta}}, \label{eq:dual_f_sqloss} \\ g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}) = & \begin{cases} 0 & (\forall{\boldsymbol{\beta}}\ \ {\boldsymbol{\theta}}^\top{\boldsymbol{X}}{\boldsymbol{\beta}} - g({\boldsymbol{\beta}}) \le 0)\\ \infty & (\exists{\boldsymbol{\beta}}\ \ {\boldsymbol{\theta}}^\top{\boldsymbol{X}}{\boldsymbol{\beta}} - g({\boldsymbol{\beta}}) > 0). \label{eq:dual_g_normlike} \end{cases} \end{align} Note that $\{{\boldsymbol{\theta}} \mid g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})=0\}$ is a closed convex set. Hence, the Lasso-like problem is a class of problems whose Fenchel-Rockafellar dual can be seen as a convex projection. \subsection{Proposed Dome Region for Lasso-like problem} Thanks to Theorem \ref{theo:opt_lower_bound}, we can construct a safe region by proposing an upper bound $u({\boldsymbol{\theta}})$. In this section, we propose a tight upper bound for Lasso-like problems. The direct expression of $f^\star$ in Eq.~\eqref{eq:dual_f_sqloss} is sufficiently simple. We only need an upper bound of $-g^\star$ to construct a simple region. 
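For $g=\lambda{\|\cdot\|}_1$, the implicit constraint in Eq.~\eqref{eq:dual_g_normlike} reduces to the box constraint ${\|{\boldsymbol{X}}^\top{\boldsymbol{\theta}}\|}_\infty\le\lambda$, which is cheap to check numerically (an illustrative sketch with arbitrary data):

```python
# For g(beta) = lam * ||beta||_1, g*(X^T theta) = 0 exactly when
# theta^T X beta <= lam * ||beta||_1 for every beta, which is equivalent
# to |x_j^T theta| <= lam for every feature j.
import numpy as np

rng = np.random.default_rng(4)
n, d, lam = 15, 6, 0.8
X = rng.standard_normal((n, d))

def g_star_is_zero(theta):
    return np.abs(X.T @ theta).max() <= lam + 1e-12

theta = rng.standard_normal(n)
theta_on_boundary = theta * (lam / np.abs(X.T @ theta).max())  # feasible
theta_infeasible = 2.0 * theta_on_boundary                     # violates box
```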
The upper bound is given as follows: \begin{lemm} \label{lemm:upper_gdual_lassolike} For Lasso-like problems \eqref{eq:primal_lassolike}, for $\forall\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\forall{\boldsymbol{\theta}}\in\mathbb{R}^n$, we have \begin{align} - g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}) & \le \inf_{k\ge0}g(k\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}(k\tilde{{\boldsymbol{\beta}}}) \nonumber \\ & = \begin{cases} 0 & (g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top {\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} \ge 0) \\ -\infty & (g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top {\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} < 0). \\ \end{cases} \label{eq:gdual_linear_const} \end{align} \end{lemm} \noindent\textbf{(Proof)} By the Fenchel-Young inequality \eqref{eq:fenchel_young}, we have \[ - g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}}) \le \inf_{k\ge0} g(k\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}(k\tilde{{\boldsymbol{\beta}}}). \] Under the condition of Eq.~\eqref{eq:g_normlike}, we have $g(k\tilde{{\boldsymbol{\beta}}}) = k g(\tilde{{\boldsymbol{\beta}}})$. Therefore, the optimal value of the upper bound is zero if $g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} \ge 0$ and $-\infty$ otherwise. \hfill$\Box$\vspace{2mm} The next theorem can be directly derived from Lemma \ref{lemm:upper_gdual_lassolike}. \begin{theo} Consider Lasso-like problems in Eq.~\eqref{eq:primal_lassolike}. Let \begin{equation} u^{\mathrm{DS}}({\boldsymbol{\theta}};\tilde{{\boldsymbol{\beta}}}) := \begin{cases} -f^\star(-{\boldsymbol{\theta}}) & (g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} \ge 0) \\ - \infty & (g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} < 0) \end{cases}.
\label{eq:upper_dsasvi} \end{equation} Then, for $\forall\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\forall{\boldsymbol{\theta}}\in\mathbb{R}^n$, \[ D({\boldsymbol{\theta}}) \le u^{\mathrm{DS}}({\boldsymbol{\theta}};\tilde{{\boldsymbol{\beta}}}). \] \end{theo} Then, Theorem \ref{theo:general_safe_region} provides a simple and safe region. \begin{theo} \label{theo:region_dsasvi} Consider the Lasso-like problem in Eq.~\eqref{eq:primal_lassolike} and its Fenchel-Rockafellar dual problem in Eq.~\eqref{eq:dual_problem}. Let $\hat{{\boldsymbol{\theta}}}$ be the dual optimal point. We assume that $\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\tilde{{\boldsymbol{\theta}}}\in\mathrm{dom}(D)$. Then, $\hat{{\boldsymbol{\theta}}}$ is within the Dynamic Sasvi region, which is given as an intersection of a sphere and a half space: \begin{align*} & {\mathcal{R}}^{\mathrm{DS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}}) \\ := & \{ {\boldsymbol{\theta}} \mid l({\boldsymbol{\theta}};\tilde{{\boldsymbol{\theta}}}) \le u^{\mathrm{DS}}({\boldsymbol{\theta}};\tilde{{\boldsymbol{\beta}}}) \} \\ = & \{ {\boldsymbol{\theta}} \mid {\left\|{\boldsymbol{\theta}} - \frac{1}{2}(\tilde{{\boldsymbol{\theta}}}+{\boldsymbol{y}})\right\|}_2^2 \le \frac{1}{4}{\|\tilde{{\boldsymbol{\theta}}}-{\boldsymbol{y}}\|}_2^2 \\ & \land\ 0 \le g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} \}. \end{align*} \end{theo} The proof of Theorem \ref{theo:region_dsasvi} is given in the Appendix. Because of continuity, ${\mathcal{R}}^{\mathrm{DS}}(\tilde{{\boldsymbol{\beta}}}^{(t)},\tilde{{\boldsymbol{\theta}}}^{(t)})$ converges to ${\mathcal{R}}^{\mathrm{DS}}(\hat{{\boldsymbol{\beta}}},\hat{{\boldsymbol{\theta}}})=\{\hat{{\boldsymbol{\theta}}}\}$ if $\lim_{t\to\infty}\tilde{{\boldsymbol{\beta}}}^{(t)}=\hat{{\boldsymbol{\beta}}}$ and $\lim_{t\to\infty}\tilde{{\boldsymbol{\theta}}}^{(t)}=\hat{{\boldsymbol{\theta}}}$ hold.
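Theorem \ref{theo:region_dsasvi} can be verified empirically on a toy Lasso instance (the same $2\times2$ data as in our figures; the one-pass coordinate-descent iterate, the dual rescaling of the residual, and the tolerances are our own choices for this sketch):

```python
# Check that the (near-)optimal dual point lies in R^DS(beta_t, theta_t)
# for the toy Lasso min (1/2)||y - X b||^2 + ||b||_1.
import numpy as np

X = np.array([[2.0, 0.0], [-1.0, 3.0]])
y = np.array([1.5, 1.0])

def cd_passes(beta, n_passes):
    # cyclic coordinate descent with soft-thresholding (threshold 1)
    for _ in range(n_passes):
        for j in range(X.shape[1]):
            sq = X[:, j] @ X[:, j]
            u = beta[j] * sq - X[:, j] @ (X @ beta - y)
            beta[j] = np.sign(u) * max(0.0, abs(u) - 1.0) / sq
    return beta

def phi(beta):
    # dual-feasible rescaling of the residual (used again in Algorithm 1)
    res = y - X @ beta
    return res / max(1.0, np.abs(X.T @ res).max())

beta_t = cd_passes(np.zeros(2), 1)        # one pass: a rough primal iterate
theta_t = phi(beta_t)                     # theta_t lies in dom(D)
beta_hat = cd_passes(beta_t.copy(), 500)  # numerically (near-)exact solution
theta_hat = y - X @ beta_hat              # optimal dual point: the residual

center = 0.5 * (theta_t + y)
radius_sq = 0.25 * (theta_t - y) @ (theta_t - y)
in_sphere = (theta_hat - center) @ (theta_hat - center) <= radius_sq + 1e-8
in_halfspace = np.abs(beta_t).sum() - theta_hat @ (X @ beta_t) >= -1e-8
```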
\subsection{Relation to Sasvi} In this section, we show that safe screening with variational inequalities (Sasvi) \citep{liu2014safe} is a special case of our screening rule. First, we review Sasvi. The target task of Sasvi is to minimize $\frac{1}{2}{\left\|{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\right\|}_2^2 + \lambda{\|{\boldsymbol{\beta}}\|}_1$ for many values of $\lambda$. Dividing by $\lambda^2$ and changing the optimization variable (replacing ${\boldsymbol{\beta}}/\lambda$ with ${\boldsymbol{\beta}}$), we obtain the following: \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ \frac{1}{2}{\left\|\frac{1}{\lambda}{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\right\|}_2^2 + {\|{\boldsymbol{\beta}}\|}_1. \] Let ${\hat{\boldsymbol{\beta}}}^{(\lambda)}$ and ${\hat{\boldsymbol{\theta}}}^{(\lambda)}$ be the optimal points of the primal problem and the Fenchel-Rockafellar dual problem, respectively. Sasvi uses ${\hat{\boldsymbol{\theta}}}^{(\lambda_0)}$ to construct a safe region for ${\hat{\boldsymbol{\theta}}}^{(\lambda)}$. Although Sasvi was originally proposed for Lasso, it can be easily generalized for the Lasso-like problem as follows. \begin{theo} \label{theo:sasvi_lassolike} Let ${\hat{\boldsymbol{\theta}}}^{(\lambda)}$ be the optimal point of the Fenchel-Rockafellar dual problem of the Lasso-like problem (that is, $g$ satisfies Eq.~\eqref{eq:g_normlike}) \[ \underset{{\boldsymbol{\theta}} : g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})=0}{\mathrm{maximize}}~~ - \frac{1}{2}{\|{\boldsymbol{\theta}}-\frac{1}{\lambda}{\boldsymbol{y}}\|}_2^2 + \frac{1}{2}{\|\frac{1}{\lambda}{\boldsymbol{y}}\|}_2^2. \] Assume we have an exact ${\hat{\boldsymbol{\theta}}}^{(\lambda_0)}$.
We then have \begin{align*} {\hat{\boldsymbol{\theta}}}^{(\lambda)} \in & {\mathcal{R}}^{\mathrm{Sasvi}}(\lambda,\lambda_0) \\ := & \{ {\boldsymbol{\theta}} \mid 0 \ge {\left(\frac{1}{\lambda}{\boldsymbol{y}}-{\boldsymbol{\theta}}\right)}^\top\left({\hat{\boldsymbol{\theta}}}^{(\lambda_0)}-{\boldsymbol{\theta}}\right) \\ & \land 0 \ge {\left(\frac{1}{\lambda_0}{\boldsymbol{y}}-{\hat{\boldsymbol{\theta}}}^{(\lambda_0)}\right)}^\top\left({\boldsymbol{\theta}}-{\hat{\boldsymbol{\theta}}}^{(\lambda_0)}\right) \} \end{align*} \end{theo} \noindent\textbf{(Proof)} Because the duality of the Lasso-like problem can be interpreted as a projection from $\frac{1}{\lambda}{\boldsymbol{y}}$ to a closed convex set $\{{\boldsymbol{\theta}} \mid g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})=0\}$, two variational inequalities hold. See \citep{liu2014safe} for more details. \hfill$\Box$\vspace{2mm} We can then prove that ${\mathcal{R}}^{\mathrm{Sasvi}}(\lambda_0):={\mathcal{R}}^{\mathrm{Sasvi}}(1,\lambda_0)$ equals ${\mathcal{R}}^{\mathrm{DS}}({\hat{\boldsymbol{\beta}}}^{(\lambda_0)},{\hat{\boldsymbol{\theta}}}^{(\lambda_0)})$. Note that we can set $\lambda=1$ without a loss of generality because multiplying the same scalar to ${\boldsymbol{y}}$, $\lambda$, and $\lambda_0$ does not change the problem or the region. \begin{theo} \label{theo:equality_sasvi} Consider the Lasso-like problem \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\mathrm{minimize}}~~ \frac{1}{2}{\left\|\frac{1}{\lambda}{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\right\|}_2^2 + g({\boldsymbol{\beta}}). \] Let ${\hat{\boldsymbol{\beta}}}^{(\lambda)}$ and ${\hat{\boldsymbol{\theta}}}^{(\lambda)}$ be the primal/dual optimal points, respectively. We then have \[ {\mathcal{R}}^{\mathrm{Sasvi}}(\lambda_0) = {\mathcal{R}}^{\mathrm{DS}}({\hat{\boldsymbol{\beta}}}^{(\lambda_0)},{\hat{\boldsymbol{\theta}}}^{(\lambda_0)}). 
\] where ${\mathcal{R}}^{\mathrm{Sasvi}}(\lambda_0)$ and ${\mathcal{R}}^{\mathrm{DS}}({\hat{\boldsymbol{\beta}}}^{(\lambda_0)},{\hat{\boldsymbol{\theta}}}^{(\lambda_0)})$ are safe regions for ${\hat{\boldsymbol{\theta}}}^{(1)}$. \end{theo} The proof of Theorem \ref{theo:equality_sasvi} is given in the Appendix. For this reason, we have labeled it ``Dynamic Sasvi.'' This generalization increases the speed of the solver significantly because the region of our method may be extremely narrow in the late stage of optimization. As pointed out in \citep{fercoq2015mind}, some sequential safe screening rules, including Sasvi, are not safe in practice because we do not have the exact solution for $\lambda_0$. Dynamic Sasvi overcomes this problem because its region is safe even when $(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}})$ is not an exact solution. \subsection{Comparison to Gap Safe Dome and Gap Safe Sphere} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{dsde.png} \caption{Regions of dynamic Sasvi (dark green) and dynamic EDPP (light green).} \label{fig:regions_dsde} \end{subfigure} \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{dsgsgd.png} \caption{Regions of dynamic Sasvi (green), Gap Safe Sphere (light red) and Gap Safe Dome (dark red).} \label{fig:regions_dsgsgd} \end{subfigure} \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{degsgd.png} \caption{Regions of dynamic EDPP (green), Gap Safe Sphere (light red), Gap Safe Dome (dark red).} \label{fig:regions_degsgd} \end{subfigure} \caption{Comparisons of various safe regions for Lasso (${\boldsymbol{X}}=\begin{pmatrix} 2 & 0 \\ -1 & 3 \\ \end{pmatrix}$, ${\boldsymbol{y}}=\begin{pmatrix} 1.5 \\ 1 \\ \end{pmatrix}$). The blue region is the feasible region. $\tilde{{\boldsymbol{\beta}}}$ was obtained by a cycle of coordinate descent.
$\tilde{{\boldsymbol{\theta}}}=\phi(\tilde{{\boldsymbol{\beta}}})$.} \vspace{-.2in} \end{figure*} Here, we show that the proposed method is stronger than Gap Safe Dome \citep{fercoq2015mind} and Gap Safe Sphere \citep{fercoq2015mind,ndiaye2017gap} for Lasso-like problems. As shown in \citep{fercoq2015mind}, for Lasso, the regions of the Gap Safe Dome and Gap Safe Sphere are relaxations of the intersection of a sphere and the complement of another sphere. We call this unrelaxed region Gap Safe Moon. Although Gap Safe Moon is defined only for Lasso in \citep{fercoq2015mind}, it can be naturally generalized for Lasso-like problems. Gap Safe Moon can be derived from Theorem \ref{theo:general_safe_region}. \begin{theo} (Gap Safe Moon) \label{theo:upperbound_gm} Consider the Lasso-like problem in Eq.~\eqref{eq:primal_lassolike} and its Fenchel-Rockafellar dual problem in Eq.~\eqref{eq:dual_problem}. Let $\hat{{\boldsymbol{\theta}}}$ be the dual optimal point. For $\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$, the Gap Safe Moon upper bound is given as \begin{equation} u^{\mathrm{GM}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) = \begin{cases} -f^\star(-{\boldsymbol{\theta}}) & (-f^\star(-{\boldsymbol{\theta}}) \le P(\tilde{{\boldsymbol{\beta}}})) \\ -\infty & (-f^\star(-{\boldsymbol{\theta}}) > P(\tilde{{\boldsymbol{\beta}}})) \end{cases}.
\label{eq:upper_gm} \end{equation} Then, for $\forall\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$, $\forall\tilde{{\boldsymbol{\theta}}}\in\mathrm{dom}(D)$, and $\forall{\boldsymbol{\theta}}\in\mathbb{R}^n$, we have \[ D({\boldsymbol{\theta}}) \le u^{\mathrm{GM}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) \] and hence \begin{align*} \hat{{\boldsymbol{\theta}}} \in & \{ {\boldsymbol{\theta}} \mid l({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\theta}}}) \le u^{\mathrm{GM}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) \} \\ = & \{ {\boldsymbol{\theta}} \mid -f^\star(-{\boldsymbol{\theta}}) \le P(\tilde{{\boldsymbol{\beta}}}) \land {(\tilde{{\boldsymbol{\theta}}}-{\boldsymbol{\theta}})}^\top({\boldsymbol{y}}-{\boldsymbol{\theta}}) \le 0\}. \end{align*} \end{theo} The proof of Theorem \ref{theo:upperbound_gm} is given in the Appendix. We can then derive the next theorem. \begin{theo} (Gap Safe Moon and Dynamic Sasvi) For $\forall\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\forall{\boldsymbol{\theta}}\in\mathbb{R}^n$, we have \[ u^{\mathrm{DS}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) \le u^{\mathrm{GM}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}). \] \end{theo} \noindent\textbf{(Proof)} If $g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$ is negative, $u^{\mathrm{DS}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) = -\infty$, and thus the inequality holds. If $0 \le g(\tilde{{\boldsymbol{\beta}}}) - {\boldsymbol{\theta}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, by adding the Fenchel-Young inequality \eqref{eq:fenchel_young}, we have $-f^\star(-{\boldsymbol{\theta}}) \le P(\tilde{{\boldsymbol{\beta}}})$ and $u^{\mathrm{DS}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) = u^{\mathrm{GM}}({\boldsymbol{\theta}}; \tilde{{\boldsymbol{\beta}}}) = -f^\star(-{\boldsymbol{\theta}})$. 
\hfill$\Box$\vspace{2mm} This theorem means that the region of dynamic Sasvi is a subset of the region of Gap Safe Moon. Because Gap Safe Dome and Gap Safe Sphere are based on relaxations of the Gap Safe Moon region, our screening rule is always stronger than both. Figure \ref{fig:regions_dsgsgd} shows the regions of Dynamic Sasvi, Gap Safe Dome and Gap Safe Sphere. \subsection{Sphere Relaxation (Dynamic EDPP)} In some situations, even a dome region is too complicated to calculate $\bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})$. We propose using a minimum radius sphere that includes the dynamic Sasvi region in such cases. This method can be seen as a dynamic variant of the enhanced dual polytope projections (EDPP) \citep{wang2015lasso} because EDPP is the minimum radius sphere relaxation of Sasvi. \begin{theo} \label{theo:region_dynamicedpp} Consider the Lasso-like problem in Eq.~\eqref{eq:primal_lassolike} and its Fenchel-Rockafellar dual problem in Eq.~\eqref{eq:dual_problem}. We assume that $\tilde{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\tilde{{\boldsymbol{\theta}}}\in\mathrm{dom}(D)$.
If $n\ge2$, the minimum radius sphere including ${\mathcal{R}}^{\mathrm{DS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}})$ is \begin{equation} {\mathcal{R}}^{\mathrm{DE}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}}) = \{ {\boldsymbol{\theta}} \mid {\|{\boldsymbol{\theta}} - {\boldsymbol{\theta}}_c\|}_2^2 \le r^2 \}, \label{eq:region_SLL} \end{equation} where \begin{align*} {\boldsymbol{\theta}}_c & = \frac{1}{2}(\tilde{{\boldsymbol{\theta}}}+{\boldsymbol{y}})-\alpha{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} \\ r^2 & = \frac{1}{4}{\|\tilde{{\boldsymbol{\theta}}}-{\boldsymbol{y}}\|}_2^2 - \alpha^2{\|{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}\|}_2^2 \\ \alpha & = \max\left(0, \frac{1}{{\|{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}\|}_2^2}\left(\frac{1}{2}{(\tilde{{\boldsymbol{\theta}}}+{\boldsymbol{y}})}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}} - g(\tilde{{\boldsymbol{\beta}}})\right)\right). \end{align*} \end{theo} The proof of Theorem \ref{theo:region_dynamicedpp} is given in the Appendix. Figures \ref{fig:regions_dsde} and \ref{fig:regions_degsgd} show the dynamic EDPP region and other regions. Note that the dynamic EDPP region is not guaranteed to be within the Gap Safe Sphere region. However, its radius is always smaller than that of Gap Safe Sphere. 
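The quantities in Theorem \ref{theo:region_dynamicedpp} are inexpensive to compute. The following sketch (illustrative data and an arbitrary primal iterate, with $g={\|\cdot\|}_1$) builds the Dynamic EDPP sphere and checks, by sampling, that it contains the dome ${\mathcal{R}}^{\mathrm{DS}}$:

```python
# Construct the Dynamic EDPP sphere (center theta_c, squared radius r_sq)
# from an iterate (beta_t, theta_t) on a 2-d toy Lasso, then sample points
# of the dome (sphere AND half space) and confirm they lie in the sphere.
import numpy as np

X = np.array([[2.0, 0.0], [-1.0, 3.0]])
y = np.array([1.5, 1.0])
beta_t = np.array([0.7, 0.2])                        # arbitrary iterate

res = y - X @ beta_t
theta_t = res / max(1.0, np.abs(X.T @ res).max())    # theta_t in dom(D)

Xb = X @ beta_t
q = Xb @ Xb
g_val = np.abs(beta_t).sum()                         # g = ||.||_1
alpha = max(0.0, (0.5 * (theta_t + y) @ Xb - g_val) / q)
theta_c = 0.5 * (theta_t + y) - alpha * Xb           # EDPP sphere center
r_sq = 0.25 * (theta_t - y) @ (theta_t - y) - alpha ** 2 * q

# sampled containment check: dome points must lie in the EDPP sphere
rng = np.random.default_rng(5)
c0 = 0.5 * (theta_t + y)
r0 = np.sqrt(0.25 * (theta_t - y) @ (theta_t - y))
dome_in_edpp = True
for _ in range(500):
    u = rng.standard_normal(2)
    th = c0 + (r0 * rng.random()) * u / np.linalg.norm(u)
    if g_val - th @ Xb >= 0.0:                       # th is in the dome
        if (th - theta_c) @ (th - theta_c) > r_sq + 1e-9:
            dome_in_edpp = False
```

The test below also confirms the remark above: the squared radius never exceeds that of the sphere $\frac{1}{4}{\|\tilde{{\boldsymbol{\theta}}}-{\boldsymbol{y}}\|}_2^2$ used by the dome.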
\section{Implementation for Lasso} \begin{algorithm}[tb] \caption{Coordinate descent with Dynamic Sasvi for Lasso} \label{alg:implementation} \begin{algorithmic}[1] \STATE {\bfseries Input: ${\boldsymbol{X}}, {\boldsymbol{y}}, {\boldsymbol{\beta}}^0, T, c, \epsilon$} \STATE Initialize $\tilde{{\boldsymbol{\beta}}} \leftarrow {\boldsymbol{\beta}}^0$, ${\mathcal{A}} \leftarrow [\![d]\!]$ \FOR{$t \in [\![T]\!]$} \IF{$t \mod c = 1$} \STATE Compute $\tilde{{\boldsymbol{\theta}}}=\phi_{\mathcal{A}}(\tilde{{\boldsymbol{\beta}}})$ \IF{$P(\tilde{{\boldsymbol{\beta}}})-D(\tilde{{\boldsymbol{\theta}}}) \le \epsilon$} \STATE {\bf break} \ENDIF \STATE ${\mathcal{R}} \leftarrow {\mathcal{R}}^{\mathrm{DS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}})$ \STATE ${\mathcal{A}} \leftarrow \{j \in {\mathcal{A}} : \max_{{\boldsymbol{\theta}}\in{\mathcal{R}}}|{\boldsymbol{x}}_j^\top{\boldsymbol{\theta}}| \ge 1\}$ \FOR{$j \in [\![d]\!]-{\mathcal{A}}$} \STATE $\tilde{{\boldsymbol{\beta}}}_j \leftarrow 0$ \ENDFOR \ENDIF \FOR{$j \in {\mathcal{A}}$} \STATE $u \leftarrow \tilde{{\boldsymbol{\beta}}}_j{\|{\boldsymbol{x}}_j\|}_2^2-{\boldsymbol{x}}_j^\top({\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}-{\boldsymbol{y}})$ \STATE $\tilde{{\boldsymbol{\beta}}}_j \leftarrow \frac{1}{{\|{\boldsymbol{x}}_j\|}_2^2}\text{sign}(u)\max(0,|u|-1)$ \ENDFOR \ENDFOR \STATE {\bfseries Output: $\tilde{{\boldsymbol{\beta}}}$} \end{algorithmic} \end{algorithm} In this section, we provide a specific solver based on Theorem \ref{theo:region_dsasvi}. Because the algorithm used to calculate $\bigcup_{{\boldsymbol{\theta}}\in{\mathcal{R}}}\partial g^\star({\boldsymbol{X}}^\top{\boldsymbol{\theta}})$ depends on $g$, we introduce a Lasso solver as an example. We must choose an iterative solver to combine with screening methods because screening alone cannot compute the solution. Although our methods can work with any iterative method, we use coordinate descent, which is recommended in \citep{friedman2007}.
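A compact Python rendering of Algorithm \ref{alg:implementation} is given below as a sketch. Because the closed-form dome maximum $\max_{{\boldsymbol{\theta}}\in{\mathcal{R}}^{\mathrm{DS}}}|{\boldsymbol{x}}_j^\top{\boldsymbol{\theta}}|$ is deferred to the Appendix, this sketch instead screens with the Dynamic EDPP sphere of Theorem \ref{theo:region_dynamicedpp}, a relaxation of ${\mathcal{R}}^{\mathrm{DS}}$ and hence still safe; the data and all parameter defaults are our own choices:

```python
# Coordinate descent for (1/2)||y - X b||^2 + ||b||_1 with dynamic safe
# screening via the EDPP-sphere relaxation of the dome R^DS.  Every c-th
# pass: compute the dual point phi_A(beta), stop if the gap is small,
# otherwise build the sphere and drop features whose maximal correlation
# over the sphere, |x_j^T center| + r ||x_j||, cannot reach 1.
import numpy as np

def lasso_cd_screening(X, y, beta0, T=300, c=10, eps=1e-10):
    n, d = X.shape
    beta = beta0.copy()
    active = np.arange(d)
    col_sq = np.einsum('ij,ij->j', X, X)              # ||x_j||^2
    for t in range(1, T + 1):
        if t % c == 1:
            res = y - X @ beta
            scale = 1.0
            if active.size:
                scale = max(1.0, np.abs(X[:, active].T @ res).max())
            theta = res / scale                        # phi_A(beta)
            gap = (0.5 * res @ res + np.abs(beta).sum()
                   - (-0.5 * theta @ theta + y @ theta))
            if gap <= eps:
                break
            Xb = X @ beta
            q = Xb @ Xb
            alpha = 0.0
            if q > 0.0:
                alpha = max(0.0, (0.5 * (theta + y) @ Xb
                                  - np.abs(beta).sum()) / q)
            center = 0.5 * (theta + y) - alpha * Xb
            r = np.sqrt(max(0.0, 0.25 * (theta - y) @ (theta - y)
                            - alpha ** 2 * q))
            keep = (np.abs(X[:, active].T @ center)
                    + r * np.sqrt(col_sq[active]) >= 1.0)
            beta[active[~keep]] = 0.0
            active = active[keep]
        for j in active:                               # one CD pass
            u = beta[j] * col_sq[j] - X[:, j] @ (X @ beta - y)
            beta[j] = np.sign(u) * max(0.0, abs(u) - 1.0) / col_sq[j]
    return beta

rng = np.random.default_rng(6)
Xd = rng.standard_normal((40, 15))
yd = rng.standard_normal(40)
beta_scr = lasso_cd_screening(Xd, yd, np.zeros(15))
```

Because the screening is safe, the result coincides with that of the same solver run without intermediate screening.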
\subsection{Choice of $\tilde{{\boldsymbol{\theta}}}$} As shown in the previous section, $\lim_{t\to\infty}{\mathcal{R}}^{\mathrm{DS}}({\boldsymbol{\beta}}^t,{\boldsymbol{\theta}}^t)$ converges to $\{\hat{{\boldsymbol{\theta}}}\}$ when $\lim_{t\to\infty}{\boldsymbol{\beta}}^t=\hat{{\boldsymbol{\beta}}}$ and $\lim_{t\to\infty}{\boldsymbol{\theta}}^t=\hat{{\boldsymbol{\theta}}}$ holds. Because the iterative solver provides such a sequence of primal points and screening does not harm its convergence, we only need a converging sequence of dual points to obtain a converging safe region. The next theorem provides such a sequence. \begin{theo} (Converging ${\boldsymbol{\theta}}^t$) Consider the optimization problem Eq. \eqref{eq:primal_lassolike} with $g({\boldsymbol{\beta}})={\|{\boldsymbol{\beta}}\|}_1$. Let $\hat{{\boldsymbol{\beta}}}\in\mathbb{R}^d$ and $\hat{{\boldsymbol{\theta}}}\in\mathbb{R}^n$ be the primal/dual solution. Assume $\lim_{t\to\infty}{\boldsymbol{\beta}}^t=\hat{{\boldsymbol{\beta}}}$. Let us define $\phi:\mathbb{R}^d\to\mathbb{R}^n$ as \[ \phi({\boldsymbol{\beta}}) := \frac{1}{\max(1,{\|{\boldsymbol{X}}^\top({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}})\|}_\infty)}({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}). \] Then, $\forall {\boldsymbol{\beta}} \ \phi({\boldsymbol{\beta}})\in\mathrm{dom}(D)$ and $\lim_{t\to\infty}\phi({\boldsymbol{\beta}}^t)=\hat{{\boldsymbol{\theta}}}$ hold. \end{theo} \noindent\textbf{(Proof)} $\phi({\boldsymbol{\beta}})\in\mathrm{dom}(D)$ is directly derived from ${\|{\boldsymbol{X}}^\top\phi({\boldsymbol{\beta}})\|}_\infty = \min({\|{\boldsymbol{X}}^\top({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}})\|}_\infty,1) \le 1$. Because $\phi$ is continuous and $\phi(\hat{{\boldsymbol{\beta}}})=\hat{{\boldsymbol{\theta}}}$, $\lim_{t\to\infty}\phi({\boldsymbol{\beta}}^t)=\hat{{\boldsymbol{\theta}}}$ also holds. 
\hfill$\Box$\vspace{2mm} Actually, if ${\mathcal{A}}$ is the set of features that are not yet eliminated, we can use \[ \phi_{\mathcal{A}}({\boldsymbol{\beta}}) := \frac{1}{\max(1,\max_{j\in{\mathcal{A}}}|{\boldsymbol{x}}_j^\top({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}})|)}({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}) \] instead of $\phi({\boldsymbol{\beta}})$. Although $\phi_{\mathcal{A}}({\boldsymbol{\beta}})\in\mathrm{dom}(D)$ is not guaranteed, $\phi_{\mathcal{A}}({\boldsymbol{\beta}})$ is guaranteed to satisfy all constraints that are active in the dual solution. In other words, $\phi_{\mathcal{A}}({\boldsymbol{\beta}})$ lies in the domain of the dual of the reduced primal problem obtained by removing the eliminated features. We can now optimize the problem with the proposed screening. The pseudocode is described in Algorithm \ref{alg:implementation}. A direct expression of $\max_{{\boldsymbol{\theta}}\in{\mathcal{R}}^{\mathrm{DS}}(\tilde{{\boldsymbol{\beta}}},\tilde{{\boldsymbol{\theta}}})}|{\boldsymbol{x}}_j^\top{\boldsymbol{\theta}}|$ is given in the Appendix. \subsection{Computational Cost of Dynamic Sasvi Screening} In Dynamic Sasvi screening, the computational cost is dominated by the calculation of $\phi_{\mathcal{A}}(\tilde{{\boldsymbol{\beta}}})$ and $\max_{{\boldsymbol{\theta}}\in{\mathcal{R}}}|{\boldsymbol{x}}_j^\top{\boldsymbol{\theta}}|$. If we have ${\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, ${\boldsymbol{X}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, and ${\boldsymbol{X}}^\top{\boldsymbol{y}}$, we can obtain $\phi_{\mathcal{A}}({\boldsymbol{\beta}})$ with $O(n+d)$ calculations.
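A sketch of this cost claim: writing ${\boldsymbol{x}}_j^\top({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}) = ({\boldsymbol{X}}^\top{\boldsymbol{y}})_j - ({\boldsymbol{X}}^\top{\boldsymbol{X}}{\boldsymbol{\beta}})_j$, the dual point $({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}})/\max(1,\max_{j\in{\mathcal{A}}}|{\boldsymbol{x}}_j^\top({\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}})|)$ costs only $O(n+d)$ once the matrix-vector products are cached. The numpy code below is our own illustration, with our own names.

```python
import numpy as np

def phi_A(y, Xb, XtXb, Xty, active):
    """Dual point phi_A(beta) in O(n + d), given the cached products
    Xb = X @ beta, XtXb = X.T @ (X @ beta), Xty = X.T @ y.

    Uses x_j^T (y - X beta) = Xty[j] - XtXb[j]."""
    corr = Xty[active] - XtXb[active]       # O(d)
    scale = max(1.0, np.max(np.abs(corr)))  # O(d)
    return (y - Xb) / scale                 # O(n)
```

With the full index set as the active set, the result coincides with $\phi({\boldsymbol{\beta}})$ computed directly from ${\boldsymbol{X}}$.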
If we have ${\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, ${\boldsymbol{X}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, $\tilde{{\boldsymbol{\theta}}}$, ${\boldsymbol{X}}^\top\tilde{{\boldsymbol{\theta}}}$, and ${\boldsymbol{X}}^\top{\boldsymbol{y}}$, we can obtain $\max_{{\boldsymbol{\theta}}\in{\mathcal{R}}}|{\boldsymbol{x}}_j^\top{\boldsymbol{\theta}}|$ for all $j$ with $O(n+d)$ calculations. Because ${\boldsymbol{X}}^\top{\boldsymbol{y}}$ is constant and $\tilde{{\boldsymbol{\theta}}}=\phi_{\mathcal{A}}(\tilde{{\boldsymbol{\beta}}})$ is a linear combination of ${\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$ and ${\boldsymbol{y}}$, only the calculations of ${\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$ and ${\boldsymbol{X}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$ cost $O(nd)$. Hence, the screening cost is almost the same for all methods that require ${\boldsymbol{X}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}$, including Gap Safe. \subsection{Computation of Lasso Path} In practice, we formulate the Lasso problem as follows: \[ \underset{{\boldsymbol{\beta}}\in\mathbb{R}^d}{\rm{minimize}}~~ \frac{1}{2}{\left\|\frac{1}{\lambda}{\boldsymbol{y}}-{\boldsymbol{X}}{\boldsymbol{\beta}}\right\|}_2^2 + {\|{\boldsymbol{\beta}}\|}_1 \] and solve it for many values of $\lambda$ to choose the best solution. Consider the situation in which we have to estimate the solutions $\hat{{\boldsymbol{\beta}}}^{(\lambda_1)}, \hat{{\boldsymbol{\beta}}}^{(\lambda_2)}, \cdots, \hat{{\boldsymbol{\beta}}}^{(\lambda_M)}$ corresponding to $\lambda_1 > \lambda_2 > \cdots > \lambda_M$. Many studies (e.g., \citep{fercoq2015mind}) recommend using the estimated solution for $\lambda_{m-1}$ as the initial vector in the estimation of $\hat{{\boldsymbol{\beta}}}^{(\lambda_m)}$ because $\hat{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}$ and $\hat{{\boldsymbol{\beta}}}^{(\lambda_m)}$ may be close.
In our implementation, we set the initial vector as $k\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}$, where $\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}$ is the estimate of $\hat{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}$ and \begin{align*} k := & \mathop{\mathrm{argmin\,}}_{k\ge0}\frac{1}{2}{\left\|\frac{1}{\lambda_m}{\boldsymbol{y}}-k{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}\right\|}_2^2 + k{\|\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}\|}_1 \\ = & \max\left(0,\ \frac{1}{{\|{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}\|}_2^2}\left(\frac{1}{\lambda_m}{\boldsymbol{y}}^\top{\boldsymbol{X}}\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})} - {\|\tilde{{\boldsymbol{\beta}}}^{(\lambda_{m-1})}\|}_1\right)\right). \end{align*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{neliminated_leukemia_y100f10.png} \caption{Feature remaining rate (Leukemia).} \label{fig:feature_remaininf_rate_leukemia} \end{subfigure} \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{pathtime_leukemia_lamb100.png} \caption{Computational time (Leukemia).} \label{fig:time_path_leukemia} \end{subfigure} \begin{subfigure}[t]{0.325\textwidth} \centering \includegraphics[width=0.99\textwidth]{pathtime_20news_lamb100.png} \caption{Computational time (20newsgroup).} \label{fig:time_path_20news} \end{subfigure} \caption{(a): Feature remaining rate at each iteration for Lasso on Leukemia (dense; $n=72$, $d=7128$). (b): Average computational time of the Lasso path on subsampled Leukemia (dense; $n=50$, $d=7128$). (c): Average computational time of the Lasso path on subsampled 20newsgroup (sparse; $n=800$, $d=18571$).} \vspace{-.2in} \end{figure*} \begin{table*}[t] \begin{center} \caption{Logarithm of acceleration ratio for Leukemia and 20newsgroup.
The smaller values indicate a greater speed-up.} \label{tb:logtimeratio} \begin{tabular}{l|l|c|c|c|c|} \hline Dataset &$-\log\epsilon$ & Dynamic Sasvi & Dynamic EDPP & Gap Safe Dome & Gap Safe Sphere \\ \hline \multirow{3}{*}{Leukemia}& 4 & $-0.468\pm0.066$ & $-0.487\pm0.066$ & $-0.349\pm0.066$ & $-0.358\pm0.060$ \\ \cline{2-6} & 6 & $-0.828\pm0.072$ & $-0.838\pm0.067$ & $-0.719\pm0.074$ & $-0.725\pm0.072$ \\ \cline{2-6} & 8 & $-0.987\pm0.057$ & $-0.997\pm0.056$ & $-0.902\pm0.066$ & $-0.907\pm0.066$ \\ \hline \hline \multirow{3}{*}{20newsgroup}& 4 & $-0.338\pm0.020$ & $-0.358\pm0.025$ & $-0.274\pm0.023$ & $-0.293\pm0.024$ \\ \cline{2-6} & 6 & $-0.517\pm0.022$ & $-0.532\pm0.022$ & $-0.463\pm0.023$ & $-0.477\pm0.025$ \\ \cline{2-6} & 8 & $-0.604\pm0.023$ & $-0.617\pm0.025$ & $-0.562\pm0.026$ & $-0.572\pm0.024$ \\ \hline \end{tabular} \end{center} \end{table*} \section{Experiments} In this section, we show the efficacy of the proposed methods using real-world data. \subsection{Setup} We compared the proposed methods with Gap Safe Sphere and Gap Safe Dome \citep{fercoq2015mind,ndiaye2017gap}, which are state-of-the-art dynamic safe screening methods. All methods were run on a MacBook Air with a 1.1 GHz quad-core Intel Core i5 CPU and 16 GB of RAM. We implemented all methods in C++ using the Accelerate framework, Apple's native framework for basic numerical routines. \subsection{Number of screened variables} First, we compared the number of screened variables among the four dynamic safe screening methods. We solved the Lasso problem using the Leukemia dataset (dense data with 72 samples and 7128 features) and $\lambda=\frac{1}{100}{\|{\boldsymbol{X}}^\top{\boldsymbol{y}}\|}_\infty$. We used cyclic coordinate descent as the iterative algorithm and screened variables every 10 iterations. Figure \ref{fig:feature_remaininf_rate_leukemia} shows the ratio of the uneliminated features at each iteration.
As guaranteed theoretically, we can see that Dynamic Sasvi eliminates more variables in earlier steps than Gap Safe Dome and Gap Safe Sphere. The figure also shows that Dynamic EDPP, a relaxed version of Dynamic Sasvi, eliminated almost the same number of features as Dynamic Sasvi. \subsection{Gains in the computation of Lasso paths} Next, we compared the computation time of the path of Lasso solutions for various values of $\lambda$. Because $\lambda$ is often chosen by cross-validation in practice, computing the path of the solutions is an important task. We used $\lambda_j={100}^{-\frac{j}{99}}{\|{\boldsymbol{X}}^\top{\boldsymbol{y}}\|}_\infty$ ($j=0,\dots,99$). The iterative solver stops when the duality gap is smaller than $\epsilon(P({\boldsymbol{0}})-D({\boldsymbol{0}}))$. Note that the factor $P({\boldsymbol{0}})-D({\boldsymbol{0}})$ makes the stopping criterion independent of the data scale. We used the Leukemia dataset and the tf-idf vectorized 20newsgroup dataset (baseball versus hockey; sparse data with 1197 samples and 18571 features). We subsampled the data 50 times and ran all methods on the same 50 subsamples. The subsampled data size is 50 for Leukemia and 800 for 20newsgroup. Figures \ref{fig:time_path_leukemia} and \ref{fig:time_path_20news} show the average computation time of the Lasso path for the Leukemia and 20newsgroup datasets, respectively. For all settings, Dynamic Sasvi and Dynamic EDPP outperform Gap Safe Dome and Gap Safe Sphere. Table \ref{tb:logtimeratio} shows the average and standard deviations of the logarithm of the acceleration ratio relative to the computational time for the same subsample without screening. The proposed methods are significantly faster than the Gap Safe methods. In addition, Dynamic EDPP is slightly faster than Dynamic Sasvi because the computational cost of Dynamic EDPP screening is smaller than that of Dynamic Sasvi.
\section{Conclusion} In this paper, we proposed a framework for safe screening based on Fenchel-Rockafellar duality and derived Dynamic Sasvi and Dynamic EDPP, which are specific safe screening methods for Lasso-like problems. Dynamic Sasvi and Dynamic EDPP can be regarded as dynamic feature elimination variants of Sasvi and EDPP, respectively. We proved that Dynamic Sasvi always eliminates more features than Gap Safe Sphere and Gap Safe Dome. Dynamic EDPP is based on the sphere relaxation of the Dynamic Sasvi region and eliminates almost the same number of features as Dynamic Sasvi. We also showed experimentally that the computational costs of the proposed methods are smaller than those of Gap Safe Sphere and Gap Safe Dome.
\chapter{Arithmetic subgroups and modular symbols} Let $d \ge 1$ and let $F$ be a global field of positive characteristic. We give the definition of our main object of study, an arithmetic subgroup $\Gamma \subset \GL_d(F)$. Fixing a place $\infty$ of $F$ and denoting by $F_\infty$ the completion of $F$ at $\infty$, arithmetic subgroups act on the Bruhat-Tits building $\cBT_\bullet$ of $\PGL_d(F_\infty)$. We are interested in the $(d-1)$-st Borel-Moore homology of the quotient $\Gamma \backslash \cBT_\bullet$. This is an analogue of $\Gamma\backslash \cH$, where $\cH$ is the upper half plane and $\Gamma$ is a congruence subgroup of $\SL_2(\Z)$; there, the first cohomology group is known to be related to the space of elliptic modular forms. In Section~\ref{sec:def modular symbol}, we define modular symbols in the Borel-Moore homology group. Recall that a building is a union of subsimplicial complexes called apartments. They are indexed by bases of $F_\infty^{\oplus d}$, but we restrict to those coming from bases of $F^{\oplus d}$ (we call such a basis an $F$-basis). A modular symbol is the image of the fundamental class of an apartment associated with an $F$-basis. We state our main result in Section~\ref{sec:state main theorem}. It computes a bound on the index of the subgroup generated by modular symbols in the Borel-Moore homology of the quotient. Recall that one property of an arithmetic subgroup $\Gamma$ of $\GL_d(\Q)$ is that there exists a subgroup $\Gamma' \subset \Gamma$ of finite index such that $\Gamma'$ is torsion free. In characteristic $p>0$, this is not expected to hold true. Instead, there always exists a $p'$-torsion-free subgroup (i.e., one in which every torsion element has $p$-power order) of finite index. We compute a bound on the index in Section~\ref{sec:p'torsion}. \section{Arithmetic subgroups} \label{sec:4.1.1} We define arithmetic subgroups. The main examples of arithmetic subgroups are the congruence subgroups.
We verify the basic properties (see (1)-(5) below) in Section~\ref{sec:def Gamma}. We then see that an arithmetic subgroup acts on the Bruhat-Tits building with finite stabilizer groups, and that the map from an apartment to the quotient is locally finite (so that the pushforward map on Borel-Moore homology groups is defined). \subsection{} \label{sec:def arithmetic} Let us give the setup. (This setup is standard when considering Drinfeld modules.) We let $F$ denote a global field of positive characteristic $p>0$. Let $C$ be a proper smooth curve over a finite field whose function field is $F$. Let $\infty$ be a place of $F$ and let $K=F_\infty$ denote the local field at $\infty$. We let $A=H^0(C \setminus \{\infty\}, \cO_C)$. Here we identify closed points of $C$ with places of $F$. We write $\wh{A}=\varprojlim_I A/I$, where the limit is taken over the nonzero ideals of $A$. We let $\A^\infty=\wh{A} \otimes_A F$ denote the ring of finite adeles. \begin{definition} A subgroup $\Gamma \subset \GL_d(K)$ is called an arithmetic subgroup if there exists a compact open subgroup $\bK^\infty \subset \GL_d(\A^\infty)$ such that $\Gamma=\GL_d(F) \cap \bK^\infty \subset \GL_d(K)$. \end{definition} \subsection{Congruence subgroups} \label{sec:congruence} Let $I \subset A$ be a nonzero ideal. Set $\Gamma_I= \Ker [ \GL_d(A) \to \GL_d(A/I) ] $, where the map is the canonical map induced by the projection $A \to A/I$. Then $\Gamma_I$ is an arithmetic subgroup because we can take $\bK^\infty$ to be $\Ker [\GL_d(\wh{A}) \to \GL_d(\wh{A}/I \wh{A}) ]$. These are also known as congruence subgroups. \subsection{} Let $\Gamma$ be an arithmetic subgroup. Then $\Gamma \cap \mathrm{SL}_d(F)=\Gamma \cap \mathrm{SL}_d(K)$ is a subgroup of $\Gamma$ of finite index, and is an $S$-arithmetic subgroup of $\mathrm{SL}_d$ over $F$ for $S=\{\infty\}$ in the sense of Harder \cite{Harder}. \subsection{} \label{sec:def Gamma} Let $\Gamma \subset \GL_d(K)$ be a subgroup.
We consider the following Conditions (1) to (5) on $\Gamma$. \begin{enumerate} \item $\Gamma \subset \GL_d(K)$ is a discrete subgroup, \item $\{\det(\gamma) \,|\, \gamma \in \Gamma \} \subset O_\infty^\times$ where $O_\infty$ is the ring of integers of $K$, \item $\Gamma \cap Z(\GL_d(K))$ is finite. \end{enumerate} Let $A_\bullet=A_{v_1,\dots,v_d,\bullet}$ denote the apartment corresponding to a basis $v_1,\dots, v_d\in K^{\oplus d}$ (defined in Section~\ref{sec:521}). \begin{enumerate} \setcounter{enumi}{3} \item For any apartment $A_\bullet=A_{v_1,\dots, v_d,\bullet}$ with $v_1, \dots, v_d \in F^{\oplus d}$, the composition $A_\bullet \hookrightarrow \cB\cT_\bullet \to \Gamma \bsl \cB\cT_\bullet$ is quasi-finite, that is, the inverse image of any simplex by this map is a finite set. \item (Harder) The cohomology group $H^{d-1}(\Gamma, \Q)$ is a finite dimensional $\Q$-vector space. \end{enumerate} We verify below that the conditions above are satisfied for any arithmetic subgroup~$\Gamma$. \subsection{} \label{sec:finite stab} These properties are used in the following way. The condition (2) implies that each element in the stabilizer group of a simplex fixes the vertices of the simplex. Under the condition (1), the condition (3) implies that the stabilizer of a simplex is finite. This implies that the $\Q$-coefficient group homology of $\Gamma$ and the homology of $\Gamma \bsl |\cB\cT_\bullet|$ are isomorphic. The condition (4) will be used to define a class in Borel-Moore homology of $\Gamma\bsl\cB\cT_\bullet$ starting from an apartment (Section \ref{sec:def modular symbol}). We do not use Condition (5) in this form but we record it here because it is related to the finite dimensionality of the space of cusp forms of fixed level. \subsection{} For the rest of this subsection, we give a proof that these Conditions hold true. \begin{prop} For an arithmetic subgroup $\Gamma$, Conditions (1)-(5) of Section~\ref{sec:def Gamma} hold. 
\end{prop} \begin{proof} This follows from the lemmas below. \end{proof} \begin{lem} For an arithmetic subgroup $\Gamma$, Conditions (1), (2), and (3) hold. \end{lem} \begin{proof} Condition (1) holds trivially. We note that there exists an element $g \in \GL_d(\A^\infty)$ such that $g \bK^\infty g^{-1} \subset \GL_d(\wh{A})$. Since $\det(\gamma) \in F^\times \cap \wh{A}^\times \subset O_\infty^\times$ for $\gamma \in \Gamma$, (2) holds. Because $F^\times \cap \GL_d(\wh{A})$ is finite, (3) holds. \end{proof} \begin{lem} \label{lem:arithmetic} Let $\Gamma$ be an arithmetic subgroup. Then (4) holds. \end{lem} \begin{proof} We show that the inverse image of each simplex of $\Gamma\bsl \cB\cT_\bullet$ under the map in (4) is finite. For $0 \le i \le d-1$, the set of $i$-dimensional simplices $\cB\cT_i$ is identified (see Section~\ref{sec:6.1.2} for the identification) with the coset space $\GL_d(K)/\wt{\bK}_\infty$ for an open subgroup $\wt{\bK}_\infty \subset \GL_d(K)$ which contains $K^\times \bK_\infty$ as a subgroup of finite index for some compact open subgroup $\bK_\infty \subset \GL_d(K)$. Let $T \subset \GL_d$ denote the diagonal maximal torus. The set of simplices of $A_\bullet$ of fixed dimension is identified with the image of the map \[ \coprod_{w \in S_d} gwT(K) \to \GL_d(K)/\wt{\bK}_\infty \] for some $g \in \GL_d(F)$. Since $S_d$ is a finite group, it then suffices to show that for any $w \in S_d$, the map $$ \mathrm{Image}[gwT(K) \to \GL_d(K)/K^\times \bK_\infty ] \to \Gamma \bsl \GL_d(K) /K^\times \bK_\infty $$ is quasi-finite. The inverse image under the last map of the image of $gwt \in gwT(K)$ is isomorphic to the set \[ \begin{array}{ll} \{\gamma \in\Gamma\,|\, \gamma gwt \in gwT(K) K^\times \bK_\infty\} &=\Gamma \cap gwT(K) K^\times \bK_\infty (gwt)^{-1}\\ &=\Gamma \cap (gw)T(K) t \bK_\infty t^{-1} (gw)^{-1}.
\end{array} \] Hence, if we let $g'=gw$ and $\bK'_\infty =t\bK_\infty t^{-1}$, this set equals \[ \begin{array}{ll} \Gamma\cap g'T(K) \bK'_\infty g'^{-1} & =\GL_d(F) \cap (\bK^\infty \times g'T(K) \bK'_\infty g'^{-1}) \\ &= g'(\GL_d(F) \cap (g'^{-1} \bK^\infty g' \times T(K) \bK'_\infty)) g'^{-1}. \end{array} \] The finiteness of this set is proved in the following lemma. \end{proof} \begin{lem} For any compact open subgroup $\bK \subset \GL_d(\A)$, the set $\GL_d(F) \cap T(K) \bK$ is finite. \end{lem} \begin{proof} Let $U=T(K) \cap \bK$. Then $T(O_\infty) \supset U$ and $U$ is of finite index in $T(O_\infty)$. Note that there exist a non-zero ideal $I\subset A$ and an integer $N$ such that $\bK \subset I^{-1} \mathrm{Mat}_d(\wh{A})\times \varpi_\infty^{-N}\mathrm{Mat}_d(O_\infty)$, where $\varpi_\infty$ is a uniformizer in $O_\infty$. Let $\alpha:T(K)/U \to T(K)/T(O_\infty) \cong \Z^{\oplus d}$ be the (quasi-finite) map induced by the inclusion $U \subset T(O_\infty)$. For $h \in T(K)$, we write $(h_1,\dots, h_d)=\alpha(h)$. Then for $i=1,\ldots,d$, the $i$-th row of $h \bK$ is contained in $(I^{-1}\wh{A} \times \varpi_\infty^{-N}\varpi_\infty^{h_i}O_\infty)^{\oplus d}$. Hence, for sufficiently large $h_i$, the intersection $h \bK \cap \GL_d(F)$ is empty. We then have, for sufficiently large $N'$, \begin{equation}\label{lem q-finite} \GL_d(F)\cap T(K) \bK =\coprod_{h\in T(K)/U, \atop h_1, \ldots, h_d \le N'} \GL_d(F) \cap h\bK. \end{equation} The adelic norm of the determinant of an element of $\GL_d(F)$ is $1$ by the product formula, and the determinant of an element of the compact group $\bK$ also has adelic norm $1$; hence the adelic norm of the determinant of an element of $h \bK$ is $|\det h\,|_\infty = q_\infty^{-\sum_{i=1}^d h_i}$, where $q_\infty$ denotes the cardinality of the residue field at $\infty$. So (\ref{lem q-finite}) equals \[\displaystyle \coprod_{h\in T(K)/U, h_i \le N', \sum h_i=0} \GL_d(F) \cap h \bK. \] The index set of the disjoint union above is finite since $\alpha$ is quasi-finite, and $\GL_d(F) \cap h \bK$ is finite since $\GL_d(F)$ is discrete and $h \bK$ is compact. The claim follows. \end{proof} \begin{lem} \label{lem:Harder} Let $\Gamma$ be an arithmetic subgroup. Then (5) holds.
\end{lem} \begin{proof} This follows from \cite[p.136, Satz 2]{Harder}. \end{proof} \section{$p'$-torsion free subgroups} \label{sec:p'torsion} For an arithmetic subgroup (or congruence subgroup) $\Gamma$ of $\SL_d(\Z)$, there always exists a torsion-free subgroup $\Gamma' \subset \Gamma$ of finite index. In our setting of positive characteristic $p$, an arithmetic subgroup always contains $p$-torsion. We therefore say that an arithmetic subgroup is $p'$-torsion free if every torsion element has $p$-power order. A similar fact holds: any arithmetic subgroup contains a $p'$-torsion-free subgroup of finite index. \subsection{} Let $p>0$ denote the characteristic of $F$. \begin{definition} We say that a subgroup $\Gamma \subset \GL_d(K)$ is $p'$-torsion free if any element of $\Gamma$ of finite order is of order a power of $p$. \end{definition} \begin{lem} \label{lem:quasi-neat criterion1} Let $\Gamma \subset \GL_d(F)$ be an arithmetic subgroup and let $v \neq \infty$ be a place of $F$. Let $F_v$ denote the completion of $F$ at $v$, and $\cO_v \subset F_v$ its ring of integers. Suppose that there exists an element $g_v \in \GL_d(F_v)$ such that the image of $\Gamma$ in $\GL_d(F_v)$ is contained in $g_v (I_d + \wp_v M_d(\cO_v)) g_v^{-1}$, where $I_d$ is the identity $d$-by-$d$ matrix, $\wp_v$ is the maximal ideal of $\cO_v$, and $M_d(\cO_v)$ denotes the ring of $d$-by-$d$ matrices with coefficients in $\cO_v$. Then $\Gamma$ is $p'$-torsion free. \end{lem} \begin{proof} It suffices to show that any matrix $h \in I_d + \wp_v M_d(\cO_v)$ of finite order is of order a power of $p$. Let us fix an algebraic closure $\overline{F}_v$ of $F_v$ and let $\overline{\cO}_v$ denote its ring of integers. Then $\overline{\cO}_v$ is a valuation ring. Let $h \in I_d + \wp_v M_d(\cO_v)$ be an element of finite order.
Then any eigenvalue $\alpha$ of $h$ in $\overline{F}_v$ belongs to $\overline{\cO}_v$ and is congruent to $1$ modulo the maximal ideal of $\overline{\cO}_v$. Since $h$ is of finite order, $\alpha$ is a root of unity. As the roots of unity of order prime to $p$ inject into the residue field of $\overline{\cO}_v$, the order of $\alpha$ is a power of $p$; since a $p$-power root of unity equals $1$ in characteristic $p$, we obtain $\alpha=1$. Hence $(h -I_d )^N = 0$ for sufficiently large $N$. Choose a power $q$ of $p$ satisfying $q \ge N$. Then $h^q - I_d = (h-I_d)^q =0$. This shows that the order of $h$ is a power of $p$. \end{proof} \begin{cor}\label{cor:quasi-neat criterion2} Let $\Gamma \subset \GL_d(F)$ be an arithmetic subgroup and let $v \neq \infty$ be a place of $F$. Suppose that, as a subgroup of $\GL_d(F_v)$, the group $\Gamma$ is contained in a pro-$p$ open compact subgroup $\bK_v$ of $\GL_d(F_v)$. Then $\Gamma$ is $p'$-torsion free. \end{cor} \begin{proof} Let us choose an open subgroup $\bK'_v$ of $\bK_v \cap (I_d + \wp_v M_d(\cO_v))$ such that $\bK'_v$ is a normal subgroup of $\bK_v$. Then by Lemma \ref{lem:quasi-neat criterion1}, $\Gamma' = \Gamma \cap \bK'_v$ is $p'$-torsion free. Since $\Gamma'$ is equal to the kernel of the composite $\Gamma \inj \bK_v \surj \bK_v/\bK'_v$ and $\bK_v/\bK'_v$ is a finite $p$-group, $\Gamma'$ is a normal subgroup of $\Gamma$ and the quotient $\Gamma/\Gamma'$ is a finite $p$-group. It follows that $\Gamma$ is $p'$-torsion free. \end{proof} \begin{cor} Let $I \subset A$ be a nonzero ideal. Then $\Gamma_I$ (see Section~\ref{sec:congruence}) is $p'$-torsion free. \end{cor} \begin{proof} Let $v$ be a prime which divides $I$. Then $\Gamma_I$ is contained in the pro-$p$ compact open subgroup $I_d + \wp_v M_d(\cO_v)$ of $\GL_d(F_v)$, where $I_d$ is the identity matrix. Hence the claim follows from Corollary~\ref{cor:quasi-neat criterion2}.
\end{proof} \subsection{} As the compact open subgroups \[ \Ker[\GL_d(\widehat{A}) \to \GL_d(\widehat{A}/J\widehat{A})], \] where $J$ runs over the nonzero ideals of $A$, form a fundamental system of neighborhoods of the identity matrix $I_d$, an arithmetic subgroup $\Gamma$ contains some congruence subgroup $\Gamma_I$, and $\Gamma_I$ is of finite index in $\Gamma$. \section{Arithmetic quotients of the Bruhat-Tits building} \label{sec:explicit beta2} Let us define the simplicial complex $\Gamma \bsl \cBT_\bullet$ for an arithmetic subgroup $\Gamma$ and check that the canonical quotient map is well defined. \subsection{} We need a lemma. \begin{lem} \label{lem:stabilizer} Let $i \ge 0$ be an integer, let $\sigma \in \cBT_i$ and let $v,v' \in V(\sigma)$ be two vertices with $v \neq v'$. Suppose that an element $g \in \GL_d(K)$ satisfies $|\det\, g|_\infty =1$. Then we have $gv \neq v'$. \end{lem} \begin{proof} Let $\wt{\sigma}$ be an element $(L_j)_{j \in \Z}$ in $\wt{\cBT}_i$ such that the class of $\wt{\sigma}$ in $\cBT_i$ is equal to $\sigma$. There exist two integers $j,j' \in \Z$ such that $v$ and $v'$ are the classes of $L_j$ and $L_{j'}$, respectively. Assume that $g v =v'$. Then there exists an integer $k \in \Z$ such that $L_j g^{-1} = \varpi_\infty^{k} L_{j'} = L_{j' + (i+1) k}$. Let us fix a Haar measure $d\mu$ on the $K$-vector space $V_\infty=K^{\oplus d}$. As is well-known, the push-forward of $d\mu$ with respect to the automorphism $V_\infty \to V_\infty$ given by right multiplication by $\gamma$ is equal to $|\det\, \gamma|_\infty^{-1} d\mu$ for every $\gamma \in \GL_d(K)$. Since $|\det\, g|_\infty =1$, it follows from the equality $L_j g^{-1} = L_{j'+(i+1)k}$ that the two $\cO_\infty$-lattices $L_j$ and $L_{j'+(i+1)k}$ have the same volume with respect to $d\mu$. Hence we have $j=j'+(i+1)k$, which implies $L_j = \varpi_\infty^k L_{j'}$. It follows that the class of $L_j$ in $\cBT_0$ is equal to the class of $L_{j'}$, which contradicts the assumption $v \neq v'$.
\end{proof} \subsection{Quotients and the canonical maps} Let $\Gamma \subset \GL_d(K)$ be an arithmetic subgroup. It follows from Lemma~\ref{lem:stabilizer} (using Condition (2) of Section~\ref{sec:def Gamma}) that for each $i \ge 0$ and for each $\sigma \in \cBT_i$, the image of $V(\sigma)$ under the surjection $\cBT_0 \surj \Gamma \bsl \cBT_0$ is a subset of $\Gamma \bsl \cBT_0$ with cardinality $i+1$. We denote this subset by $V(\cl(\sigma))$, since it is easily checked that it depends only on the class $\cl(\sigma)$ of $\sigma$ in $\Gamma \bsl \cBT_i$. Thus the collection $\Gamma \bsl \cBT_\bullet =(\Gamma \bsl \cBT_i)_{i \ge 0}$ has a canonical structure of a simplicial complex such that the collection of the canonical surjections $\cBT_i \surj \Gamma \bsl \cBT_i$ forms a map of simplicial complexes $\cBT_\bullet \surj \Gamma \bsl \cBT_\bullet$. \section{Definition of modular symbols} \label{sec:def modular symbol} We define modular symbols here. \subsection{} Notice that apartments are defined for any basis of $F_\infty^{\oplus d}$. However, the ones of interest in number theory are those associated with the $F$-bases (i.e., bases of $F^{\oplus d}$ regarded as bases of $F_\infty^{\oplus d}$). \subsection{} Let $v_1,\dots, v_d$ be an $F$-basis. We consider the composite \begin{equation} \label{quasifinite} A_\bullet \xto{\iota_{v_1,\ldots,v_d}} \cBT_\bullet \to \Gamma \bsl \cBT_\bullet. \end{equation} Condition (4) implies that the map (\ref{quasifinite}) is a finite map of simplicial complexes in the sense of Section~\ref{sec:quasifinite}. It follows that the map (\ref{quasifinite}) induces a homomorphism $$ H^\BM_{d-1}(A_\bullet, \Z) \to H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z).
$$ We let $\beta_{v_1,\ldots,v_d} \in H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z)$ denote the image under this homomorphism of the element $\beta \in H^\BM_{d-1}(A_\bullet, \Z)$ introduced in Section~\ref{sec:fundamental class}. We call this the class of the apartment $A_{v_1,\dots, v_d,\bullet}$. \begin{definition} We let $\MS(\Gamma)_\Z \subset H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z)$ denote the submodule generated by the classes $\beta_{v_1,\ldots,v_d}$ as $(v_1, \dots, v_d)$ runs over the set of ordered $F$-bases. \end{definition} \section{Statement of Main Theorem} \label{sec:state main theorem} We are ready to state our theorem. The proof begins in Chapter~\ref{ch:pf for ums} and ends in Chapter~\ref{ch:compare ms}. \begin{thm}\label{lem:apartment} Let $\Gamma \subset \GL_d(K)$ be an arithmetic subgroup. We write $\MS(\Gamma)_\Z \subset H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) $ for the submodule generated by the classes of apartments associated with $F$-bases. \begin{enumerate} \item We have \[ H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Q)=\MS(\Gamma)_\Z \otimes \Q. \] \item Suppose that $\Gamma$ is $p'$-torsion free. Set $$ e(d) = (d-2)\left(1+ \frac{(d-1)(d-2)}{2}\right). $$ Then \[ p^{e(d)}H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_\Z. \] \item Let $v \neq \infty$ be a prime of $F$, and let $F_v$ denote the completion of $F$ at $v$. Let $\bK_v$ be a pro-$p$ open compact subgroup of $\GL_d(F_v)$. Let us consider the intersection $\Gamma' = \Gamma \cap \bK_v$ in $\GL_d(F_v)$. Then \[ p^{e(d)} [\Gamma:\Gamma'] H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_\Z. \] \item Let $v_0 \neq \infty$ be a prime of $F$ such that the cardinality $q_0$ of the residue field $\kappa(v_0)$ at $v_0$ is smallest among those at the primes $v \neq \infty$. Set $N(d) = \prod_{i=1}^{d} (q_0^i-1)$. Then \[ p^{e(d)} N(d) H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_\Z. \] \item Suppose that $d=2$.
Then \[ H^\BM_{1}(\Gamma \bsl \cBT_\bullet, \Z) =\MS(\Gamma)_\Z. \] \end{enumerate} \end{thm} \chapter{Automorphic Forms with Steinberg at Infinity} Let $d \ge 1$ and let $F$ be a global field. An automorphic form for $\GL_d$ over $F$ is a $\C$-valued function on $\GL_d(F) \backslash \GL_d(\A_F)$, where $\A_F$ is the ring of adeles, satisfying certain additional conditions. Instead of studying all automorphic forms, we study a certain subset consisting of those satisfying a certain condition at a fixed place $\infty$ of $F$. We call them automorphic forms with Steinberg at infinity. In Section~\ref{sec:Laumon review}, we recall the basics and some results on automorphic forms from Laumon's book \cite{Laumon2}. The reader is also referred to \cite{BJ} and Cogdell's lectures \cite{CKM}. In Section~\ref{sec:Steinberg autom}, we give the definition of the automorphic forms of interest to us. These are the automorphic forms that appear when studying the cohomology of Drinfeld modular varieties. We can apply known results to determine (Proposition~\ref{prop:66_3}) the direct sum decomposition, as an automorphic representation, of the space of cusp forms that are Steinberg at infinity. For the space of all automorphic forms that are Steinberg at infinity, we have Theorem~\ref{7_PROP1}. This does not seem to follow from previously known results. We remark that this space is not contained in the space of square-integrable automorphic forms. The proof of this theorem is given in Chapter~\ref{ch:pf of Thm17}. \section{Automorphic forms with values in $\C$} \label{sec:Laumon review} Let $F$ be a global field of positive characteristic. We fix a place $\infty$ of $F$. We write $\A$ for the ring of adeles of $F$. \subsection{} The contents of this section are summarized in the following two diagrams.
We define each object and explain the inclusions: \\ \begin{tikzcd} C_{\C,\chi}^\infty \arrow[r, phantom, "\subset"] & C_{\C}^\infty \arrow[r, phantom, "\subset"] & C_{\C}, \\ C_{\C, \mathrm{cusp}}^\infty \arrow[r, phantom, "\subset"] & C_{\C}^\infty \arrow[u, sloped, phantom, "\subset"] \end{tikzcd} \begin{tikzcd} \cA_{\C,\chi}^\infty \arrow[r, phantom, "\subset"] & \cA_{\C}^\infty \\ \cA_{\C, \mathrm{cusp}}^\infty \arrow[r, phantom, "\subset"] & \cA_{\C}^\infty \arrow[u, sloped, phantom, "\subset"] \end{tikzcd} \subsection{} We set \[ C_\C=\Hom(\GL_d(F)\backslash\GL_d(\A), \C). \] This is a $\GL_d(\A)$-module, where an element $g$ acts as $(gf)(x)=f(xg)$. We set \[ C_\C^\infty=\displaystyle\bigcup_\bK C_\C^\bK, \] where $C_\C^\bK$ denotes the $\bK$-invariants, and $\bK \subset \GL_d(\A)$ runs over compact open subgroups. This is the space of smooth vectors; it is a $\GL_d(\A)$-submodule of $C_\C$. Let $\chi_\C^\infty$ denote the abelian group of smooth complex characters of $Z(F)\backslash Z(\A)$ \cite[p.4]{Laumon2}. For $\chi \in \chi_\C^\infty$, we denote by \[ C^\infty_{\C,\chi}=C^\infty_\chi(\GL_d(F)\backslash \GL_d(\A), \C) \subset C_\C^\infty \] the $\C$-vector subspace of the functions $\varphi \in C^\infty_\C$ such that \[ \varphi(zm)=\chi(z)\varphi(m) \] for $z\in Z(\A)$, $m \in \GL_d(\A)$ (\cite[p.9, 9.1.9]{Laumon2}). \subsection{} We let \[ \cA_\C \subset C_\C^\infty \] denote the space of automorphic forms. We use the definitions from \cite[p.2, 9.1]{Laumon2}. Then we set \cite[p.12, 9.1.14]{Laumon2} \[ \cA_{\C, \chi}=\cA_\C \cap C_{\C, \chi}^\infty. \] Let $C^\infty_{\C, \mathrm{cusp}} \subset C^\infty_\C$ denote the $\C$-vector subspace of cuspidal functions (see \cite[p.15, 9.2.3]{Laumon2} for the definition). We write \[ \cA_{\C, \mathrm{cusp}}=\cA_\C \cap C^\infty_{\C, \mathrm{cusp}} \subset C^\infty_\C \] for the space of cusp forms.
For $\chi \in \chi_\C^\infty$, we set \[ \cA_{\C, \mathrm{cusp}, \chi}=\cA_{\C, \mathrm{cusp}} \cap \cA_{\C, \chi}. \] \subsection{} Let $\chi \in \chi_\C^\infty$ be a unitary character. We let $\cA^2_{\C, \chi} \subset \cA_{\C, \chi}$ denote the $\C$-vector subspace of square-integrable automorphic forms (\cite[p.24, 9.3]{Laumon2}). Lemma 9.3.3 of \cite{Laumon2} says \[ \cA_{\C, \chi, \mathrm{cusp}} \subset \cA^2_{\C, \chi}. \] \section{Automorphic forms with Steinberg at infinity} \label{sec:Steinberg autom} \subsection{} Let $\St_{d,\C}$ denote the Steinberg representation. It is an admissible representation of $\GL_d(F_\infty)$. Later we will also use the $\Q$-vector space version $\St_{d,\Q}$. \subsubsection{} Let $\cI \subset \GL_d(\cO_\infty)$ denote the Iwahori subgroup. It is known that the Iwahori fixed part $\St_{d,\C}^\cI$ of the Steinberg representation is a one-dimensional vector space. Take a nonzero element $v \in \St_{d,\C}^\cI$. For a $\GL_d(F_\infty)$-module $M$, we define $M_\St$ to be the image of the evaluation-at-$v$ map: \[ M_\St= \mathrm{Image} [\Hom_{\GL_d(F_\infty)} (\St_{d,\C}, M) \to M] \subset M. \] By the one-dimensionality, $M_\St$ does not depend on the choice of $v$. We have the following lemma. \begin{lem} We have \\ (1) $C_{\C,\St}^\infty=\cA_{\C,\St}$, \\ (2) $C_{\C,c, \St}^\infty=\cA_{\C, \mathrm{cusp}, \St}$. \end{lem} \begin{proof}[Proof of (1)] The $\GL_d(F_\infty)$-representation generated by a vector of $C^\infty_{\C, \St}$ is the Steinberg representation. Since the Steinberg representation is an admissible representation, by definition of automorphic form (see, for example, \cite[Prop. 4.5, p.196]{BJ}), the vector is an automorphic form. \end{proof} \subsubsection{} (See \cite[p.35]{Laumon2}) Let $\chi \in \chi^\infty_\C$ and let us assume that $\chi$ is trivial on $F_\infty^\times$. Then $\chi$ is automatically of finite order (since $F_\infty^\times F^\times\backslash \A^\times$ is compact), and therefore unitary.
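The finite-order claim can be spelled out as follows; this short argument is standard, and the notation $Q$, $U$ is ours, not taken from \cite{Laumon2}.

```latex
% \chi is trivial on F_\infty^\times, so it factors through the quotient
%     Q := F_\infty^\times F^\times \backslash \A^\times,
% which is compact and totally disconnected. Smoothness of \chi yields an open
% subgroup U \subset Q with \chi(U) = \{1\}, and Q/U is finite by compactness.
\[
\chi \colon Q \twoheadrightarrow Q/U \longrightarrow \C^\times,
\qquad
\chi(Q) \subset \mu_n(\C), \quad n = \sharp(Q/U).
\]
% Hence \chi has finite order; in particular |\chi| = 1, i.e. \chi is unitary.
```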
Let $f \in \cA_{\C, \mathrm{cusp},\St}$ and let $\chi$ be its central quasi-character. Then $\chi(F_\infty^\times)=1$ because $F_\infty^\times$ acts trivially on the Steinberg representation. Therefore, as seen above, $\chi$ is unitary. We write $\cA_{\C, \mathrm{cusp}, \St, \chi} \subset \cA_{\C, \mathrm{cusp}, \St}$ for the subspace consisting of those cusp forms whose unitary central character is $\chi$. \subsubsection{} Let \[ \cA_{\chi, \mathrm{disc}}^2 \subset \cA^2_\chi \] denote the discrete spectrum (see \cite[9.3.4, p.25]{Laumon2}). For a quasi-character $\chi \in \chi_\C^\infty$ such that $\chi(F_\infty^\times)=1$ (hence $\chi$ is unitary), Corollary 9.5.6 of \cite[p.36]{Laumon2} implies \[ \cA_{\C, \mathrm{cusp}, \chi, \St} =\cA^2_{\chi, \mathrm{disc}, \St}. \] \subsubsection{} Let $C_{\C,c}^\infty \subset C_{\C}^\infty$ denote the $\C$-vector subspace of compactly supported functions. Then Harder's theorem (see \cite[Theorem 9.2.6, p.16]{Laumon2}) implies that $\cA_{\C, \mathrm{cusp}} \subset C_{\C,c}^\infty$. The elements of $C^\infty_{\C,c, \St}$ are automorphic and square-integrable, hence $C^\infty_{\C,c, \St} \subset \cA^2_{\mathrm{disc}}$. Now, for a unitary character $\chi$, let us write $C^\infty_{\C,c,\chi, \St}=C^\infty_{\C,c,\St} \cap \cA^2_\chi$. \begin{proof}[Proof of (2)] From the discussions above, we obtain \[ \cA_{\C, \mathrm{cusp}, \St, \chi} \subset C_{\C,c,\chi, \St}^\infty \subset \cA^2_{\chi, \mathrm{disc}, \St} = \cA_{\C, \mathrm{cusp}, \St, \chi}. \] This proves (2). \end{proof} \section{Admissibility} The admissibility follows from a result of Harder. \begin{prop} \label{prop:admissible} The vector spaces $\cA_{\C, \mathrm{cusp},\St}$ and $\cA_{\C,\St}$ are admissible representations of $\GL_d(\A)$. \end{prop} \begin{proof} Because of the condition at $\infty$, we can use the result of Harder \cite[p.198, Proposition 5.2]{BJ} to see that $\cA_{\C, \St}$ is admissible.
Then the claim follows for the subspace $\cA_{\C, \mathrm{cusp}, \St}$. \end{proof} \section{Decompositions} \subsection{} There are well-known theorems on the structure of the space of cusp forms, which in turn imply the following. \begin{prop} \label{prop:66_3} As a representation of $\GL_d(\A^\infty)$, $$ \cA_{\C, \mathrm{cusp}, \St} \cong \bigoplus_\pi \pi^\infty, $$ where $\pi = \pi^\infty \otimes \pi_\infty$ runs over (the isomorphism classes of) the irreducible cuspidal automorphic representations of $\GL_d(\A)$ such that $\pi_\infty$ is isomorphic to the Steinberg representation of $\GL_d(F_\infty)$. \end{prop} \begin{proof} The theorems to use are \cite[Theorem 9.2.14, p.22]{Laumon2} due to Gelfand and Piatetski-Shapiro, and the multiplicity one theorem \cite[Remark 9.2.15, p.22]{Laumon2} due to Shalika. \end{proof} \subsection{} We prove the following structure theorem for $\cA_{\C,\St}$ in Chapter~\ref{ch:pf of Thm17}. \begin{thm}\label{7_PROP1} Let $\pi = \pi^\infty \otimes \pi_\infty$ be an irreducible smooth representation of $\GL_d(\A)$ such that $\pi^\infty$ appears as a subquotient of $\cA_{\C,\St}$. Then there exist an integer $r \ge 1$, a partition $d=d_1 + \cdots + d_r$ of $d$, and irreducible cuspidal automorphic representations $\pi_i$ of $\GL_{d_i}(\A)$ for $i=1,\ldots,r$ which satisfy the following properties: \begin{itemize} \item For each $i$ with $1 \le i \le r$, the component $\pi_{i,\infty}$ at $\infty$ of $\pi_i$ is isomorphic to the Steinberg representation of $\GL_{d_i}(F_\infty)$. \item Let us write $\pi_i = \pi_i^\infty \otimes \pi_{i,\infty}$. Let $P \subset \GL_d$ denote the standard parabolic subgroup corresponding to the partition $d=d_1 + \cdots + d_r$. Then $\pi^\infty$ is isomorphic to a subquotient of the unnormalized parabolic induction $\Ind_{P(\A^\infty)}^{\GL_d(\A^\infty)} \pi_1^\infty \otimes \cdots \otimes \pi_r^\infty$.
\end{itemize} Moreover, for any subquotient $H$ of $\cA_{\C,\St}$ which is of finite length as a representation of $\GL_d(\A^\infty)$, the multiplicity of $\pi$ in $H$ is at most one. \end{thm} \chapter{Proof of Theorem \ref{7_PROP1}} \label{ch:pf of Thm17} We give a proof of Theorem~\ref{7_PROP1}. In Section \ref{sec:locfree}, we introduce some terminology for locally free $\cO_C$-modules of rank $d$ and then describe the sets of simplices of $X_{\bK,\bullet}$ in terms of chains of locally free $\cO_C$-modules of rank $d$. In Section \ref{sec:Xsigma}, we follow Section 4 of \cite{Gra} and define certain subsimplicial complexes of $X_{\bK, \bullet}$. \section{Chains of locally free $\cO_C$-modules} \label{sec:locfree} \subsection{} Let $\eta : \Spec F \to C$ denote the generic point of $C$. \begin{definition} For each $g \in \GL_d(\A^\infty)$ and an $\cO_\infty$-lattice $L_\infty \subset \cO_\infty^{\oplus d}$, we denote by $\cF[g,L_{\infty}]$ the $\cO_C$-submodule of $\eta_* F^{\oplus d}$ characterized by the following properties: \begin{itemize} \item $\cF[g,L_{\infty}]$ is a locally free $\cO_C$-module of rank $d$. \item $\Gamma(\Spec A, \cF[g,L_{\infty}])$ is equal to the $A$-submodule $\wh{A}^{\oplus d} g^{-1} \cap F^{\oplus d}$ of $F^{\oplus d} = \Gamma(\Spec A,\eta_* F^{\oplus d})$. \item Let $\iota_\infty$ denote the morphism $\Spec \cO_\infty \to C$. Then $\Gamma(\Spec \cO_\infty, \iota_\infty^* \cF[g,L_{\infty}])$ is equal to the $\cO_\infty$-submodule $L_{\infty}$ of $F_\infty^{\oplus d} = \Gamma(\Spec \cO_\infty, \iota_\infty^* \eta_* F^{\oplus d})$. \end{itemize} \end{definition} \subsection{} Let $\cF$ be a locally free $\cO_C$-module of rank $d$. Let $I \subset A$ be a non-zero ideal. We regard the $A$-module $A/I$ as a coherent $\cO_C$-module of finite length. \begin{definition} A level $I$-structure on $\cF$ is a surjective homomorphism $\cF \to (A/I)^{\oplus d}$ of $\cO_C$-modules.
\end{definition} Let $\bK^\infty_I \subset \GL_d(\wh{A})$ be the kernel of the homomorphism $\GL_d(\wh{A}) \to \GL_d(\wh{A}/I \wh{A})$. The group $\GL_d(A/I) \cong \GL_d(\wh{A})/\bK^\infty_I$ acts from the left on the set of level $I$-structures on $\cF$, via its left action on $(A/I)^{\oplus d}$. (We regard $(A/I)^{\oplus d}$ as an $A$-module of row vectors. The left action of $\GL_d(A/I)$ on $(A/I)^{\oplus d}$ is described as $g\cdot b = b g^{-1}$ for $g\in \GL_d(A/I)$, $b\in (A/I)^{\oplus d}$.) \begin{definition} For a subgroup $\bK \subset \GL_d(\wh{A})$ containing $\bK_I^\infty$, a level $\bK$-structure on $\cF$ is a $\bK/\bK^\infty_I$-orbit of level $I$-structures on $\cF$. \end{definition} For an open subgroup $\bK \subset \GL_d(\wh{A})$, the set of level $\bK$-structures on $\cF$ does not depend, up to canonical isomorphisms, on the choice of an ideal $I$ with $\bK_I^\infty \subset \bK$. \subsection{} Let $\bK \subset \GL_d(\wh{A})$ be an open subgroup. Let $(g,\sigma)$ be an $i$-simplex of $\wt{X}_{\bK,\bullet}$. Take a chain \[ \cdots \supsetneqq L_{-1} \supsetneqq L_0 \supsetneqq L_1 \supsetneqq \cdots \] of $\cO_{\infty}$-lattices of $F_{\infty}^{\oplus d}$ which represents $\sigma$. To $(g,\sigma)$ we associate the chain \[ \cdots \supsetneqq \cF[g,L_{-1}] \supsetneqq \cF[g,L_0] \supsetneqq \cF[g,L_1] \supsetneqq \cdots \] of $\cO_C$-submodules of $\eta_* F^{\oplus d}$. Then the set of $i$-simplices in $\wt{X}_{\bK,\bullet}$ is identified with the set of equivalence classes of chains \[ \cdots \supsetneqq \cF_{-1} \supsetneqq \cF_0 \supsetneqq \cF_1 \supsetneqq \cdots \] of locally free $\cO_C$-submodules of rank $d$ of $\eta_* \eta^* \cO_C^{\oplus d}$ with a level $\bK$-structure such that $\cF_{j-i-1}$ equals the twist $\cF_{j}(\infty)$ as an $\cO_C$-submodule of $\eta_* F^{\oplus d}$ with a level $\bK$-structure for every $j\in \Z$.
Two chains $\cdots \supsetneqq \cF_{-1} \supsetneqq \cF_0 \supsetneqq \cF_1 \supsetneqq \cdots$ and $\cdots \supsetneqq \cF'_{-1} \supsetneqq \cF'_0 \supsetneqq \cF'_1 \supsetneqq \cdots$ are equivalent if and only if there exists an integer $l$ such that $\cF_{j} = \cF'_{j+l}$ as an $\cO_C$-submodule of $\eta_* F^{\oplus d}$ with a level structure for every $j\in \Z$. \subsection{} Let $g\in \GL_d(\A^\infty)$ and let $L_\infty$ be an $\cO_\infty$-lattice of $F_\infty^{\oplus d}$. For $\gamma \in \GL_d(F)$, the two $\cO_C$-submodules $\cF[g,L_\infty]$ and $\cF[\gamma g,\gamma L_\infty]$ are isomorphic as $\cO_C$-modules. The set of $i$-simplices in $X_{\bK,\bullet}$ is identified with the set of equivalence classes of chains $\cdots \inj \cF_{1} \inj \cF_0 \inj \cF_{-1} \inj \cdots$ of injective non-isomorphisms of locally free $\cO_C$-modules of rank $d$ with a level $\bK$-structure such that the image of $\cF_{j+i+1}\to \cF_j$ equals the image of the canonical injection $\cF_{j}(-\infty)\inj \cF_j$ for every $j\in \Z$. Two chains $\cdots \inj \cF_{1} \inj \cF_0 \inj \cF_{-1} \inj \cdots$ and $\cdots \inj \cF'_{1} \inj \cF'_0 \inj \cF'_{-1} \inj \cdots$ are equivalent if and only if there exist an integer $l$ and isomorphisms $\cF_{j} \cong \cF'_{j+l}$ of $\cO_C$-modules with level structures for every $j\in \Z$ such that the diagram $$ \begin{CD} \cdots @>>> \cF_{1} @>>> \cF_0 @>>> \cF_{-1} @>>> \cdots \\ @. @V{\cong}VV @V{\cong}VV @V{\cong}VV @. \\ \cdots @>>> \cF'_{l+1} @>>> \cF'_l @>>> \cF'_{l-1} @>>> \cdots \end{CD} $$ is commutative. \subsection{The Harder-Narasimhan polygons} \label{sec:HN} We use functions $\Delta p_{\cF}$ from the theory of Harder-Narasimhan polygons. Let us recall the properties we use below. Let $\cF$ be a locally free $\cO_C$-module of rank $r$. For an $\cO_C$-submodule $\cF' \subset \cF$ (note that $\cF'$ is automatically locally free), we set $z_{\cF}(\cF') = (\rank(\cF'),\deg(\cF')) \in \Q^2$.
It is known that there exists a unique continuous function $p_{\cF} :[0,r] \to \R$ which is convex, affine on $[i-1,i]$ for $i=1,\ldots, r$, and such that the convex hull of the set $\{z_{\cF}(\cF')\ |\ \cF' \subset \cF\}$ in $\R^2$ equals $\{(x,y)\ |\ 0\le x\le r,\, y\le p_{\cF}(x) \}$. We define the function $\Delta p_{\cF}: \{1,\ldots, r-1\} \to \R$ as $\Delta p_{\cF}(i) = 2 p_{\cF}(i) - p_{\cF}(i-1) - p_{\cF}(i+1)$. Then $\Delta p_{\cF}(i) \ge 0$ for all $i$. We note that for an invertible $\cO_C$-module $\cL$, $\Delta p_{\cF \otimes \cL}$ equals $\Delta p_{\cF}$. The theory of Harder-Narasimhan filtration (\cite{HN}) implies that, if $i \in \Supp(\Delta p_{\cF}) = \{ i\ |\ \Delta p_{\cF}(i)>0 \}$, then there exists a unique $\cO_C$-submodule $\cF' \subset \cF$ satisfying $z_{\cF}(\cF')=(i,p_{\cF}(i))$. We denote this $\cO_C$-submodule $\cF'$ by $\cF_{(i)}$. The submodule $\cF_{(i)}$ has the following properties. \begin{itemize} \item If $i, j \in \Supp(\Delta p_{\cF})$ with $i\le j$, then $\cF_{(i)}\subset \cF_{(j)}$ and $\cF_{(j)}/\cF_{(i)}$ is locally free. \item If $i \in \Supp(\Delta p_{\cF})$, then $p_{\cF_{(i)}}(x) = p_{\cF}(x)$ for $x \in [0,i]$ and $p_{\cF/\cF_{(i)}}(x-i) =p_{\cF}(x) - \deg(\cF_{(i)})$ for $x \in [i,r]$. \end{itemize} \begin{lem}\label{7_diffdeg} Let $\cF$ be a locally free $\cO_C$-module of finite rank, and let $\cF'\subset \cF$ be an $\cO_C$-submodule of the same rank. Then we have $0 \le p_{\cF}(i) -p_{\cF'}(i) \le \deg(\cF)-\deg(\cF')$ for $i =1,\ldots,\rank(\cF)-1$. \end{lem} \begin{proof} Immediate from the definition of $p_{\cF}$. \end{proof} \begin{lem}\label{7_HNdiff} Let $\cF$ be a locally free $\cO_C$-module of rank $d$. Let $\cF' \subset \cF$ be an $\cO_C$-submodule of the same rank. Suppose that $\Delta p_{\cF}(i) > \deg(\cF) -\deg(\cF')$. Then we have $\cF'_{(i)} = \cF_{(i)} \cap \cF'$. \end{lem} \begin{proof} It suffices to prove that $\cF'_{(i)}\subset \cF_{(i)}$. Assume otherwise.
Let us consider the short exact sequence $$ 0\to \cF'_{(i)}\cap \cF_{(i)} \to \cF'_{(i)} \to \cF'_{(i)}/(\cF'_{(i)}\cap \cF_{(i)}) \to 0. $$ Let $r$ denote the rank of $\cF'_{(i)}\cap \cF_{(i)}$. Since $\cF'_{(i)}$ has rank $i$ and is not contained in $\cF_{(i)}$, $r$ is strictly smaller than $i$. Hence $$ \begin{array}{rl} \deg(\cF'_{(i)}) & = \deg(\cF'_{(i)}\cap \cF_{(i)}) + \deg(\cF'_{(i)}/(\cF'_{(i)}\cap \cF_{(i)})) \\ & \le p_{\cF}(r) + p_{\cF/\cF_{(i)}}(i-r) \\ & \le p_{\cF}(i) - (i-r)(p_{\cF}(i)-p_{\cF}(i-1)) + (i-r)(p_{\cF}(i+1) -p_{\cF}(i)) \\ & = \deg(\cF_{(i)}) - (i-r) \Delta p_{\cF}(i) \\ & < \deg(\cF_{(i)}) - (\deg(\cF)-\deg(\cF')). \end{array} $$ On the other hand, Lemma~\ref{7_diffdeg} shows that $\deg(\cF'_{(i)}) = p_{\cF'}(i) \ge p_{\cF}(i) - (\deg(\cF)-\deg(\cF')) = \deg(\cF_{(i)}) - (\deg(\cF)-\deg(\cF'))$. This is a contradiction. \end{proof} \section{Some subsimplicial complexes} \label{sec:Xsigma} Using the functions $\Delta p_{\cF}$ defined above, we introduce some subsimplicial complexes of $X_{\bK,\bullet}$ and of $\wt{X}_{\bK,\bullet}$. In \cite{Gra}, Grayson considered subsimplicial complexes of the building, but here we also consider subsimplicial complexes of the quotients. We introduce three spaces. \subsection{} Given a subset $\cD \subset \{1,\ldots,d-1\}$ and a real number $\alpha > 0$, we define the simplicial subcomplex $X_{\bK,\bullet}^{(\alpha),\cD}$ of $X_{\bK,\bullet}$ as follows: A simplex of $X_{\bK,\bullet}$ belongs to $X_{\bK,\bullet}^{(\alpha),\cD}$ if and only if each of its vertices is represented by a locally free $\cO_C$-module $\cF$ of rank $d$ with a level $\bK$-structure such that $\Delta p_{\cF}(i) \ge \alpha$ holds for every $i \in \cD$. Let $X_{\bK,\bullet}^{(\alpha)}$ denote the union $X_{\bK,\bullet}^{(\alpha)} = \bigcup_{\cD \neq \emptyset} X_{\bK,\bullet}^{(\alpha),\cD}$. \subsection{} Write $\cD=\{i_1, \dots, i_{r-1}\}$ with $i_1<\cdots <i_{r-1}$.
Let $\Flag_{\cD}$ denote the set $$ \Flag_{\cD} = \{ f= [0 \subset V_1 \subset \cdots \subset V_{r-1} \subset F^{\oplus d}] \ |\ \dim(V_j)=i_j \} $$ of flags in $F^{\oplus d}$. Let $\wt{X}_{\bK,\bullet}^{(\alpha),\cD}$ denote the inverse image of $X_{\bK,\bullet}^{(\alpha),\cD}$ by the morphism $\wt{X}_{\bK,\bullet} \to X_{\bK,\bullet}$. For $f = [0 \subset V_1 \subset \cdots \subset V_{r-1} \subset F^{\oplus d}]\in \Flag_{\cD}$, let $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f}$ denote the simplicial subcomplex of $\wt{X}_{\bK,\bullet}^{(\alpha),\cD}$ consisting of the simplices in $\wt{X}_{\bK,\bullet}$ whose representative $\cdots \supsetneqq \cF_{-1} \supsetneqq \cF_0 \supsetneqq \cF_1 \supsetneqq \cdots$ satisfies $\cF_{l,(i_j)} = \cF_l \cap \eta_* V_{j}$ for every $l\in\Z$, $j=1,\ldots,r-1$. Lemma~\ref{7_HNdiff} implies that, for $\alpha > (d-1)\deg(\infty)$, $\wt{X}_{\bK,\bullet}^{(\alpha),\cD}$ is decomposed into a disjoint union $\wt{X}_{\bK,\bullet}^{(\alpha),\cD} = \coprod_{f \in \Flag_{\cD}} \wt{X}_{\bK,\bullet}^{(\alpha),\cD,f}$.
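As a concrete illustration of these definitions, consider the smallest nontrivial case $d=2$, $\cD=\{1\}$. This worked example is ours and is not taken from \cite{Gra}; the notation $\cL_{\max}$ is introduced only here.

```latex
% For a rank-2 bundle \cF, the polygon p_\cF is determined by one interior value:
%     p_\cF(0) = 0, \quad p_\cF(2) = \deg(\cF), \quad p_\cF(1) = \deg(\cL_{\max}),
% where \cL_{\max} \subset \cF is a line subbundle of maximal degree. Hence
\[
\Delta p_{\cF}(1) = 2\,p_{\cF}(1) - p_{\cF}(0) - p_{\cF}(2)
                  = 2\deg(\cL_{\max}) - \deg(\cF),
\]
% and a simplex lies in X^{(\alpha),\{1\}}_{\bK,\bullet} exactly when each of its
% vertices is represented by a bundle with 2\deg(\cL_{\max}) - \deg(\cF) \ge \alpha,
% i.e. a bundle far from semistable. In this case \Flag_{\{1\}} is the set of
% F-lines V_1 \subset F^{\oplus 2}, and for \alpha > \deg(\infty) the flag
% attached to a simplex via \cF_{l,(1)} = \cF_l \cap \eta_* V_1 is the generic
% fibre of the destabilizing subbundle \cF_{(1)} = \cL_{\max}.
```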
\section{The finite adele actions} \begin{lem}\label{7_betag} For every $g \in \GL_d(\A^\infty)$ satisfying $g^{-1}\bK g \subset \GL_d(\wh{A})$, there exists a real number $\beta_g \ge 0$ such that the isomorphism $\xi_g:X_{\bK,\bullet} \xto{\cong} X_{g^{-1}\bK g,\bullet}$ sends $X_{\bK,\bullet}^{(\alpha),\cD}$ to $X_{g^{-1} \bK g,\bullet}^{(\alpha -\beta_g),\cD} \subset X_{g^{-1}\bK g,\bullet}$ for all $\alpha > \beta_g$, and for all nonempty subsets $\cD \subset \{1,\ldots,d-1 \}$. \end{lem} \begin{proof} Take two elements $a,b \in \A^{\infty \times} \cap \wh{A}$ such that both $ag$ and $bg^{-1}$ lie in $\GL_d(\A^\infty)\cap \Mat_d(\wh{A})$. Then for any $h \in \GL_d(\A^\infty)$ we have $a \wh{A}^{\oplus d}h^{-1} \subset \wh{A}^{\oplus d}g^{-1} h^{-1} \subset b^{-1} \wh{A}^{\oplus d}h^{-1}$. This implies that, for any vertex $x \in X_{\bK,0}$, if we take suitable representatives $\cF_x$, $\cF_{\xi_{g}(x)}$ of the equivalence classes of locally free $\cO_C$-modules corresponding to $x$, $\xi_{g}(x)$, then there exists a sequence of injections $\cF_x(-\divi(a)) \inj \cF_{\xi_g(x)} \inj \cF_x(\divi(b))$. Applying Lemma~\ref{7_diffdeg}, we see that there exists a real number $m_g>0$ not depending on $x$ such that $|p_{\cF_x}(i) - p_{\cF_{\xi_{g}(x)}}(i)| < m_g$ for all $i$. Hence the claim follows. \end{proof} \begin{remark} Any open compact subgroup of $\GL_d(\A^\infty)$ is conjugate to an open subgroup of $\GL_d(\wh{A})$. The set of open subgroups of $\GL_d(\wh{A})$ is cofinal in the inductive system of all open compact subgroups of $\GL_d(\A^\infty)$. Therefore, to prove Theorem~\ref{7_PROP1}, we may without loss of generality assume that the group $\bK$ is contained in $\GL_d(\wh{A})$, and we may replace the inductive limit $\varinjlim_{\bK}$ in the definition of $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},M)$ and $H_{d-1}(X_{\lim,\bullet},M)$ with the inductive limit $\varinjlim_{\bK\subset \GL_d(\wh{A})}$.
\end{remark} From now on until the end of this section, we exclusively deal with the subgroups $\bK \subset \GL_d(\A^\infty)$ contained in $\GL_d(\wh{A})$. The notation $\varinjlim_{\bK}$ henceforth means the inductive limit $\varinjlim_{\bK\subset \GL_d(\wh{A})}$. Thus the group $\GL_d(\A^\infty)$ acts on $\varinjlim_{\bK} \varinjlim_{\alpha > 0} H^{*}(X^{(\alpha),\cD}_{\bK,\bullet},\Q)$ in such a way that the exact sequence (\ref{7_cohseq}) is $\GL_d(\A^\infty)$-equivariant. \subsection{} An argument similar to that in the proof of Lemma~\ref{7_betag} shows that, for each $g \in \GL_d(\A^\infty)$ satisfying $g^{-1}\bK g \subset \GL_d(\wh{A})$, there exists a real number $\beta'_g > \beta_g$ such that the isomorphism $\wt{\xi}_g$ sends $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f}$ to $\wt{X}_{g \bK g^{-1},\bullet}^{(\alpha-\beta_g),\cD,f} \subset \wt{X}_{g \bK g^{-1},\bullet}$ for $\alpha > \beta'_g$ and for any $f \in \Flag_{\cD}$. \subsection{} For $\gamma \in \GL_d(F)$, the action of $\gamma$ on $\wt{X}_{\bK,\bullet}$ sends $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f}$ bijectively to $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,\gamma f}$. Let $f_0 = [0 \subset F^{\oplus i_1}\oplus \{0\}^{\oplus d-i_1} \subset \cdots \subset F^{\oplus i_{r-1}} \oplus \{0\}^{\oplus d -i_{r-1}} \subset F^{\oplus d}]\in \Flag_{\cD}$ be the standard flag. The group $\GL_d(F)$ acts transitively on $\Flag_{\cD}$ and the stabilizer of $f_0$ equals $P_{\cD}(F)$. Hence for $\alpha > (d-1)\deg(\infty)$, $X_{\bK,\bullet}^{(\alpha),\cD}$ is isomorphic to the quotient $P_{\cD}(F)\bsl \wt{X}_{\bK,\bullet}^{(\alpha),\cD,f_0}$. For $g \in \GL_d(\A^\infty)$, we set $$ \wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g} = \wt{X}_{\bK,\bullet}^{(\alpha),\cD,f_0} \cap (P_{\cD}(\A^\infty)g/(g^{-1}P_{\cD}(\A^\infty)g \cap \bK) \times \cBT_{\bullet}) $$ and $Y_{\bK,\bullet}^{(\alpha),\cD,g} = P_{\cD}(F)\bsl \wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g}$.
We omit the superscript $g$ on $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g}$ and $Y_{\bK,\bullet}^{(\alpha),\cD,g}$ if $g=1$. We note that, when $\bK$, $\alpha$ and $\cD$ are fixed, $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g}$ and $Y_{\bK,\bullet}^{(\alpha),\cD,g}$ depend only on the class $\overline{g} = P_{\cD}(\A^\infty)g$ of $g$ in $P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)$. By abuse of notation, we denote $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g}$ and $Y_{\bK,\bullet}^{(\alpha),\cD,g}$ by $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD,\overline{g}}$ and $Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}}$, respectively. Then we have $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f_0} = \coprod_{\overline{g} \in P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)} \wt{Y}_{\bK,\bullet}^{(\alpha),\cD,\overline{g}}$. Hence we have \begin{equation} \label{eq:decomposition} X_{\bK,\bullet}^{(\alpha),\cD} = \coprod_{\overline{g} \in P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)} Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}} \end{equation} for $\alpha > (d-1) \deg(\infty)$. \subsection{} We use a covering spectral sequence \begin{equation}\label{7_specseq} E_1^{p,q} = \bigoplus_{\sharp \cD = p+1} H^q(X^{(\alpha),\cD}_{\bK,\bullet},\Q) \Rightarrow H^{p+q}(X^{(\alpha)}_{\bK,\bullet},\Q) \end{equation} with respect to the covering $X_{\bK,\bullet}^{(\alpha)} = \bigcup_{1\le i \le d-1} X_{\bK,\bullet}^{(\alpha),\{i\}}$ of $X_{\bK,\bullet}^{(\alpha)}$. For $\alpha' \ge \alpha>0$, the inclusion $X_{\bK,\bullet}^{(\alpha'),\cD} \to X_{\bK,\bullet}^{(\alpha),\cD}$ induces a morphism of spectral sequences. Taking the inductive limit, we obtain the spectral sequence $$ E^{p,q}_1 = \bigoplus_{\sharp \cD = p+1} \varinjlim_{\alpha} H^q(X^{(\alpha),\cD}_{\bK,\bullet},\Q) \Rightarrow \varinjlim_{\alpha} H^{p+q}(X^{(\alpha)}_{\bK,\bullet},\Q). $$ For $g \in \GL_d(\A^\infty)$ satisfying $g^{-1}\bK g \subset \GL_d(\wh{A})$, let $\beta_g$ be as in Lemma~\ref{7_betag}.
Then for $\alpha >\beta_g$ the isomorphism $\xi_g :X_{\bK,\bullet}\xto{\cong} X_{g\bK g^{-1},\bullet}$ induces a homomorphism from the spectral sequence (\ref{7_specseq}) for $X_{\bK,\bullet}^{(\alpha)}$ to that for $X_{g\bK g^{-1},\bullet}^{(\alpha -\beta_g)}$. Passing to the inductive limit with respect to $\alpha$ and then passing to the inductive limit with respect to $\bK$, we obtain the left action of the group $\GL_d(\A^\infty)$ on the spectral sequence \begin{equation}\label{7_limspecseq} E^{p,q}_1 = \bigoplus_{\sharp \cD = p+1} \varinjlim_{\bK} \varinjlim_{\alpha} H^q(X^{(\alpha),\cD}_{\bK,\bullet},\Q) \Rightarrow \varinjlim_{\bK} \varinjlim_{\alpha} H^{p+q}(X^{(\alpha)}_{\bK,\bullet},\Q). \end{equation} \section{Finiteness and an application} We prove that the complement of $X_{\bK,\bullet}^{(\alpha)}$ in $X_{\bK,\bullet}$ is finite. Then we express the Borel-Moore homology and cohomology with compact support as a limit of relative homology and cohomology respectively. \begin{lem}\label{7_lem:finiteness} For any $\alpha >0$, the set of simplices in $X_{\bK,\bullet}$ not belonging to $X_{\bK,\bullet}^{(\alpha)}$ is finite. \end{lem} \begin{proof} Let $\cP$ denote the set of continuous, convex functions $p':[0,d]\to \R$ with $p'(0)=0$ such that $p'(i)\in \Z$ and $p'$ is affine on $[i-1,i]$ for $i=1,\ldots,d$. It is known that for any $r \ge 1$ and $f \in \Z$, there are only a finite number of isomorphism classes of semi-stable locally free $\cO_C$-modules of rank $r$ and degree $f$. Hence by the theory of Harder-Narasimhan filtration, for any $p' \in \cP$, the set of isomorphism classes of locally free $\cO_C$-modules $\cF$ with $p_{\cF} = p'$ is finite. Let us give an action of the group $\Z$ on the set $\cP$ by setting $(a\cdot p')(x)=p'(x)+ a\deg(\infty)x$ for $a \in \Z$ and for $p' \in \cP$. Then $p_{\cF(a \infty)}= a\cdot p_{\cF}$ for any $a \in \Z$ and for any locally free $\cO_C$-module $\cF$ of rank $d$.
For $\alpha >0$ let $\cP^{(\alpha)} \subset \cP$ denote the set of functions $p' \in \cP$ with $2p'(i)- p'(i-1)-p'(i+1) \le \alpha$ for each $i \in \{1,\ldots, d-1\}$. An elementary argument shows that the quotient $\cP^{(\alpha)}/\Z$ is a finite set, whence the claim follows. \end{proof} \subsection{} \label{sec:limit BM isom} Lemma~\ref{7_lem:finiteness} implies that $H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},\Q)$ is canonically isomorphic to the projective limit $\varprojlim_{\alpha >0} H_{d-1}(X_{\bK,\bullet},X_{\bK,\bullet}^{(\alpha)}; \Q)$ and $H_c^{d-1}(X_{\bK,\bullet},\Q)$ is canonically isomorphic to the inductive limit $\varinjlim_{\alpha >0} H^{d-1}(X_{\bK,\bullet},X_{\bK,\bullet}^{(\alpha)}; \Q)$. Thus from the (usual) long exact sequence of relative cohomology, we have an exact sequence \begin{equation}\label{7_cohseq} \varinjlim_{\alpha >0} H^{d-2}(X^{(\alpha)}_{\bK,\bullet},\Q) \to H_c^{d-1}(X_{\bK,\bullet},\Q) \to H^{d-1}(X_{\bK,\bullet},\Q) \to \varinjlim_{\alpha >0} H^{d-1}(X^{(\alpha)}_{\bK,\bullet},\Q). \end{equation} \section{Some isomorphisms} \begin{prop}\label{7_PROP1b} For $\alpha' \ge \alpha >(d-1)\deg(\infty)$, the homomorphism $H^*(X^{(\alpha)}_{\bK,\bullet},\Q) \to H^*(X^{(\alpha')}_{\bK,\bullet},\Q)$ is an isomorphism. \end{prop} \begin{lem}\label{7_contract} For any $g \in \GL_d(\A^\infty)$, the simplicial complex $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f_0} \cap (\{ g\bK \}\times \cBT_{\bullet})$ is non-empty and contractible. \end{lem} \begin{proof} Since $\wt{X}_{\bK,\bullet}^{(\alpha),\cD,f_0} \cap (\{ g\bK \}\times \cBT_{\bullet})$ is isomorphic to $\wt{X}_{\GL_d(\wh{A}),\bullet}^{(\alpha),\cD,f_0} \cap (\{ g\GL_d(\wh{A}) \}\times \cBT_{\bullet})$, we may assume that $\bK = \GL_d(\wh{A})$. We set $X=\wt{X}_{\GL_d,\GL_d(\wh{A}),\bullet}^{(\alpha),\cD,f_0} \cap (\{ g\GL_d(\wh{A}) \}\times \cBT_{\GL_d,\bullet})$. We proceed by induction on $d$, in a manner similar to that in the proof of Theorem~4.1 of \cite{Gra}.
Let $i\in \cD$ be the minimal element and set $d'=d-i$. We define the subset $\cD' \subset \{1,\ldots,d'-1 \}$ as $\cD' = \{ i' -i\ |\ i' \in \cD, i'\neq i \}$. We define $f'_0 \in \Flag_{\cD'}$ as the image of the flag $f_0$ in $F^{\oplus d}$ with respect to the projection $F^{\oplus d} \surj F^{\oplus d}/(F^{\oplus i}\oplus \{0\}^{\oplus d'}) \cong F^{\oplus d'}$. Take an element $g' \in \GL_{d'}(\A^\infty)$ such that the quotient $\wh{A}^{\oplus d} g^{-1}/ (\wh{A}^{\oplus d}g^{-1}\cap (\A^{\infty \oplus i} \oplus \{0\}^{\oplus d'}))$ equals $\wh{A}^{\oplus d'}g^{\prime -1}$ as an $\wh{A}$-lattice of $\A^{\infty \oplus d'}$. We set $X'=\wt{X}_{\GL_{d'}, \GL_{d'}(\wh{A}),\bullet}^{(\alpha),\cD',f'_0} \cap (\{ g' \GL_{d'}(\wh{A}) \}\times \cBT_{\GL_{d'},\bullet})$ if $\cD'$ is non-empty. Otherwise we set $X' = \wt{X}_{\GL_{d'}, \GL_{d'}(\wh{A}),\bullet} \cap (\{ g' \GL_{d'}(\wh{A}) \}\times \cBT_{\GL_{d'},\bullet})$. By the induction hypothesis, $|X'|$ is contractible. There is a canonical morphism $h:X \to X'$ which sends an $\cO_C$-submodule $\cF[g,L_\infty]$ of $\eta_* F^{\oplus d}$ to the $\cO_C$-submodule $\cF[g,L_\infty]/\cF[g,L_\infty]_{(i)}$ of $\eta_* F^{\oplus d'}$. Let $\epsilon : \Vertex(X) \to \Z$ and $\epsilon' : \Vertex(X') \to \Z$ denote the maps that send a locally free $\cO_C$-module $\cF$ to the integer $[p_{\cF}(1)/\deg(\infty)]$. We fix an $\cO_C$-submodule $\cF_0$ of $\eta_* F^{\oplus d}$ whose equivalence class belongs to $X$. By twisting $\cF_0$ by some power of $\cO_C(\infty)$ if necessary, we may assume that $p_{\cF_0}(i) - p_{\cF_0}(i-1) > \alpha$. We fix a splitting $\cF_0 = \cF_{0,(i)}\oplus \cF'_0$. This splitting induces an isomorphism $\varphi : \eta_* \eta^* \cF'_0 \cong \eta_* F^{\oplus d'}$. Let $h' : X' \to X$ denote the morphism that sends an $\cO_C$-submodule $\cF'$ of $\eta_* \eta^* F^{\oplus d'}$ to the $\cO_C$-submodule $\cF_{0,(i)}(\epsilon'(\cF')\infty) \oplus \varphi^{-1}(\cF')$ of $\eta_* F^{\oplus d}$.
For each $n \in \Z$, define a morphism $G_n : X \to X$ by sending an $\cO_C$-submodule $\cF$ of $\eta_* \eta^* F^{\oplus d}$ to the $\cO_C$-submodule $\cF_{0,(i)}((n+\epsilon(\cF))\infty) + \cF$ of $\eta_* F^{\oplus d}$. Then the argument in~\cite[p. 85--86]{Gra} shows that $f$ and $|h'|\circ|h| \circ f$ are homotopic for any map $f:Z \to |X|$ from a compact space $Z$ to $|X|$. Since the map $|h'|\circ|h| \circ f$ factors through the contractible space $|X'|$, $f$ is null-homotopic. Hence $|X|$ is contractible. \end{proof} \begin{proof}[Proof of Proposition~\ref{7_PROP1b}] For any simplex $\sigma$ in $\wt{X}_{\bK,\bullet}$, the stabilizer group $\Gamma_\sigma \subset \GL_d(F)$ is finite, as remarked in Section~\ref{sec:finite stab}. Hence by Lemma~\ref{7_contract}, both $H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q)$ and $H^*(Y_{\bK,\bullet}^{(\alpha'),\cD,g},\Q)$ are canonically isomorphic to the same group $H^*(P_{\cD}(F), \Map(P_{\cD}(\A)g/(g^{-1}P_{\cD}(\A^\infty)g \cap \bK),\Q))$ for any non-empty subset $\cD \subset \{1,\ldots, d-1\}$ and for $g \in \GL_d(\A^\infty)$. This shows that $H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q) \to H^*(Y_{\bK,\bullet}^{(\alpha'),\cD,g},\Q)$ is an isomorphism. One can decompose, by using \eqref{eq:decomposition}, the $\Q$-vector space $H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ as the product \begin{equation} \label{eq:decomposition2} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q) = \prod_{\overline{g} \in P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}},\Q). \end{equation} Hence the homomorphism $H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q) \to H^*(X_{\bK,\bullet}^{(\alpha'),\cD},\Q)$ is an isomorphism. \end{proof} \section{Proof of Theorem \ref{7_PROP1}} \subsection{} For a subset $\cD$ of $\{1,\ldots,d-1\}$, we define the algebraic groups $P_{\cD}$, $N_{\cD}$ and $M_{\cD}$ as follows. 
We write $\cD= \{i_1,\ldots,i_{r-1} \}$, with $i_0=0 < i_1 < \cdots < i_{r-1} <i_r=d$ and set $d_j = i_j -i_{j-1}$ for $j=1,\ldots,r$. We define $P_{\cD}$, $N_{\cD}$ and $M_{\cD}$ as the standard parabolic subgroup of $\GL_d$ of type $(d_1,\ldots,d_r)$, the unipotent radical of $P_{\cD}$, and the quotient group $P_{\cD}/N_{\cD}$ respectively. We identify the group $M_{\cD}$ with $\GL_{d_1}\times\cdots \times \GL_{d_r}$. Let us consider the smooth $\GL_d(\A^\infty)$-module $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$. For a fixed $\bK$, we have \begin{equation} \label{eq:decomposition3} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q) = \prod_{\overline{g} \in P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}},\Q), \end{equation} since $H^*(Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}},\Q) \to H^*(Y_{\bK,\bullet}^{(\alpha'),\cD,\overline{g}},\Q)$ is an isomorphism for $\alpha' \ge \alpha > (d-1)\deg(\infty)$. We note that $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q)$ is a smooth $g^{-1} P_\cD(\A^\infty) g$-module for any $g \in \GL_d(\A^\infty)$. Via \eqref{eq:decomposition3} we regard $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ as a submodule of $\prod_{\overline{g} \in P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)} \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,\overline{g}},\Q)$. Let $g \in \GL_d(\A^\infty)$. For an open compact subgroup $\bK \subset \GL_d(\wh{A})$ satisfying $g^{-1}\bK g \subset \GL_d(\wh{A})$, there exists a real number $\beta'_g > \beta_g$ such that the isomorphism $\wt{\xi}_g$ sends $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD,g'}$ to $\wt{Y}_{g \bK g^{-1},\bullet}^{(\alpha-\beta_g),\cD,g'g} \subset \wt{X}_{g \bK g^{-1},\bullet}$ for $\alpha > \beta'_g$, for any $f \in \Flag_{\cD}$ and for any $g' \in \GL_d(\A^\infty)$.
This induces a morphism $\xi_{g,g',\bK}: Y_{\bK,\bullet}^{(\alpha),\cD,g'} \to Y_{g \bK g^{-1},\bullet}^{(\alpha-\beta_g),\cD,g'g}$ of (generalized) simplicial complexes. By varying $\alpha$, we have a homomorphism $$ \xi_{g,g',\bK}^* : \varinjlim_{\alpha} H^*(Y_{g \bK g^{-1},\bullet}^{(\alpha),\cD,g'g},\Q) \to \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g'},\Q). $$ The homomorphism $\xi_{g,g',\bK}^*$ is an isomorphism since the homomorphism $\xi_{g^{-1},g'g,g \bK g^{-1}}^*$ gives its inverse. By varying $\bK$, we obtain an isomorphism $$ \xi_{g,g'}^* : \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g'g},\Q) \xto{\cong} \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g'},\Q). $$ Let us choose a complete set $T \subset \GL_d(\A^\infty)$ of representatives of $P_{\cD}(\A^\infty)\bsl \GL_d(\A^\infty)$ such that $1 \in T$. Then $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ is a submodule of $\prod_{g \in T} \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q)$. We set $$ H^{*,\cD}_\Q = \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD},\Q). $$ The isomorphism $\xi_{g,1}^* : \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q) \xto{\cong}H^{*,\cD}_\Q$ for each $g \in T$ gives an isomorphism $$ \prod_{g \in T} \varinjlim_{\bK} \varinjlim_{\alpha} H^*(Y_{\bK,\bullet}^{(\alpha),\cD,g},\Q) \cong \prod_{g \in T} H^{*,\cD}_\Q. $$ Via this isomorphism we regard $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ as a submodule of $\prod_{g \in T} H^{*,\cD}_\Q$. Let $g' \in \GL_d(\A^\infty)$ be an arbitrary element. For each $g \in T$, let us write $gg' = h_g g_1$ with $g_1 \in T$ and $h_g \in P_\cD(\A^\infty)$.
Then, with respect to the inclusion $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q) \inj \prod_{g \in T} H^{*,\cD}_\Q$, the action of $g'$ on $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ is compatible with the automorphism $\theta(g')$ of $\prod_{g \in T} H^{*,\cD}_\Q$ that sends $(x_g)_{g \in T}$ to $(\xi_{h_g,1}^*(x_{g_1}))_{g \in T}$. The automorphisms $\theta(g')$ for various $g'$ give an action of $\GL_d(\A^\infty)$ on $\prod_{g \in T} H^{*,\cD}_\Q$ from the left, and an element $x$ of $\prod_{g \in T} H^{*,\cD}_\Q$ belongs to the submodule $\varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ if and only if $x$ is invariant under some open compact subgroup $\bK \subset \GL_d(\A^\infty)$. Observe that the smooth part of $\prod_{g \in T} H^{*,\cD}_\Q$ with respect to the action of $\GL_d(\A^\infty)$ introduced above is equal to the (unnormalized) parabolic induction $\Ind_{P_{\cD}(\A^\infty)}^{\GL_d(\A^\infty)} \varinjlim_{\bK} \varinjlim_{\alpha} H^* (Y_{\bK,\bullet}^{(\alpha),\cD},\Q)$. Thus we obtain an isomorphism $$ \varinjlim_{\bK} \varinjlim_{\alpha} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q) \cong \Ind_{P_{\cD}(\A^\infty)}^{\GL_d(\A^\infty)} \varinjlim_{\bK} \varinjlim_{\alpha} H^* (Y_{\bK,\bullet}^{(\alpha),\cD},\Q). $$ It is straightforward to check that this isomorphism is independent of the choice of $T$. \begin{prop}\label{7_prop2} Let the notations be as above.
Then as a smooth $\GL_d(\A^\infty)$-module, $\varinjlim_{\bK} \varinjlim_{\alpha >0} H^*(X_{\bK,\bullet}^{(\alpha),\cD},\Q)$ is isomorphic to $$ \Ind_{P_{\cD}(\A^\infty)}^{\GL_d(\A^\infty)} \bigotimes_{j=1}^r \varinjlim_{\bK_j \subset \GL_{d_j}(\wh{A})} H^*(X_{\GL_{d_j},\bK_j,\bullet},\Q), $$ where the group $P_{\cD}(\A^\infty)$ acts on ${\displaystyle \bigotimes_{j=1}^r \varinjlim_{\bK_j\subset \GL_{d_j}(\wh{A})}} H^*(X_{\GL_{d_j},\bK_j,\bullet},\Q)$ via the quotient $P_{\cD}(\A^\infty) \to M_{\cD}(\A^\infty) = \prod_j \GL_{d_j}(\A^\infty)$, and $\Ind_{P_{\cD}(\A^\infty)}^{\GL_d(\A^\infty)}$ denotes the parabolic induction unnormalized by the modulus function. \end{prop} We give a proof in the following subsections. \subsection{} For $j=1,\ldots,r$, let $\bK_j \subset \GL_{d_j}(\A^\infty)$ denote the image of $\bK \cap P_{\cD}(\A^\infty)$ by the composition $P_{\cD}(\A^\infty) \to M_{\cD}(\A^\infty) \to \GL_{d_j}(\A^\infty)$. We define the continuous map $\wt{\pi}_{\cD,j}: |\wt{Y}_{\bK,\bullet}^{(\alpha),\cD}| \to |\wt{X}_{\GL_{d_j},\bK_j,\bullet}|$ of topological spaces in the following way. Let $\sigma$ be an $i$-simplex in $\wt{Y}_{\bK,\bullet}^{(\alpha),\cD}$. Take a chain $\cdots \supsetneqq \cF_{-1} \supsetneqq \cF_0 \supsetneqq \cF_1 \supsetneqq \cdots$ of $\cO_C$-modules representing $\sigma$. For $l\in \Z$ we set $\cF_{l,j} = \cF_{l,(i_j)}/\cF_{l,(i_{j-1})}$, which is an $\cO_C$-submodule of $\eta_* F^{\oplus d_j}$. We set $S_j = \{ l \in \Z \ |\ \cF_{l,j}\neq \cF_{l+1,j} \}$. Define the map $\psi_j : \Z \to S_j$ as $\psi_j(l) = \min \{ l'\ge l\ |\ l' \in S_j \}$. Take an order-preserving bijection $\varphi_j :S_j \xto{\cong} \Z$. For $l \in\Z$ set $\cF'_{l} =\cF_{\varphi_j^{-1}(l), j}$. Then the chain $\cdots \supsetneqq \cF'_{-1} \supsetneqq \cF'_0 \supsetneqq \cF'_1 \supsetneqq \cdots$ defines a simplex $\sigma'$ in $\wt{X}_{\GL_{d_j},\bK_j,\bullet}$.
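In concrete terms, the passage from the chain $(\cF_{l,j})_{l \in \Z}$ to the strictly decreasing chain $(\cF'_l)_l$ is a deduplication along the jump set $S_j$. The following Python fragment is only an illustrative sketch of this combinatorial step: each module is replaced by an integer invariant (so equality of modules becomes equality of integers, an assumption made purely for illustration), and the chain, which in the text is indexed by all of $\Z$, is truncated to a finite window.

```python
# Toy sketch of the deduplication step in the construction of pi_{D,j}:
# each F_{l,j} is represented by an integer invariant, and the chain is
# restricted to a finite window of indices l = 0, 1, ..., n.

def collapse(chain):
    """Return the jump set S_j, the map psi_j on the window, and the
    strictly decreasing collapsed chain, for a weakly decreasing list."""
    # S_j = { l : F_{l,j} != F_{l+1,j} }, within the window
    S = [l for l in range(len(chain) - 1) if chain[l] != chain[l + 1]]

    def psi(l):
        # psi_j(l) = min { l' >= l : l' in S_j }
        return min(lp for lp in S if lp >= l)

    # the collapsed chain F'_l, one entry per jump, order preserved
    collapsed = [chain[l] for l in S]
    return S, psi, collapsed

chain = [5, 5, 4, 4, 4, 2, 1]   # F_{0,j}, F_{1,j}, ... with repetitions
S, psi, collapsed = collapse(chain)

assert S == [1, 4, 5]                          # indices where the chain drops
assert [psi(l) for l in range(5)] == [1, 1, 4, 4, 4]
assert collapsed == [5, 4, 2]                  # strictly decreasing chain
```

Here $\psi_j$ is only evaluated inside the window; in the text the chain is two-sided infinite and $\varphi_j$ renumbers the jump set by $\Z$.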
We define a continuous map $|\sigma| \to |\sigma'|$ as the affine map sending the vertex of $\sigma$ corresponding to $\cF_l$ to the vertex of $\sigma'$ corresponding to $\cF'_{\varphi_j \circ \psi_j(l)}$. Gluing these maps, we obtain a continuous map $\wt{\pi}_{\cD,j}: |\wt{Y}_{\bK,\bullet}^{(\alpha),\cD}| \to |\wt{X}_{\GL_{d_j},\bK_j,\bullet}|$. We set $\wt{\pi}_{\cD} = (\wt{\pi}_{\cD,1},\ldots,\wt{\pi}_{\cD,r}) :|\wt{Y}_{\bK,\bullet}^{(\alpha),\cD}| \to \prod_{j=1}^{r} |\wt{X}_{\GL_{d_j},\bK_j,\bullet}|$. This continuous map descends to a continuous map $\pi_{\cD}: |Y_{\bK,\bullet}^{(\alpha),\cD}| \to \prod_{j=1}^{r} |X_{\GL_{d_j},\bK_j,\bullet}|$. \subsection{} If $g \in P_{\cD}(\A^\infty)$ and $g^{-1}\bK g \subset \GL_d(\wh{A})$, then the isomorphism $\xi_g :X_{\bK,\bullet} \xto{\cong} X_{g^{-1}\bK g,\bullet}$ sends $Y_{\bK,\bullet}^{(\alpha),\cD}$ into $Y_{g^{-1}\bK g,\bullet}^{(\alpha -\beta_g),\cD}$. If we denote by $(g_1,\ldots,g_r)$ the image of $g$ in $M_{\cD}(\A^\infty) = \prod_{j=1}^r \GL_{d_j}(\A^\infty)$, then the diagram $$ \begin{CD} |Y_{\bK,\bullet}^{(\alpha),\cD}| @>{\xi_g}>> |Y_{g^{-1}\bK g,\bullet}^{(\alpha-\beta_g),\cD}| \\ @V{\pi_{\cD}}VV @V{\pi_{\cD}}VV \\ \prod_{j=1}^r |X_{\GL_{d_j},\bK_j,\bullet}| @>{(\xi_{g_1},\ldots,\xi_{g_r})}>> \prod_{j=1}^r |X_{\GL_{d_j},g_j^{-1} \bK_j g_j,\bullet}| \end{CD} $$ is commutative. \subsection{} With the notations as above, suppose that the open compact subgroup $\bK \subset \GL_d(\A^\infty)$ has the following property. \begin{equation} \label{7_property} \text{the homomorphism } P_{\cD}(\A^\infty)\cap \bK \to \bK_1 \times \cdots \times \bK_r \text{ is surjective.} \end{equation} For a simplicial complex $X$, we set $I_X = \Map(\pi_0(X),\Q)$, where $\pi_0(X)$ is the set of the connected components of $X$. Let us consider the following commutative diagram.
\begin{equation}\label{7_CD} \begin{CD} H^*(M_{\cD}(F), \Map(\prod_{j=1}^r \pi_0(X_{\GL_{d_j},\bK_j,\bullet}),\Q)) @>>> H^*(P_{\cD}(F), I_{Y_{\bK,\bullet}^{(\alpha),\cD}}) \\ @VVV @VVV \\ H^*_{M_{\cD}(F)} (\prod_{j=1}^r |\wt{X}_{\GL_{d_j},\bK_j,\bullet}|,\Q) @>>> H^*_{P_{\cD}(F)}(|Y_{\bK,\bullet}^{(\alpha),\cD}|,\Q)\\ @AAA @AAA \\ H^*(\prod_{j=1}^r |X_{\GL_{d_j},\bK_j,\bullet}|,\Q) @>>> H^*(|Y_{\bK,\bullet}^{(\alpha),\cD}|,\Q). \end{CD} \end{equation} Here $H^*_{M_{\cD}(F)}$ and $H^*_{P_{\cD}(F)}$ denote the equivariant cohomology groups. \subsection{} \begin{prop}\label{7_prop3} All homomorphisms in the above diagram (\ref{7_CD}) are isomorphisms. \end{prop} \begin{proof} We prove that the upper horizontal arrow and the four vertical arrows are isomorphisms. First we consider the upper horizontal arrow. \begin{lem}\label{thislemma} For $q\ge 1$, the group $H^q (N_{\cD}(F), I_{Y_{\bK,\bullet}^{(\alpha),\cD}})$ is zero. \end{lem} \begin{proof}[Proof of Lemma \ref{thislemma}] For each $x \in N_{\cD}(F) \bsl \pi_0(Y_{\bK,\bullet}^{(\alpha),\cD})$, take a lift $\wt{x} \in \pi_0(Y_{\bK,\bullet}^{(\alpha),\cD})$ of $x$ and let $N_x \subset N_{\cD}(F)$ denote the stabilizer of $\wt{x}$. Then the group $H^*(N_{\cD}(F), I_{Y_{\bK,\bullet}^{(\alpha),\cD}})$ is isomorphic to the direct product $$\prod_{x \in N_{\cD}(F) \bsl \pi_0(Y_{\bK,\bullet}^{(\alpha),\cD})} H^*(N_x ,\Q).$$ We note that the group $N_{\cD}(F)$ is a union $N_{\cD}(F) = \bigcup_{i} U_i$ of finite subgroups of $p$-power order where $p$ is the characteristic of $F$. (This follows easily from \cite[p.2, 1.A.2 Lemma]{KeWe} or from \cite[p.60, 1.L.1 Theorem]{KeWe}.) Hence $N_x = \bigcup_{i} (U_i \cap N_x)$. We claim $H^{j}(N_x,\Q) =0$ for $j\ge 1$. 
Because the projective system of cochain complexes that compute $H^*(U_i \cap N_x, M)$, where $M$ is a $\Q$-vector space, satisfies the Mittag-Leffler condition, the cohomology $H^{j}(N_x,\Q)$ equals the projective limit $\varprojlim_i H^{j}(U_i \cap N_x, \Q)$ (see \cite{WeHA} p.83, Ex 3.5.2, 3.5.8). Since $U_i \cap N_x$ is a finite group, its higher cohomology with $\Q$ coefficients vanishes. This proves the claim, hence the lemma. \end{proof} We note that $\pi_0(X_{\GL_{d_j},\bK_j,\bullet})$ is canonically isomorphic to $\GL_{d_j}(\A^\infty)/\bK_j$ for $j=1,\ldots,r$, and Lemma~\ref{7_contract} implies that $I_{Y_{\bK,\bullet}^{(\alpha),\cD}}$ is canonically isomorphic to $\Map(P_{\cD}(\A^\infty)/ (P_{\cD}(\A^\infty) \cap \bK),\Q)$. Since $N_{\cD}(F)$ is dense in $N_{\cD}(\A^\infty)$, the group $H^0(N_{\cD}(F), I_{Y_{\bK,\bullet}^{(\alpha),\cD}})$ is canonically isomorphic to the group $\Map(M_{\cD}(\A^\infty)/\prod_{j=1}^r \bK_j,\Q)$. Hence the upper horizontal arrow of the diagram (\ref{7_CD}) is an isomorphism. Next we consider the vertical arrows. Each connected component of $\wt{X}_{\GL_{d_j},g_j^{-1} \bK_j g_j,\bullet}$ is contractible since it is isomorphic to the Bruhat-Tits building for $\GL_{d_j}$. Recall that the simplicial complex $X_{\GL_{d_j},g_j^{-1} \bK_j g_j,\bullet}$ is the quotient of $\wt{X}_{\GL_{d_j},g_j^{-1} \bK_j g_j,\bullet}$ by the action of $\GL_{d_j}(F)$. For any simplex $\sigma$ in $\wt{X}_{\GL_{d_j},g_j^{-1} \bK_j g_j,\bullet}$, the stabilizer group $\Gamma_\sigma \subset \GL_{d_j}(F)$ of $\sigma$ is finite, as remarked in Section~\ref{sec:finite stab}. Hence the left two vertical arrows in the diagram (\ref{7_CD}) are isomorphisms. Similarly, bijectivity of the two right vertical arrows in the diagram (\ref{7_CD}) follows from Lemma~\ref{7_contract}. This completes the proof of Proposition~\ref{7_prop3}. \end{proof} \begin{proof}[Proof of Proposition~\ref{7_prop2}] Let us consider the lower horizontal arrow in the diagram (\ref{7_CD}).
By Proposition~\ref{7_prop3} it is an isomorphism. We note that the compact open subgroups $\bK \subset \GL_d(\wh{A})$ with property (\ref{7_property}) form a cofinal subsystem of the inductive system of all open compact subgroups of $\GL_d(\A^\infty)$. Therefore, passing to the inductive limits with respect to $\alpha$ and $\bK$ with property (\ref{7_property}), we have $\varinjlim_\bK \varinjlim_\alpha H^*(Y_{\bK,\bullet}^{(\alpha),\cD},\Q) \cong \bigotimes_{j=1}^r \varinjlim_{\bK_j} H^*(X_{\GL_{d_j},\bK_j,\bullet},\Q)$ as desired. \end{proof} \subsection{} Let $\bK, \bK' \subset \GL_d(\A^\infty)$ be two compact open subgroups with $\bK' \subset \bK$. The pull-back morphism from the cochain complex of $X_{\bK,\bullet}$ to that of $X_{\bK',\bullet}$ preserves the cochains with finite supports. Thus we have pull-back homomorphisms $H^*_c(X_{\bK,\bullet},\Q) \to H^*_c(X_{\bK',\bullet},\Q)$ which are compatible with the usual pull-back homomorphisms $H^*(X_{\bK,\bullet},\Q) \to H^*(X_{\bK',\bullet},\Q)$. For an abelian group $M$, we let $H^*(X_{\lim,\bullet},M)=H^*(X_{\GL_d,\lim,\bullet},M)$ and $H_c^*(X_{\lim,\bullet},M)=H_c^*(X_{\GL_d,\lim,\bullet},M)$ denote the inductive limits $\varinjlim_{\bK}H^*(X_{\bK,\bullet},M)$ and $\varinjlim_{\bK}H_c^*(X_{\bK,\bullet},M)$, respectively. If $M$ is a $\Q$-vector space, then for each compact open subgroup $\bK \subset \GL_d(\A^\infty)$, the homomorphism $H^*(X_{\bK,\bullet},M) \to H^*(X_{\lim,\bullet},M)$ is injective and its image is equal to the $\bK$-invariant part $H^*(X_{\lim,\bullet},M)^{\bK}$ of $H^*(X_{\lim,\bullet},M)$. A similar statement holds for $H_c^*$. It follows from Proposition~\ref{prop:admissible} that the inductive limits $H^{d-1}(X_{\lim,\bullet},\Q)$ and $H_c^{d-1}(X_{\lim,\bullet},\Q)$ are admissible $\GL_d(\A^\infty)$-modules, and are isomorphic to the contragredients of $H_{d-1}(X_{\lim,\bullet},\Q)$ and $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\Q)$, respectively.
\subsection{Proof of Theorem~\ref{7_PROP1}} Since $\St_{d,\C}$ is self-contragredient, it follows from the compatibility of the normalized parabolic inductions with taking contragredients that it suffices to prove that any irreducible subquotient of $H^{d-1}_c(X_{\lim,\bullet},\C)$ satisfies the properties in the statement of Theorem~\ref{7_PROP1}. Let $\pi$ be an irreducible subquotient of $H^{d-1}_c(X_{\lim,\bullet},\C)$. Then Proposition~\ref{7_prop2} combined with the spectral sequence (\ref{7_limspecseq}) shows that there exists a subset $\cD \subset \{1,\ldots,d-1\}$ such that $\pi^\infty$ is isomorphic to a subquotient of $\Ind_{P_{\cD}(\A^\infty)}^{\GL_d(\A^\infty)} \bigotimes_{j=1}^r \varinjlim_{\bK_j} H^{d_j -1}(X_{\GL_{d_j},\bK_j,\bullet},\C)$. Here $r=\sharp \cD +1$, and $d_1,\ldots,d_{r} \ge 1$ are the integers satisfying $\cD=\{d_1,d_1+d_2,\ldots, d_1+\cdots+d_{r-1} \}$ and $d_1 +\cdots + d_{r} =d$. By Proposition~\ref{prop:66_3}, $\pi^\infty$ is isomorphic to a subquotient of the non-$\infty$-component of the induced representation from $P_{\cD}(\A)$ to $\GL_d(\A)$ of an irreducible cuspidal automorphic representation $\pi_1 \otimes \cdots \otimes \pi_r$ of $M_{\cD}(\A)$ whose component at $\infty$ is isomorphic to the tensor product of the Steinberg representations. It remains to prove the claim of the multiplicity. The Ramanujan-Petersson conjecture proved by Lafforgue \cite[p.6, Th\'{e}or\`{e}me]{Laf} shows that, for each place $v$ of $F$, the representation $\pi_{i,v}$ is tempered. Hence for almost all places $v$ of $F$, the representation $\pi_v$ of $\GL_d(F_v)$ is unramified and its associated Satake parameters $\alpha_{v,1},\ldots,\alpha_{v,d}$ have the following property: for each $i$ with $1 \le i \le r$, exactly $d_i$ parameters of $\alpha_{v,1},\ldots,\alpha_{v,d}$ have the complex absolute value $q_v^{a_i/2}$, where $q_v$ denotes the cardinality of the residue field at $v$ and $a_i = \sum_{i<j\le r} d_j - \sum_{1 \le j <i} d_j$.
This shows that the subset $\cD$ is uniquely determined by $\pi$. It follows from the multiplicity one theorem and the strong multiplicity one theorem that the cuspidal automorphic representation $\pi_1 \otimes \cdots \otimes \pi_r$ of $M_{\cD}(\A)$ is also uniquely determined by $\pi$. Hence it suffices to show the following lemma. \begin{lem} The representation $\Ind_{P_{\cD}(F_v)}^{\GL_d(F_v)} \pi_{1,v} \otimes \cdots \otimes \pi_{r,v}$ of $\GL_d(F_v)$ is multiplicity free for every place $v$ of $F$. \end{lem} \begin{proof} For $1 \le i \le r$, let $\Delta_i$ denote the multiset of segments corresponding to the representation $\pi_{i,v}\otimes |\det(\ )|_v^{a_i/2}$ in the sense of \cite{Zelevinsky}. We denote by $\Delta_i^t$ the Zelevinsky dual of $\Delta_i$. Let $i_1, i_2$ be integers with $1 \le i_1 < i_2 \le r$ and suppose that there exist a segment in $\Delta_{i_1}^t$ and a segment in $\Delta_{i_2}^t$ which are linked. Since $\pi_{i_1,v}$ and $\pi_{i_2,v}$ are tempered, it follows that $i_2 = i_1 +1$ and that there exists a character $\chi$ of $F_v^\times$ such that both $\pi_{i_1,v} \otimes \chi$ and $\pi_{i_2,v} \otimes \chi$ are the Steinberg representations. In this case the multiset $\Delta_{i_j}^t$ consists of a single segment for $j=1,2$ and the unique segment in $\Delta_{i_1}^t$ and the unique segment in $\Delta_{i_2}^t$ are juxtaposed. Thus the claim is obtained by applying the formula in \cite[9.13, Proposition, p.201]{Zelevinsky}. \end{proof} This finishes the proof of Theorem~\ref{7_PROP1}. \qed \chapter{The Bruhat-Tits building and apartments} \label{sec:BT} Let $d\ge 1$ be a positive integer. In this chapter, we recall the definition of the Bruhat-Tits building of $\mathrm{PGL}_d$ over a nonarchimedean local field using lattices and subsimplicial complexes called apartments. (For the general theory of Bruhat-Tits buildings and apartments, the reader is referred to \cite{BT} and the book \cite{Ab-Br}.)
Then we define the fundamental class of an apartment in its $(d-1)$-st Borel-Moore homology group. Later, we define modular symbols to be the images of the fundamental classes of apartments associated with $F$-bases (where $F$ is a global field) in the Borel-Moore homology of quotients of the Bruhat-Tits building. \section{The Bruhat-Tits building of $\mathrm{PGL}_d$} In the following paragraphs, we recall the definition of the Bruhat-Tits building of $\PGL_d$ over a nonarchimedean local field. It is the simplicial complex associated with a strict simplicial complex. \subsection{Notation} Let $K$ be a nonarchimedean local field. We let $\cO \subset K$ denote the ring of integers. We fix a uniformizer $\varpi \in \cO$. Let $d \ge 1$ be an integer. Let $V=K^{\oplus d}$. We regard it as the set of row vectors so that $\GL_d(K)$ acts from the right by multiplication. \subsection{The Bruhat-Tits building (\cite{BT}) using lattices} \label{Bruhat-Tits} We do not recall here the most general definition of the Bruhat-Tits buildings. Let us give a definition using lattices (see also \cite[\S 4]{Gra}) first, and then give a more explicit description for later use. \subsubsection{} An $\cO$-lattice in $V$ is a free $\cO$-submodule of $V$ of rank $d$. We denote by $\Lat_{\cO}(V)$ the set of $\cO$-lattices in $V$. We regard the set $\Lat_{\cO}(V)$ as a partially ordered set whose elements are ordered by the inclusions of $\cO$-lattices. \subsubsection{} Two $\cO$-lattices $L$, $L'$ of $V$ are called homothetic if $L = \varpi^j L'$ for some $j \in \Z$. Let $\Latbar_{\cO}(V)$ denote the set of homothety classes of $\cO$-lattices in $V$. We denote by $\cl$ the canonical surjection $\cl: \Lat_{\cO}(V) \to \Latbar_{\cO}(V)$. \begin{definition} We say that a subset $S$ of $\Latbar_{\cO}(V)$ is totally ordered if $\cl^{-1}(S)$ is a totally ordered subset of $\Lat_{\cO}(V)$.
\end{definition} \subsubsection{} The pair $(\Latbar_{\cO}(V), \Delta)$ of the set $\Latbar_{\cO}(V)$ and the set $\Delta$ of totally ordered finite nonempty subsets of $\Latbar_{\cO}(V)$ forms a strict simplicial complex. \begin{definition} \label{def:BT} The Bruhat-Tits building of $\PGL_d$ over $K$ is the simplicial complex $\cBT_\bullet$ associated to this strict simplicial complex. \end{definition} \subsection{Explicit description of the building} In the next paragraphs we explicitly describe the simplicial complex $\cBT_\bullet$. \subsubsection{} For an integer $i \ge 0$, let $\wt{\cBT}_i$ be the set of sequences $(L_j)_{j \in \Z}$ of $\cO$-lattices in $V$ indexed by $j \in \Z$ such that $L_j \supsetneqq L_{j+1}$ and $\varpi L_j=L_{j+i+1}$ hold for all $j\in \Z$. In particular, if $(L_j)_{j \in \Z}$ is an element in $\wt{\cBT}_0$, then $L_j = \varpi^j L_0$ for all $j \in \Z$. We identify the set $\wt{\cBT}_0$ with the set $\Lat_{\cO}(V)$ by associating the $\cO$-lattice $L_0$ to an element $(L_j)_{j \in \Z}$ in $\wt{\cBT}_0$. We say that two elements $(L_j)_{j \in \Z}$ and $(L'_j)_{j \in \Z}$ in $\wt{\cBT}_i$ are equivalent if there exists an integer $\ell$ satisfying $L'_j=L_{j+\ell}$ for all $j \in \Z$. We will see below that the set of the equivalence classes in $\wt{\cBT}_i$ is identified with $\cBT_i$. For $i=0$, the identification $\wt{\cBT_0} \cong \Lat_{\cO}(V)$ gives an identification $\cBT_0 \cong \Latbar_{\cO}(V)$. \subsubsection{} Let $\sigma \in \cBT_i$ and take a representative $(L_j)_{j \in \Z}$ of $\sigma$. For $j \in \Z$, let us consider the class $\cl(L_j)$ in $\Latbar_{\cO}(V)$. Since $\varpi L_j = L_{j+i+1}$, we have $\cl(L_j) = \cl(L_{j+i+1})$. Since $L_j \supsetneqq L_k \supsetneqq \varpi L_j$ for $0 \le j < k \le i$, the elements $\cl(L_0), \ldots, \cl(L_i) \in \Latbar_{\cO}(V)$ are distinct. 
Hence the subset $V(\sigma) = \{ \cl(L_j)\ |\ j \in \Z \} \subset \cBT_0$ has cardinality $i+1$ and does not depend on the choice of $(L_j)_{j \in \Z}$. It is easy to check that the map from $\cBT_i$ to the set of finite subsets of $\Latbar_{\cO}(V)$ which sends $\sigma \in \cBT_i$ to $V(\sigma)$ is injective and that the set $\{V(\sigma)\ |\ \sigma \in \cBT_i\}$ is equal to the set of totally ordered subsets of $\Latbar_{\cO}(V)$ with cardinality $i+1$. In particular, for any $j \in \{0,\ldots,i\}$ and for any subset $V' \subset V(\sigma)$ of cardinality $j+1$, there exists a unique element in $\cBT_j$, which we denote by $\sigma \times_{V(\sigma)} V'$, such that $V(\sigma \times_{V(\sigma)} V')$ is equal to $V'$. Thus the collection $\cBT_\bullet = \coprod_{i \ge 0} \cBT_i$ together with the data $V(\sigma)$ and $\sigma \times_{V(\sigma)} V'$ forms a simplicial complex which is canonically isomorphic to the simplicial complex associated to the strict simplicial complex $(\Latbar_{\cO}(V), \Delta)$ which we introduced in the first paragraph of Section~\ref{Bruhat-Tits}. We call the simplicial complex $\cBT_\bullet$ the Bruhat-Tits building of $\PGL_d$ over $K$. \subsubsection{} \label{sec:dimension} The simplicial complex $\cBT_\bullet$ is of dimension at most $d-1$, by which we mean that $\cBT_i$ is an empty set for $i > d-1$. This follows from the fact that $\wt{\cBT}_i$ is an empty set for $i > d-1$, which we can check as follows. Let $i > d-1$ and assume that there exists an element $(L_j)_{j \in \Z}$ in $\wt{\cBT}_i$. Then for $j=0,\ldots,i+1$, the quotient $L_j/L_{i+1}$ is a subspace of the $d$-dimensional $(\cO/\varpi \cO)$-vector space $L_0/L_{i+1}=L_0/\varpi L_0$. These subspaces must satisfy $L_0/L_{i+1} \supsetneqq L_1/L_{i+1} \supsetneqq \cdots \supsetneqq L_{i+1}/L_{i+1}$. This is impossible since $i+1 > d$. \section{Apartments} \label{sec:apartments} Here we recall the definition of an apartment. It is a simplicial subcomplex of the Bruhat-Tits building.
We are interested only in the apartments of $\PGL_d$ over a nonarchimedean local field and not of other algebraic groups. To describe apartments, we do not use the general theory via root systems but give a simpler, ad hoc treatment, particular to $\PGL_d$. The reader is referred to \cite[p. 523, 10.1.7 Example]{Ab-Br} for the general theory. For example, when $d=2$, an apartment is an infinite sequence of $1$-simplices. When $d=3$, it is an $\mathbb{R}^{2}$ tiled by triangles ($2$-simplices). The geometric realization is homeomorphic to $\mathbb{R}^{d-1}$. \subsection{} \label{sec:def apartment} Set $A_0 = \Z^{\oplus d}/\Z(1,\ldots,1)$. For two elements $x=(x_j), y=(y_j) \in \Z^{\oplus d}$, we write $x \le y$ when $x_j \le y_j$ for all $1 \le j \le d$. We regard $\Z^{\oplus d}$ as a partially ordered set with respect to $\le$. Let $\pi: \Z^{\oplus d} \to A_0$ denote the quotient map. We say that a subset $S$ of $A_0$ is totally ordered if $\pi^{-1}(S)$ is a totally ordered subset of $\Z^{\oplus d}$. The pair $(A_0, D)$ of the set $A_0$ and the set $D$ of totally ordered finite nonempty subsets of $A_0$ forms a strict simplicial complex. We denote by $A_\bullet =(A_i)_{i \ge 0}$ the simplicial complex associated to the strict simplicial complex $(A_0, D)$. We note that $A_i$ is an empty set for $i \ge d$, since by definition there is no totally ordered subset of $A_0$ with cardinality larger than $d$. \subsection{} \label{sec:521} Let $v_1, \dots, v_d$ be a basis of $V =K^{\oplus d}$. We define a map $\iota_{v_1,\ldots,v_d} : A_\bullet \to \cBT_\bullet$ of simplicial complexes as follows. Let $\wt{\iota}_{v_1,\ldots,v_d}: \Z^{\oplus d} \to \wt{\cBT}_0$ denote the map that sends the element $(n_1,\ldots,n_d) \in \Z^{\oplus d}$ to the $\cO$-lattice $\cO\varpi^{n_1} v_1 \oplus \cO \varpi^{n_2} v_2 \oplus \dots \oplus \cO \varpi^{n_d} v_d$.
The map $\wt{\iota}_{v_1,\ldots,v_d}$ is an order-embedding of partially ordered sets, and induces a map $\iota_{v_1,\ldots,v_d,0} : A_0 \to \cBT_0$ that makes the diagram of sets $$ \begin{CD} \Z^{\oplus d} @>{\wt{\iota}_{v_1,\ldots,v_d}}>> \wt{\cBT}_0 \\ @VVV @VVV \\ A_0 @>{\iota_{v_1,\ldots,v_d,0}}>> \cBT_0 \end{CD} $$ commutative and cartesian, where the vertical arrows are the quotient maps. This implies that the map $\iota_{v_1,\ldots,v_d,0} : A_0 \to \cBT_0$ sends a totally ordered subset of $A_0$ to a totally ordered subset of $\cBT_0$. Hence the map $\iota_{v_1,\ldots,v_d,0} : A_0 \to \cBT_0$ induces a map $\iota_{v_1,\ldots,v_d} : A_\bullet \to \cBT_\bullet$ of simplicial complexes. It is easy to check that the map $\iota_{v_1,\ldots,v_d,i} : A_i \to \cBT_i$ is injective for every $i \ge 0$. We define a simplicial subcomplex $A_{v_1, \dots, v_d , \bullet}$ of $\cBT_\bullet$ to be the image of the map $\iota_{v_1,\ldots,v_d}$ so that $A_{v_1, \dots, v_d , i}$ is the image of the map $\iota_{v_1,\ldots,v_d,i}$ for each $i \ge 0$. We call the subcomplex $A_{v_1, \dots, v_d , \bullet}$ of $\cBT_\bullet$ the apartment in $\cBT_\bullet$ corresponding to the basis $v_1,\ldots,v_d$. Since the map $\iota_{v_1,\ldots,v_d,i}$ is injective for every $i \ge 0$, the map $\iota_{v_1,\ldots,v_d}$ induces an isomorphism $A_\bullet \xto{\cong} A_{v_1, \dots, v_d , \bullet}$ of simplicial complexes. \section{The fundamental class} \label{sec:fundamental class} We introduce a special element $\beta$ in the group $H^\BM_{d-1}(A_\bullet,\Z)$, which is an analogue of the fundamental class. \subsection{ } Let $\sigma \in A_i$ and let $V(\sigma) \subset A_0$ denote the set of vertices of $\sigma$. By definition, $V(\sigma)$ consists of exactly $i+1$ elements and the inverse image $\wt{V}(\sigma) = \pi^{-1}(V(\sigma))$ is a totally ordered subset of $\Z^{\oplus d}$. \begin{lem} \label{lem:totally} As a totally ordered set, $\wt{V}(\sigma)$ is isomorphic to $\Z$.
\end{lem} \begin{proof} Let $\Sigma: \Z^{\oplus d} \to \Z$ denote the map that sends $(n_1,\ldots,n_d)$ to $n_1 + \cdots + n_d$. Then the composite of the inclusion $\wt{V}(\sigma) \inj \Z^{\oplus d}$ with $\Sigma$ is order-preserving and injective since $\wt{V}(\sigma)$ is totally ordered and, for any $x,y \in \Z^{\oplus d}$, the relations $x \le y$ and $\Sigma(x) < \Sigma(y)$ imply $x < y$. Since $\wt{V}(\sigma)$ is closed under the addition of $\pm(1,\ldots,1)$, it follows that $\wt{V}(\sigma)$ is isomorphic, as a totally ordered set, to a subset of $\Z$ which is unbounded both from below and from above. This shows that $\wt{V}(\sigma)$ is isomorphic to $\Z$. \end{proof} Let $x \in V(\sigma)$ and let us choose a lift $\wt{x} \in \wt{V}(\sigma)$ of $x$. Lemma \ref{lem:totally} implies that there exists a maximum element $\wt{x}'$ of $\wt{V}(\sigma)$ satisfying $\wt{x}' < \wt{x}$. We set $e(\sigma,x) = \wt{x} - \wt{x}' \in \Z^{\oplus d}$. Then $e(\sigma,x)$ does not depend on the choice of the lift $\wt{x}$ and we have $e(\sigma,x) > 0$ and $\sum_{x \in V(\sigma)} e(\sigma,x) = (1,\ldots,1)$. \subsection{ } Now let us assume that $\sigma \in A_{d-1}$. Then the set $\{e(\sigma,x)\ |\ x \in V(\sigma)\}$ is equal to the set $\{\mathbf{e}_1,\ldots,\mathbf{e}_d\}$, where $\mathbf{e}_1, \ldots, \mathbf{e}_d$ denote the standard $\Z$-basis of $\Z^{\oplus d}$. This implies that, for any $i \in \{1,\ldots,d\}$, there exists a unique element $x_i \in V(\sigma)$ satisfying $e(\sigma,x_i)=\mathbf{e}_i$. The map $\wt{[\sigma]} :\{1,\ldots,d\} \to V(\sigma)$ that sends $i$ to $x_i$ is bijective. Hence the map $\wt{[\sigma]}$ defines an element of $T(\sigma)$ (see Section~\ref{sec:orientation} for the definition). We denote by $[\sigma]$ the class of $\wt{[\sigma]}$ in $O(\sigma)$.
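The combinatorics of $e(\sigma,x)$ and of the labeling $\wt{[\sigma]}$ can be made concrete for $d=3$ with a small Python sketch. This is a toy computation only, not part of the argument: the maximal simplex is encoded by one period of a hypothetical lifted vertex chain (chosen here for illustration) satisfying $\wt{x}_{k+d} = \wt{x}_k + (1,\ldots,1)$.

```python
d = 3

# One period of a lifted vertex chain in Z^d encoding a maximal simplex
# sigma of the apartment (a hypothetical example chain); the full chain
# is generated by the rule xt_{k+d} = xt_k + (1, ..., 1).
period = [(0, 0, 0), (1, 0, 0), (1, 0, 1)]

def e_values(period):
    """Compute e(sigma, x) for each vertex x: the difference between a
    lift of x and the maximal element of the lifted chain below it."""
    out = []
    for k in range(len(period)):
        # the element below period[0] is period[-1] shifted by -(1,...,1)
        below = period[k - 1] if k > 0 else tuple(c - 1 for c in period[-1])
        out.append(tuple(a - b for a, b in zip(period[k], below)))
    return out

es = e_values(period)

# Each e(sigma, x) is a standard basis vector, and they sum to (1,...,1).
basis = [tuple(1 if j == i else 0 for j in range(d)) for i in range(d)]
assert sorted(es) == sorted(basis)
assert tuple(map(sum, zip(*es))) == (1,) * d

# The labeling [sigma]: i |-> x_i with e(sigma, x_i) = e_i, recorded as
# the position within the period of the vertex carrying e_i.
labeling = [es.index(b) for b in basis]
assert labeling == [1, 0, 2]
```

Changing the example chain permutes the labeling, which is exactly the orientation datum that enters the definition of $[\sigma]$.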
We let $\wh{\beta}$ denote the element $\wh{\beta} = (\beta_{\nu})_{\nu \in A_{d-1}'}$ in $\prod_{\nu \in A_{d-1}'} \Z$ where $\beta_\nu =1$ if $\nu = [\sigma]$ for some $\sigma \in A_{d-1}$ and $\beta_\nu=0$ otherwise. We denote by $\beta$ the class of $\wh{\beta}$ in $(\prod_{\nu \in A_{d-1}'} \Z)_\pmone$. \subsection{ } Recall that we defined in Section~\ref{sec:def homology} a chain complex which computes the Borel-Moore homology of $A_\bullet$. \begin{prop} The element $\beta \in (\prod_{\nu \in A_{d-1}'} \Z)_\pmone$ is a $(d-1)$-cycle in the chain complex which computes the Borel-Moore homology of $A_\bullet$. \end{prop} \begin{proof} The assertion is clear for $d=1$ since the $(d-2)$-nd component of the complex is zero. Suppose that $d \ge 2$. Let $\tau$ be an element in $A_{d-2}$. Since $\sum_{x \in V(\tau)} e(\tau,x) = (1,\ldots,1)$, there exists a unique vertex $y \in V(\tau)$ such that $e(\tau,x)$ belongs to $\{\mathbf{e}_1,\ldots,\mathbf{e}_d\}$ for $x \neq y$ and $e(\tau,y)$ is equal to the sum of two distinct elements of $\{\mathbf{e}_1,\ldots,\mathbf{e}_d\}$. Let us write $e(\tau,y) = \mathbf{e}_{j} + \mathbf{e}_{j'}$. Let us choose a lift $\wt{y} \in \wt{V}(\tau)$ of $y$ and set $\wt{y}' = \wt{y} - e(\tau,y)$. Then $\wt{y}' \in \wt{V}(\tau)$ and there are exactly two elements in $\Z^{\oplus d}$ which are larger than $\wt{y}'$ and smaller than $\wt{y}$, namely $\wt{y}-\mathbf{e}_j$ and $\wt{y}-\mathbf{e}_{j'}$. We set $y_1 = \pi(\wt{y}-\mathbf{e}_j)$ and $y_2 = \pi(\wt{y}-\mathbf{e}_{j'})$. For $i=1,2$, let $\sigma_i \in A_{d-1}$ denote the unique element satisfying $V(\sigma_i) = V(\tau) \cup \{y_i\}$. It is easily checked that the set of the elements in $A_{d-1}$ which have $\tau$ as a face is equal to $\{\sigma_1,\sigma_2\}$. Let $\iota: V(\sigma_1) \cong V(\sigma_2)$ denote the bijection such that $\iota(x)=x$ for any $x \in V(\tau)$ and $\iota(y_1)=y_2$.
Then the composite $\iota \circ \wt{[\sigma_1]}$ is equal to the composite $\wt{[\sigma_2]} \circ (jj')$, where $(jj')$ denotes the transposition of $j$ and $j'$. Since the signature of $(jj')$ is equal to $-1$, it follows that the component in $(\prod_{\nu \in O(\tau)} \Z)_\pmone$ of the image of $\beta$ under the boundary map $(\prod_{\nu \in A_{d-1}'} \Z)_\pmone \to (\prod_{\nu' \in A_{d-2}'} \Z)_\pmone$ is equal to zero. This proves the claim. \end{proof} \begin{definition} We refer to the class in $H^{\BM}_{d-1}(A_\bullet, \Z)$ defined by the $(d-1)$-cycle $\beta$ as the fundamental class of the apartment. \end{definition} \chapter{Double cosets for automorphic forms} We introduce here the main arithmetic object of study $X_{\bK,\bullet}$ for a compact open subgroup $\bK \subset \GL_d(\A^\infty)$. It is a (generalized) simplicial complex. This is an analogue of the $\C$-valued points of a Shimura variety as a double coset. In Section~\ref{subsec:X_K}, we define a (generalized) simplicial complex in the form of a double coset. We show (Corollary~\ref{prop:homology autom}) that (the limit of) the Borel-Moore homology is isomorphic to the space of our automorphic forms and the homology is isomorphic to the subspace of cusp forms. This gives a precise relation between the Borel-Moore homology/homology of the geometry and the space of automorphic forms. In particular, we see how our modular symbols are related to automorphic forms. Later, we study this geometry to prove Theorem~\ref{7_PROP1}. \section{Simplicial complexes for automorphic forms} \label{subsec:X_K} We give the definition of the double coset $X_{\bK, \bullet}$ associated with a compact open subgroup $\bK$.
\subsection{} For an open compact subgroup $\bK \subset \GL_d(\A^\infty)$, we let $\wt{X}_{\GL_d,\bK,\bullet}$ denote the disjoint union $\wt{X}_{\GL_d,\bK,\bullet}=(\GL_d(\A^\infty)/\bK) \times \cBT_{\bullet}$ of copies of the Bruhat-Tits building $\cBT_{\bullet}$ indexed by $\GL_d(\A^\infty)/\bK$. We often omit the subscript $\GL_d$ on $\wt{X}_{\GL_d,\bK,\bullet}$ when there is no fear of confusion. The group $\GL_d(\A)=\GL_d(\A^\infty) \times \GL_d(F_\infty)$ acts on the simplicial complex $\wt{X}_{\bK,\bullet}$ from the left. We let $X_{\bK, \bullet}=\GL_d(F) \bsl \wt{X}_{\bK,\bullet} =\GL_d(F) \bsl (\GL_d(\A^\infty) \times \cBT_{\bullet})/\bK$ denote the quotient of $\wt{X}_{\bK,\bullet}$ by the subgroup $\GL_d(F) \subset \GL_d(\A)$. For $0\le i \le d-1$, we let $X_{\bK,i}= X_{\GL_d,\bK,i}$ denote the quotient $X_{\bK,i} = \GL_d(F) \bsl \wt{X}_{\GL_d,\bK,i}.$ \subsection{} The non-adelic description is as follows. We set $J_\bK =\GL_d(F) \bsl \GL_d(\A^\infty)/\bK$. For each $j \in J_\bK$, we choose an element $g_j \in \GL_d(\A^\infty)$ in the double coset $j$ and set $\Gamma_j = \GL_d(F) \cap g_j \bK g_j^{-1}$. Then the set $X_{\bK,i}$ is isomorphic to the disjoint union $\coprod_j \Gamma_j \bsl \cBT_i$. For each $j$, the group $\Gamma_j \subset \GL_d(F)$ is an arithmetic subgroup as defined in Section~\ref{sec:def arithmetic}. It follows that the tuple $X_{\bK,\bullet}=(X_{\bK,i})_{0\le i \le d-1}$ forms a simplicial complex which is isomorphic to the disjoint union $\coprod_{j\in J_\bK} \Gamma_j \bsl \cBT_\bullet$. \subsection{} Since the simplicial complex $\wt{X}_{\GL_d,\bK,\bullet}$ is locally finite, it follows that the simplicial complex $X_{\bK,\bullet}$ is locally finite.
Hence, as in Section~\ref{sec:def homology}, for an abelian group $M$, we may consider the cohomology groups with compact support $H_c^{*}(X_{\bK,\bullet},M)$ and the Borel-Moore homology groups $H_*^\mathrm{BM}(X_{\bK,\bullet},M)$ of the simplicial complex $X_{\bK,\bullet}$. \subsection{} Since the simplicial complex $X_{\bK,\bullet}$ has no $i$-simplex for $i \ge d$ as was remarked in Section~\ref{sec:dimension}, it follows that the $(d-1)$-st homology and Borel-Moore homology groups are isomorphic to the corresponding groups of $(d-1)$-cycles. This implies that the map $$ H_{d-1}(X_{\bK,\bullet},M) \to H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},M) $$ is injective for any abelian group $M$. We regard $H_{d-1}(X_{\bK,\bullet},M)$ as a subgroup of $H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},M)$ via this map. \section{Pull-back maps for homology groups} We study the functoriality of the homology groups with respect to a pair of nested open compact subgroups; this allows us to define the inductive limit of the Borel-Moore homology/homology groups. \subsection{} Let $\bK,\bK' \subset \GL_d(\A^\infty)$ be open compact subgroups with $\bK' \subset \bK$. We denote by $f_{\bK',\bK}$ the natural projection map $X_{\bK',i} \to X_{\bK,i}$. Since $\bK'$ is a subgroup of $\bK$ of finite index, it follows that for any $i$ with $0 \le i \le d-1$ and for any $i$-simplex $\sigma \in X_{\bK,i}$, the inverse image of $\sigma$ under the map $f_{\bK',\bK}$ is a finite set. Let $i$ be an integer with $0 \le i \le d-1$ and let $\sigma' \in X_{\bK',i}$. Let $\sigma$ denote the image of $\sigma'$ under the map $f_{\bK',\bK}$. Let us choose an $i$-simplex $\wt{\sigma}'$ of $\wt{X}_{\bK',\bullet}$ which is sent to $\sigma'$ under the projection map $\wt{X}_{\bK',\bullet} \to X_{\bK',\bullet}$. Let $\wt{\sigma}$ denote the image of $\wt{\sigma}'$ under the map $\wt{X}_{\bK',i} \to \wt{X}_{\bK,i}$.
We let $$ \Gamma_{\wt{\sigma}'} = \{ \gamma \in \GL_d(F)\ |\ \gamma \wt{\sigma}' =\wt{\sigma}' \} $$ and $$ \Gamma_{\wt{\sigma}} = \{ \gamma \in \GL_d(F)\ |\ \gamma \wt{\sigma} =\wt{\sigma} \} $$ denote the stabilizer group of $\wt{\sigma}'$ and $\wt{\sigma}$, respectively. \begin{lem} Let the notation be as above. \begin{enumerate} \item The group $\Gamma_{\wt{\sigma}}$ is a finite group and the group $\Gamma_{\wt{\sigma}'}$ is a subgroup of $\Gamma_{\wt{\sigma}}$. \item The isomorphism class of the group $\Gamma_{\wt{\sigma}'}$ (\resp $\Gamma_{\wt{\sigma}}$) depends only on $\sigma'$ (\resp $\sigma$) and does not depend on the choice of $\wt{\sigma}'$ (\resp $\wt{\sigma}$). \end{enumerate} \end{lem} \begin{proof} For the case of a 0-dimensional simplex of (1), see \cite[Proof of Theorem 0.8]{Gra}. In the stabilizer group for a higher dimensional simplex, the subgroup of elements that fix each vertex is of finite index. This proves (1). The claim (2) can be checked easily. \end{proof} \begin{definition} The lemma above shows in particular that the index $[\Gamma_{\wt{\sigma}} : \Gamma_{\wt{\sigma}'}]$ is finite and depends only on $\sigma'$ and $f_{\bK',\bK}$. We denote this index by $e_{\bK',\bK}(\sigma')$ and call it the ramification index of $f_{\bK',\bK}$ at $\sigma'$. \end{definition} \subsection{} Let $M$ be an abelian group. Let $i$ be an integer with $0 \le i \le d$. We set $X'_{\bK,i}= \coprod_{\sigma \in X_{\bK,i}} O(\sigma)$. The map $f_{\bK',\bK} : X_{\bK',\bullet} \to X_{\bK,\bullet}$ induces a map $X'_{\bK',i} \to X'_{\bK,i}$ which we denote also by $f_{\bK',\bK}$. \subsubsection{} Let $m = (m_{\nu})_{\nu \in X'_{\bK,i}}$ be an element of the $\{\pm 1\}$-module $\prod_{\nu \in X'_{\bK,i}} M$.
We define the element $f^*_{\bK',\bK}(m)$ in $\prod_{\nu \in X'_{\bK',i}} M$ to be $$ f^*_{\bK',\bK}(m) = (m'_{\nu'})_{\nu' \in X'_{\bK',i}} $$ where for $\nu' \in O(\sigma') \subset X'_{\bK',i}$, the element $m'_{\nu'} \in M$ is given by $m'_{\nu'} = e_{\bK',\bK}(\sigma') m_{f_{\bK',\bK}(\nu')}$. The following lemma can be checked easily. \begin{lem} Let the notation be as above. \begin{enumerate} \item The map $f^*_{\bK',\bK} : \prod_{\nu \in X'_{\bK,i}} M \to \prod_{\nu' \in X'_{\bK',i}} M$ is a homomorphism of $\{\pm 1\}$-modules. \item The map $f^*_{\bK',\bK} : \prod_{\nu \in X'_{\bK,i}} M \to \prod_{\nu' \in X'_{\bK',i}} M$ sends an element in the subgroup $\bigoplus_{\nu \in X'_{\bK,i}} M \subset \prod_{\nu \in X'_{\bK,i}} M$ to an element in $\bigoplus_{\nu \in X'_{\bK',i}} M$. \item For $1 \le i \le d-1$, the diagrams $$ \begin{CD} \prod_{\nu \in X'_{\bK,i}} M @>{\wt{\partial}_{i,\prod}}>> \prod_{\nu \in X'_{\bK,i-1}} M \\ @V{f_{\bK',\bK}^*}VV @V{f_{\bK',\bK}^*}VV \\ \prod_{\nu' \in X'_{\bK',i}} M @>{\wt{\partial}_{i,\prod}}>> \prod_{\nu' \in X'_{\bK',i-1}} M \end{CD} $$ and $$ \begin{CD} \bigoplus_{\nu \in X'_{\bK,i}} M @>{\wt{\partial}_{i,\oplus}}>> \bigoplus_{\nu \in X'_{\bK,i-1}} M \\ @V{f_{\bK',\bK}^*}VV @V{f_{\bK',\bK}^*}VV \\ \bigoplus_{\nu' \in X'_{\bK',i}} M @>{\wt{\partial}_{i,\oplus}}>> \bigoplus_{\nu' \in X'_{\bK',i-1}} M \end{CD} $$ are commutative. \end{enumerate} \end{lem} \section{Limits} \subsection{} For an abelian group $M$, we set \[ H_*(X_{\lim,\bullet},M)=H_*(X_{\GL_d,\lim,\bullet},M)=\varinjlim_{\bK}H_*(X_{\bK,\bullet},M) \] and \[ H_*^\mathrm{BM}(X_{\lim,\bullet},M)=H_*^\mathrm{BM}(X_{\GL_d,\lim,\bullet},M) = \varinjlim_{\bK}H_*^\mathrm{BM}(X_{\bK,\bullet},M). \] Here the transition maps in the inductive limits are given by $f^*_{\bK',\bK}$. 
\subsection{} For $g \in \GL_d(\A^\infty)$, we let $\wt{\xi}_g : \wt{X}_{\bK,\bullet} \xto{\cong} \wt{X}_{g^{-1} \bK g,\bullet}$ denote the isomorphism of simplicial complexes induced by the isomorphism $\GL_d(\A^\infty)/\bK \xto{\cong} \GL_d(\A^\infty)/g^{-1}\bK g$ that sends a coset $h \bK$ to the coset $hg\cdot g^{-1} \bK g$ and by the identity on $\cBT_{\bullet}$. The isomorphism $\wt{\xi}_g$ induces an isomorphism $\xi_{g} :X_{\bK,\bullet} \xto{\cong} X_{g^{-1} \bK g,\bullet}$ of simplicial complexes. For two elements $g, g' \in \GL_d(\A^\infty)$, we have $\xi_{gg'}=\xi_{g'}\circ \xi_g$. \subsection{} The isomorphisms $\xi_g$ for $g\in \GL_d(\A^\infty)$ give rise to a smooth action (i.e., the stabilizer of each vector is a compact open subgroup) of the group $\GL_d(\A^\infty)$ on these inductive limits. If $M$ is a torsion free abelian group, then for each compact open subgroup $\bK \subset \GL_d(\A^\infty)$, the homomorphism $H_*(X_{\bK,\bullet},M) \to H_*(X_{\lim,\bullet},M)$ is injective and its image is equal to the $\bK$-invariant part $H_*(X_{\lim,\bullet},M)^{\bK}$ of $H_*(X_{\lim,\bullet},M)$. A similar statement holds for $H_*^\mathrm{BM}$. \section{the Steinberg representation and harmonic cochains} In this section, we consider coefficients in $\Q$-vector spaces. We show how the limit of the Borel-Moore homology/homology corresponds to a sub-$\GL_d(\A^\infty)$-representation of the space of automorphic forms. \subsection{} Let $\St_{d, \C}$ denote the Steinberg representation as defined, for example, in \cite[p.193]{Laumon1}. It is defined with coefficients in $\C$, but it can also be defined with coefficients in $\Q$ in a similar manner. We let $\St_{d, \Q}$ denote the corresponding representation. \subsection{} \begin{definition} A harmonic cochain with values in a $\Q$-vector space $M$ is defined as an element of $\Hom(H^{d-1}_c(\cBT_\bullet,\Q),M)$.
\end{definition} \begin{lem}\label{lem:Steinberg} For a $\Q$-vector space $M$, there is a canonical, $\GL_d(F_\infty)$-equivariant isomorphism between the module of $M$-valued harmonic $(d-1)$-cochains and the module $\Hom_{\Q}(\St_{d,\Q},M)$. \end{lem} \begin{proof} It is shown in~\cite[6.2,6.4]{Borel} that $\St_{d,\C}$ is canonically isomorphic to $H^{d-1}_c(\cBT_\bullet,\C)$ as a representation of $\GL_d(F_\infty)$. One can check that this map is defined over $\Q$. This proves the claim. \end{proof} \subsection{} \label{sec:6.1.2} We let $\cBT_{j,*}$ denote the quotient $\wt{\cBT}_j/F_{\infty}^{\times}$. This set is identified with the set of pairs $(\sigma, v)$ with $\sigma \in \cBT_j$ and $v \in \cBT_0$ a vertex of $\sigma$, which we call a pointed $j$-simplex. Here the element $(L_i)_{i\in \Z} \mod K^\times$ of $\wt{\cBT}_j/K^\times$ corresponds to the pair $((L_i)_{i\in \Z}, L_0)$ via this identification. \subsection{} We identify the set $\wt{\cBT}_{0}$ with the coset $\GL_d(K)/\GL_d(\cO)$ by associating to an element $g \in \GL_d(K)/\GL_d(\cO)$ the lattice $\cO_{V}g^{-1}$. Let $\cI=\{(a_{ij})\in \GL_d(\cO) \,|\, a_{ij}\,\mathrm{mod}\, \varpi =0 \ \text{if}\ i>j\}$ be the Iwahori subgroup. Similarly, we identify the set $\wt{\cBT}_{d-1}$ with the coset $\GL_d(K)/\cI$ by associating to an element $g\in \GL_d(K)/\cI$ the chain of lattices $(L_i)_{i \in \Z}$ characterized by $L_i=\cO_{V}\Pi_ig^{-1}$ for $i=0,\dots,d$. Here, for $i=0,\dots,d$, we let $\Pi_i$ denote the diagonal $d\times d$ matrix $\Pi_i=\diag(\varpi,\ldots,\varpi,1,\ldots,1)$ with $\varpi$ appearing $i$ times and $1$ appearing $d-i$ times. \subsection{} Let $\bK \subset \GL_d(\A^\infty)$ be an open compact subgroup. Let $M$ be a $\Q$-vector space. Let $\cC^\bK(M)$ denote the ($\Q$-vector) space of locally constant $M$-valued functions on $\GL_d(F)\bsl \GL_d(\A)/(\bK \times F_\infty^\times)$. Let $\cC_c^\bK(M) \subset \cC^\bK(M)$ denote the subspace of compactly supported functions.
\begin{lem}\label{lem:Steinberg8} \begin{enumerate} \item There is a canonical isomorphism $$ H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},M) \cong \Hom_{\GL_d(F_\infty)} (\St_{d,\Q}, \cC^\bK(M)), $$ where $\cC^\bK(M)$ denotes the space of locally constant $M$-valued functions on $\GL_d(F)\bsl \GL_d(\A)/(\bK\times F_\infty^\times)$. \item Let $v \in \St_{d, \Q}^\cI$ be a non-zero Iwahori-spherical vector. Then the image of the evaluation map $$ \begin{array}{l} \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC^\bK(M)) \\ \to \Map(\GL_d(F)\bsl \GL_d(\A)/(\bK \times F_\infty^\times \cI),M) \end{array} $$ at $v$ is identified with the image of the map $$ \begin{array}{rl} H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},M) & \to \Map(\GL_d(F)\bsl (\GL_d(\A^\infty)/\bK \times \cBT_{d-1,*}),M) \\ & \cong \Map(\GL_d(F)\bsl \GL_d(\A)/(\bK \times F_\infty^\times \cI),M). \end{array} $$ \end{enumerate} \end{lem} \begin{proof} For a $\C$-vector space $M$, (1) is proved in \cite[Section 5.2.3]{KY:Zeta elements}, and (2) is \cite[Corollary 5.7]{KY:Zeta elements}. The proofs and the argument in loc. cit. work for a $\Q$-vector space $M$ as well. \end{proof} \begin{cor} \label{cor:Steinberg8} Under the isomorphism in Lemma~\ref{lem:Steinberg8}~(1), the subspace $$ H_{d-1}(X_{\bK,\bullet},M) \subset H_{d-1}^\mathrm{BM}(X_{\bK,\bullet},M) $$ corresponds to the subspace $$ \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC_c^\bK(M)) \subset \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC^\bK(M)). $$ \end{cor} \begin{proof} This follows from Lemma~\ref{lem:Steinberg8} (2) and the definition of the homology group $H_{d-1}(X_{\bK, \bullet}, M)$. \end{proof} \subsection{} The proof of the following lemma is straightforward and is left to the reader. \begin{lem} Let the notation be as above. \begin{enumerate} \item Suppose that $\bK'$ is a normal subgroup of $\bK$. Then the homomorphism $f^*_{\bK',\bK}$ induces an isomorphism $H^\mathrm{BM}_*(X_{\bK,\bullet},M) \cong H^\mathrm{BM}_*(X_{\bK',\bullet},M)^{\bK/\bK'}$ and a similar statement holds for $H_*$.
\item Let $M$ be a $\Q$-vector space. Then the diagrams $$ \begin{CD} H^\mathrm{BM}_{d-1} (X_{\bK,\bullet}, M) @>{\cong}>> \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC^\bK(M)) \\ @V{f^*_{\bK',\bK}}VV @VVV \\ H^\mathrm{BM}_{d-1} (X_{\bK',\bullet}, M) @>{\cong}>> \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC^{\bK'}(M)) \end{CD} $$ and $$ \begin{CD} H_{d-1} (X_{\bK,\bullet}, M) @>{\cong}>> \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC_c^\bK(M)) \\ @V{f^*_{\bK',\bK}}VV @VVV \\ H_{d-1} (X_{\bK',\bullet}, M) @>{\cong}>> \Hom_{\GL_d(F_\infty)}(\St_{d,\Q}, \cC_c^{\bK'}(M)) \end{CD} $$ are commutative. Here the horizontal arrows are the isomorphisms given in Lemma~\ref{lem:Steinberg8} and Corollary~\ref{cor:Steinberg8}, and the right vertical arrows are the maps induced by the quotient map $\GL_d(F) \bsl \GL_d(\A) /(\bK' \times F_\infty^\times) \to \GL_d(F) \bsl \GL_d(\A) /(\bK\times F_\infty^\times)$. \end{enumerate} \end{lem} \begin{cor} \label{prop:homology autom} We have isomorphisms \[ H_{d-1}(X_{\lim,\bullet},\C) \cong \cA_{\C,\mathrm{cusp},\St} \] and \[ H^\BM_{d-1}(X_{\lim,\bullet},\C) \cong \cA_{\C,\St}. \] \end{cor} \begin{proof} This follows from the previous lemma and the definitions. \end{proof} \chapter{Proof for universal modular symbols} \label{ch:pf for ums} In this chapter, we prove Propositions~\ref{prop:thm13(2) AR}, \ref{prop:thm13(3) AR}, \ref{prop:thm13(4) AR}, \ref{prop:thm13(5) AR} and Corollary \ref{cor:thm13(1) AR}. These are the statements of our main theorem (Theorem~\ref{lem:apartment}) with our modular symbols replaced by (the image of) the universal modular symbols. We show that our modular symbols coincide with the universal modular symbols in Chapter~\ref{ch:compare ms}. The content of this chapter is outlined in Section~\ref{sec:ums outline}. \section{Modular symbols for automorphic forms} Modular symbols are elements of $H_{d-1}(\Gamma \backslash \cBT_\bullet, \Z)$ for some arithmetic subgroup $\Gamma$.
For a possible application to automorphic forms, we may look at all the connected components. Recall that the space $\cA^\bK_{\St,\Z}$ of $\bK$-invariant $\Z$-valued automorphic forms is isomorphic to $H_{d-1}^\BM(X_{\bK, \bullet}, \Z)$. We have a non-adelic description of $X_{\bK, \bullet}$. We set $J_\bK=\GL_d(F)\backslash \GL_d(\A^\infty)/\bK$, take a set $\{g_j\}$ of representatives of $J_\bK$, and set $\Gamma_j=\GL_d(F) \cap g_j \bK g_j^{-1}$. Then \[ X_{\bK,\bullet} \cong \coprod_{j \in J_\bK} \Gamma_j\backslash \cBT_\bullet. \] For a compact open subgroup $\bK \subset \GL_d(\A^\infty)$, we set \[ \MS(\bK)= \bigoplus_j \MS(\Gamma_j) \subset \bigoplus_j H_{d-1}(\Gamma_j \backslash \cBT_\bullet,\Z) \cong H_{d-1}(X_{\bK,\bullet}, \Z) \subset H^\BM_{d-1}(X_{\bK,\bullet}, \Z) = \cA^\bK_{\St, \Z}. \] This is the space of modular symbols for $\bK$-invariant automorphic forms $\cA^\bK_{\St, \Z}$. We are interested in the size of the cokernel of the inclusion $\MS(\bK) \subset \cA^\bK_{\St, \Z}$. For this, it suffices to study the cokernel of $\MS(\Gamma) \subset H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet, \Z)$ for an arithmetic subgroup $\Gamma$, because each $\Gamma_j$ is an arithmetic subgroup. \section{Connection with the Tits building} \label{sec:Tits} Let $\Gamma$ be an arithmetic subgroup. By definition, there exists a compact open subgroup $\bK \subset \GL_d(\A^\infty)$ such that $\Gamma=\bK \cap \GL_d(F)$. Recall the non-adelic description from Section~\ref{subsec:X_K}. We set $J_\bK=\GL_d(F)\backslash \GL_d(\A^\infty)/\bK$, take a set $\{g_j\}$ of representatives of $J_\bK$, and set $\Gamma_j=\GL_d(F) \cap g_j \bK g_j^{-1}$. Then \[ X_{\bK,i} \cong \coprod_{j \in J_\bK} \Gamma_j\backslash \cBT_i. \] Let $e \in J_\bK$ denote the class of the identity element of $\GL_d(\A^\infty)$, and choose the identity as its representative $g_e$. Then $\Gamma=\Gamma_e$. We will identify $\Gamma \backslash \cBT_\bullet$ with a connected component $\Gamma_e \backslash \cBT_\bullet \subset X_{\bK, \bullet}$.
We also consider $\cBT_\bullet$ as a subset $\cBT_\bullet \subset \widetilde{X}_{\bK, \bullet}$ by identifying it with $\{e\} \times \cBT_\bullet$. We have a map \[ \cBT_\bullet \subset \widetilde{X}_{\bK,\bullet} \to X_{\bK, \bullet} \] which factors through $\Gamma \backslash \cBT_\bullet$. Let $\alpha$ be a positive real number. Let $\cD \subset \{1, \dots, d-1\}$ be a subset. Let $f=[0\subset V_1 \subset \dots \subset V_{r-1} \subset F^{\oplus d}] \in \Flag_\cD$ be a flag. We defined a subsimplex $\widetilde{X}_{\bK, \bullet}^{(\alpha), \cD, f}$ in Section~\ref{sec:Xsigma}. We define \[ \cBT_\bullet^{(\alpha), \cD, f} =\cBT_\bullet \cap \widetilde{X}_{\bK, \bullet}^{(\alpha), \cD, f} \subset \widetilde{X}_{\bK,\bullet}^{(\alpha)}. \] We set \[ \cBT_\bullet^{(\alpha)} = \bigcup_{\cD, f} \cBT_\bullet^{(\alpha), \cD, f}. \] \begin{lem} \label{lem:contractible single} The space $\cBT_\bullet^{(\alpha), \cD, f}$ is non-empty and contractible. \end{lem} \begin{proof} This is Lemma \ref{7_contract} stated for $\cBT_\bullet$. \end{proof} Recall that we defined $T_{F^{\oplus d}}$ to be the Tits building of $\mathrm{SL}_d$ over $F$. \begin{cor}[see \cite[Corollary 4.2]{Gra}] \label{cor:Grayson htpy eq} There is a $\Gamma$-equivariant homotopy equivalence \[ |\cBT_{\bullet}^{(\alpha)}| \cong |T_{F^{\oplus d}}|. \] \end{cor} \begin{proof} The proof is the same as that in loc.\ cit.\ using Lemma~\ref{lem:contractible single} above. \end{proof} \section{the Solomon-Tits theorem} \begin{thm}[Solomon-Tits] \label{thm:ST} If $d \ge 2$, the Tits building $T_{F^{\oplus d}}$ has the homotopy type of a bouquet of $(d-2)$-dimensional spheres. \end{thm} \begin{proof} See \cite[Thm 5.1]{Gra}. \end{proof} \begin{lem} \label{lem:Solomon} We have \[ H_{i}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z) \cong \left\{ \begin{array}{ll} H_{d-2}(T_{F^{\oplus d}}, \Z) & i=d-1, \\ 0 & i\neq d-1. \end{array} \right.
\] \end{lem} \begin{proof} Since $\cBT_\bullet$ is contractible (see \cite[Thm 2.1]{Gra}), the claim follows from Corollary~\ref{cor:Grayson htpy eq} and the Solomon-Tits theorem (Theorem \ref{thm:ST}). \end{proof} \section{the Lyndon-Hochschild-Serre spectral sequence} \label{sec:LHS ss} We have the following spectral sequence. \[ E^2_{p,q}= H_p(\Gamma, H_q(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)) \Rightarrow H_{p+q}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z) \] By Lemma \ref{lem:Solomon}, we have $E^2_{p,q}=0$ unless $q=d-1$. Hence the spectral sequence collapses at $E^2$ and we have moreover \[ E^2_{0,d-1} \cong E_{d-1}, \] where $E_{d-1}$ denotes the abutment $H_{d-1}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)$. Composing with the canonical surjection to the coinvariants, we obtain a surjection \[ H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z) \rightarrow H_0(\Gamma, H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z) ) \cong H_{d-1}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z). \] \section{the second spectral sequence} \label{sec:2nd ss} Let us write $X=\cBT_\bullet$ and $X'=\cBT_\bullet^{(\alpha)}$ for short. \subsection{} We use the following spectral sequence (\cite[\S VII 7]{Brown}, see also Section~\ref{sec:another ss}) \[ E_{p,q}^1= \bigoplus_{\sigma \in \Sigma_p} H_q(\Gamma_\sigma, \chi_\sigma) \Rightarrow H_{p+q}^\Gamma(X, X'; \Z) \] where $\Sigma_p$ is the set of $p$-simplices of $X \setminus X'$. Note that by definition \[ E_{i,0}^2=H_{i}(\Gamma \bsl X, \Gamma \bsl X'; \Z) \] for $0 \le i \le d$. Because $E_{p,q}^1=0$ for $p \le -1$, we have \[ E_{d-1,0}^\infty=\cdots=E_{d-1,0}^d. \] Because $E_{p,q}^1=0$ for $q \le -1$, we have \[ E_{d-1,0}^{k+1}=\Ker\, d^{k} \] where $d^{k}: E_{d-1,0}^k \to E^k_{d-1-k,k-1}$ is the differential. Composing with the canonical map $E_{d-1} \to E_{d-1,0}^2$, we obtain a homomorphism \[ H^\Gamma_{d-1}(X, X'; \Z) \to H_{d-1}(\Gamma \backslash X,\Gamma \backslash X'; \Z) \] whose image is $E_{d-1,0}^d$. \subsection{} \label{sec:d argument} Let us give some estimate on the size of the image $E_{d-1,0}^d$.
Let $e_{p,q}^r$ denote the exponent of $E_{p,q}^r$. For each $k \ge 2$, we have an exact sequence \[ 0 \to E_{d-1,0}^{k+1} \to E_{d-1,0}^k \xto{d^{k}} E_{d-1-k,k-1}^k, \] hence \[ e_{d-1-k,k-1}^k E_{d-1,0}^k \subset \Ker\, d^{k}=E_{d-1,0}^{k+1} \subset E_{d-1,0}^k. \] Therefore \[ \prod_k e_{d-1-k,k-1}^k E_{d-1,0}^2 \subset E_{d-1,0}^d \subset E_{d-1,0}^2, \] where $k$ runs over the integers with $2 \le k \le d-1$. It is clear that $e_{p,q}^r$ divides $e_{p,q}^1$ for all $p,q,r$, hence \[ \prod_k e_{d-1-k,k-1}^1 E_{d-1,0}^2 \subset E_{d-1,0}^d \subset E_{d-1,0}^2. \] It follows from Corollary \ref{cor:finite p-group homology} that $E_{s,t}^1$ is killed by $p^{1+t(d-2)}$ for $t\ge 1$. That is, $e_{s,t}^1$ divides $p^{1+t(d-2)}$. Applying this with $(s,t)=(d-1-k,k-1)$ and noting that \[ \sum_{k=2}^{d-1}\bigl(1+(k-1)(d-2)\bigr) = (d-2)\left(1+\frac{(d-1)(d-2)}{2}\right), \] we therefore have \[ p^{e(d)}E_{d-1,0}^2 \subset E_{d-1,0}^d \subset E_{d-1,0}^2 \] where $e(d)=(d-2)\left(1+\frac{(d-1)(d-2)}{2}\right)$. We arrive at the following lemma. \begin{lem} \label{lem:exponent prop} Let $\bK \subset \GL_d(\A^\infty)$ be a pro-$p$ compact open subgroup. Let $\Gamma=\GL_d(F) \cap \bK$. Then \[ H_{d-1}^\Gamma(X, X';\Z) \to H_{d-1}(\Gamma \backslash X, \Gamma\backslash X';\Z)\] is injective and the cokernel is annihilated by $p^{e(d)}$. \end{lem} \section{Corestriction and transfer} \subsection{} Let $\Gamma \supset \Gamma'$ be arithmetic subgroups. For a $\Z[\Gamma]$-module $M$, we denote by $\cores_M : M_{\Gamma'} \to M_{\Gamma}$ the map induced by the identity map of $M$. We define the transfer map $\tr_M : M_{\Gamma} \to M_{\Gamma'}$ (cf.\ \cite[III, 9.\ (B), p.\ 81]{Brown}) by \[ \tr(\overline{m}^\Gamma) = \sum_{g\in \Gamma' \bsl \Gamma} \overline{gm}^{\Gamma'} \] where $\overline{m}^\Gamma$ and $\overline{m}^{\Gamma'}$ denote the class of $m \in M$ in the coinvariants $M_\Gamma$ and $M_{\Gamma'}$ respectively. By definition, the composite $\cores_M \circ \tr_M$ is equal to the map given by the multiplication by $[\Gamma:\Gamma']$. \subsection{} Let $M_{2, \bullet} \to M_{1, \bullet}$ be a morphism of complexes of $\Z[\Gamma]$-modules.
Since $\cores_M$ and $\tr_M$ are functorial in $M$, we have a commutative diagram $$ \begin{CD} (M_{2,\bullet})_{\Gamma} @>{\tr_{M_{2,\bullet}}}>> (M_{2,\bullet})_{\Gamma'} @>{\cores_{M_{2,\bullet}}}>> (M_{2,\bullet})_{\Gamma} \\ @VVV @VVV @VVV \\ (M_{1,\bullet})_{\Gamma} @>{\tr_{M_{1,\bullet}}}>> (M_{1,\bullet})_{\Gamma'} @>{\cores_{M_{1,\bullet}}}>> (M_{1,\bullet})_{\Gamma} \end{CD} $$ of complexes of abelian groups. \subsection{} Let $Y_\bullet \supset Z_\bullet$ be simplicial complexes with (compatible) $\Gamma$-actions. Set $M_{1,\bullet} = C_\bullet^\cell(Y_\bullet,\Z)/ C_\bullet^\cell(Z_\bullet,\Z)$ and $M_{2,\bullet}=C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|,\Z)/ C_\bullet^\cell(|Z_\bullet| \times |E\Gamma_\bullet|,\Z)$. The quotient maps of spaces give a morphism $M_{2, \bullet} \to M_{1, \bullet}$ of complexes. By taking the $(d-1)$-st homology, we obtain the following commutative diagram of abelian groups: $$ \begin{CD} H_{d-1}^\Gamma(Y_\bullet, Z_\bullet; \Z) @>>> H_{d-1}^{\Gamma'}(Y_\bullet, Z_\bullet; \Z) @>>> H_{d-1}^\Gamma(Y_\bullet, Z_\bullet; \Z) \\ @V{\beta_\Gamma}VV @V{\beta_{\Gamma'}}VV @VV{\beta_\Gamma}V \\ H_{d-1}(\Gamma \bsl Y_\bullet, \Gamma \bsl Z_\bullet; \Z) @>>> H_{d-1}(\Gamma' \bsl Y_\bullet, \Gamma' \bsl Z_\bullet;\Z) @>>> H_{d-1}(\Gamma \bsl Y_\bullet, \Gamma \bsl Z_\bullet;\Z). \end{CD} $$ Here $\beta_{\Gamma}$ and $\beta_{\Gamma'}$ are the maps induced by the quotient maps for the groups $\Gamma$ and $\Gamma'$ respectively. By taking the cokernels of the vertical arrows, we obtain the maps $$ \Coker\, \beta_{\Gamma} \xto{\alpha_1} \Coker\, \beta_{\Gamma'} \xto{\alpha_2} \Coker\, \beta_{\Gamma}. $$ By construction, the composite $\alpha_2 \circ \alpha_1$ is equal to the map given by the multiplication by $[\Gamma:\Gamma']$. \subsection{} \begin{cor} \label{cor:exponent alpha} Let $v \neq \infty$ be a prime of $F$, and let $F_v$ denote the completion of $F$ at $v$.
Let $\bK_v$ be a pro-$p$ open compact subgroup of $\GL_d(F_v)$. Let us consider the intersection $\Gamma' = \Gamma \cap \bK_v$ in $\GL_d(F_v)$. Then \[ H_{d-1}^\Gamma(X, X';\Z) \to H_{d-1}(\Gamma \backslash X, \Gamma\backslash X';\Z)\] is injective and the cokernel is annihilated by $p^{e(d)} [\Gamma:\Gamma']$. \end{cor} \begin{proof} It follows from Lemma~\ref{lem:exponent prop} that $\Coker\, \beta_{\Gamma'}$ is killed by $p^{e(d)}$. Hence by the discussion above, $\Coker\, \beta_\Gamma$ is killed by $p^{e(d)} [\Gamma:\Gamma']$. This implies the claim. \end{proof} \subsection{} We prove the analogue of Theorem~\ref{lem:apartment}(4). \begin{cor} \label{cor:pedNd} Let $v_0 \neq \infty$ be a prime of $F$ such that the cardinality $q_0$ of the residue field $\kappa(v_0)$ at $v_0$ is smallest among those at the primes $v \neq \infty$. Set $N(d) = \prod_{i=1}^{d} (q_0^i-1)$. Then the cokernel of the injective map \[ H_{d-1}^\Gamma(X, X';\Z) \to H_{d-1}(\Gamma \backslash X, \Gamma\backslash X';\Z) \] is annihilated by $p^{e(d)} N(d)$. \end{cor} \begin{proof} Let $\alpha$ be a positive real number. We use the discussion above with $Y_\bullet=\cBT_\bullet$ and $Z_\bullet=\cBT^{(\alpha)}_\bullet$. Since $\Gamma$ is an arithmetic subgroup, $\Gamma$ is contained, as a subgroup of $\GL_d(F_{v_0})$, in a compact open subgroup of $\GL_d(F_{v_0})$. Let $\cO_{v_0}$ denote the ring of integers in $F_{v_0}$. Since any maximal compact subgroup of $\GL_d(F_{v_0})$ is a conjugate of $\GL_d(\cO_{v_0})$, there exists $g \in \GL_d(F_{v_0})$ such that $g^{-1} \Gamma g$ is contained in $\GL_d(\cO_{v_0})$. Let $\kappa(v_0)$ denote the residue field at $v_0$ and choose a $p$-Sylow subgroup $P$ of $\GL_d(\kappa(v_0))$.
Let $\Gamma'$ denote the inverse image of $P$ under the composite $$ \Gamma \xto{f_1} \GL_d(\cO_{v_0}) \xto{f_2} \GL_d(\kappa(v_0)), $$ where $f_1$ is the map that sends $\gamma \in \Gamma$ to $g^{-1} \gamma g$ and $f_2$ is the map induced by the ring homomorphism $\cO_{v_0} \surj \kappa(v_0)$. Since $g f_2^{-1}(P) g^{-1}$ is a pro-$p$ compact open subgroup of $\GL_d(F_{v_0})$, it follows from Corollary~\ref{cor:exponent alpha} that the cokernel of the map $\beta_\Gamma$ above is killed by $p^{e(d)} [\Gamma:\Gamma']$. Since $[\Gamma:\Gamma']$ divides $[\GL_d(\kappa(v_0)):P] = N(d)$, the claim follows. \end{proof} \section{the Mittag-Leffler condition} \label{sec:Mittag-Leffler} Let $\alpha \in \Z$ be a positive integer. We write $X=\cBT_\bullet$ and $X^{(\alpha)}=\cBT_\bullet^{(\alpha)}$ for short. Let \[ A^{(\alpha)} = \mathrm{Image} (H_{d-1}(X,X^{(\alpha)}) \to H_0(\Gamma, H_{d-1}(X, X^{(\alpha)})) \to H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha)})). \] For $\alpha < \alpha'$, the diagram \[ \begin{CD} H_{d-1}(X, X^{(\alpha)}) @>>> H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha)}) \\ @VVV @VVV \\ H_{d-1}(X, X^{(\alpha')}) @>>> H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha')}) \end{CD} \] is commutative (the maps are those induced by the inclusion map $X^{(\alpha')} \subset X^{(\alpha)}$). Since the left vertical map is an isomorphism (both are isomorphic to $H_{d-2}(T_{F^{\oplus d}}, \Z)$), we obtain that $A^{(\alpha)}$ surjects onto $A^{(\alpha')}$. Hence the projective system \[ \{A^{(\alpha)}\}_{\alpha} \] satisfies the Mittag-Leffler condition. We set \[ C^{(\alpha)} = \mathrm{Coker} (H_{d-1}(X,X^{(\alpha)}) \to H_0(\Gamma, H_{d-1}(X, X^{(\alpha)})) \to H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha)})) \] so that we have an exact sequence \[ 0 \to A^{(\alpha)} \to H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha)}) \to C^{(\alpha)} \to 0 \] for each $\alpha$.
Note that each $C^{(\alpha)}$ is a finite abelian group. Because the Mittag-Leffler condition is satisfied, the following sequence \[ 0 \to \varprojlim_\alpha A^{(\alpha)} \to \varprojlim_\alpha H_{d-1}(\Gamma\backslash X, \Gamma\backslash X^{(\alpha)}) \to \varprojlim_\alpha C^{(\alpha)} \to 0 \] is also exact (see \cite[p.83]{WeHA}). It follows that the cokernel of the map \begin{eqnarray} \label{eqn:almost all} H_{d-2}(T_{F^{\oplus d}},\Z) \to \varprojlim_\alpha H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma\backslash \cBT_\bullet^{(\alpha)};\Z) \end{eqnarray} is annihilated by the least common multiple of the exponents of $C^{(\alpha)}$. We turn the corollaries above into the following propositions. \section{Proof} \subsection{} \begin{lem} \label{lem:BM limit isom} We have \[ H_{d-1}^\BM (\Gamma \backslash \cBT_\bullet, \Z) \cong \varprojlim_\alpha H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma\backslash \cBT_\bullet^{(\alpha)};\Z). \] \end{lem} \begin{proof} There are canonical maps $H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet, \Z) \to H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma\backslash \cBT_\bullet^{(\alpha)}; \Z)$ which assemble to give a map to the limit. Then the claim follows from Lemma~\ref{7_lem:finiteness}. \end{proof} \subsection{} \label{sec:AR ms} We consider the composite of the map \eqref{eqn:almost all} and the map in Lemma \ref{lem:BM limit isom}: \[ H_{d-2}(T_{F^{\oplus d}},\Z) \to H_{d-1}^\BM(\Gamma\backslash \cBT_\bullet, \Z). \] Let us write $\MS(\Gamma)_{AR}$ for the image of the map. By Proposition \ref{prop:MS Z-basis}, this is the submodule generated by the images of the universal modular symbols. The subscript $AR$ stands for Ash-Rudolph modular symbols. \subsection{} Let us prove the following proposition, which will imply Theorem~\ref{lem:apartment}(2) later. \begin{prop} \label{prop:thm13(2) AR} Let $\bK \subset \GL_d(\A^\infty)$ be a pro-$p$ compact open subgroup. Let $\Gamma=\GL_d(F) \cap \bK$.
Then \[ p^{e(d)} H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_{AR}. \] \end{prop} \begin{proof} This follows from Lemmas~\ref{lem:exponent prop}, \ref{lem:Solomon}, the surjection at the end of Section~\ref{sec:LHS ss}, and the discussion in Section~\ref{sec:Mittag-Leffler} on the inverse limit. \end{proof} \subsection{} Let us prove the following proposition, which will imply Theorem~\ref{lem:apartment}(3) later. \begin{prop} \label{prop:thm13(3) AR} Let $v \neq \infty$ be a prime of $F$, and let $F_v$ denote the completion of $F$ at $v$. Let $\bK_v$ be a pro-$p$ open compact subgroup of $\GL_d(F_v)$. Let us consider the intersection $\Gamma' = \Gamma \cap \bK_v$ in $\GL_d(F_v)$. Then \[ p^{e(d)} [\Gamma:\Gamma'] H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_{AR}. \] \end{prop} \begin{proof} This follows from Corollary~\ref{cor:exponent alpha}, Lemma~\ref{lem:BM limit isom} and the discussion in Section~\ref{sec:Mittag-Leffler} on the inverse limit. \end{proof} \subsection{} We prove the analogue of Theorem~\ref{lem:apartment}(4). \begin{prop} \label{prop:thm13(4) AR} Let $v_0 \neq \infty$ be a prime of $F$ such that the cardinality $q_0$ of the residue field $\kappa(v_0)$ at $v_0$ is smallest among those at the primes $v \neq \infty$. Set $N(d) = \prod_{i=1}^{d} (q_0^i-1)$. Then \[ p^{e(d)} N(d) H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Z) \subset \MS(\Gamma)_{AR}. \] \end{prop} \begin{proof} This follows from Corollary~\ref{cor:pedNd}, Lemma~\ref{lem:BM limit isom} and the discussion in Section~\ref{sec:Mittag-Leffler} on the inverse limit. \end{proof} \begin{cor} \label{cor:thm13(1) AR} We have \[ H^\BM_{d-1}(\Gamma \bsl \cBT_\bullet, \Q) = \MS(\Gamma)_{AR}\otimes \Q. \] \end{cor} \begin{proof} This follows immediately from Proposition~\ref{prop:thm13(4) AR} by tensoring with $\Q$. \end{proof} \section{the case $d=2$} When $d=2$, we do not need the computation of the homology groups of the stabilizer groups. Hence we obtain the optimal result, namely that the exponent is $1$.
\begin{prop} \label{prop:thm13(5) AR} When $d=2$, we have \[ H^\BM_{1}(\Gamma \bsl \cBT_\bullet, \Z) =\MS(\Gamma)_{AR}. \] \end{prop} \begin{proof} When $d=2$, in the argument given in Section~\ref{sec:d argument}, we only need to look at one edge map in the spectral sequence. It is easy to see that the edge map is surjective. Hence the claim follows. \end{proof} \chapter{Some spectral sequences} In this chapter, we mention two spectral sequences that we use in Chapter~\ref{ch:pf for ums}. They are standard ones that can be found in textbooks such as \cite{Brown} or \cite{WeHA}. Because there are minor differences, we record them here. The reader may safely skip this chapter. \section{Cellular structure on some products} \label{seq:CW_prod} Let $Y_\bullet$ be a simplicial complex and $Y'_\bullet$ a simplicial set. The geometric realizations $|Y_\bullet|$ and $|Y'_\bullet|$ have canonical structures of CW complexes, induced by the triangulations given by the simplices in $Y_\bullet$ and $Y'_\bullet$, respectively. The complex computing the cellular (co)homology groups of the CW complex $|Y_\bullet|$ is identical to the complex in Section \ref{sec:def homology} computing the (co)homology groups of $Y_\bullet$. Suppose that $|Y_\bullet|$ is locally compact. Then, by \cite[Theorem A.6, p.525]{Hatcher}, the structures of CW complexes on $|Y_\bullet|$ and $|Y'_\bullet|$ naturally induce a structure of CW complex on the direct product $|Y_\bullet| \times |Y'_\bullet|$ of topological spaces. For any integer $i \ge 0$, the cells of dimension $i$ are indexed by the disjoint union $$ \coprod_{j=0}^i Y_j \times Y'^{\nd}_{i-j}, $$ where $Y'^{\nd}_{i-j} \subset Y'_{i-j}$ denotes the set of non-degenerate $(i-j)$-simplices in $Y'_\bullet$, and the closure of any cell whose index is in $Y_j \times Y'^\nd_{i-j}$ is isomorphic to the direct product of a $j$-simplex and an $(i-j)$-simplex equipped with a natural structure of CW complex.
We call this structure of CW complex on $|Y_\bullet| \times |Y'_\bullet|$ the product cellular structure on $|Y_\bullet| \times |Y'_\bullet|$. For a CW complex $Z$, we denote by $C_\bullet^\cell(Z,\Z)$ the complex computing the cellular homology groups of $Z$. Since the two projections $|Y_\bullet| \times |Y'_\bullet| \to |Y_\bullet|$ and $|Y_\bullet| \times |Y'_\bullet| \to |Y'_\bullet|$ are cellular maps, they induce maps $C_\bullet^\cell(|Y_\bullet| \times |Y'_\bullet|,\Z) \to C_\bullet^\cell(|Y_\bullet|,\Z)$ and $C_\bullet^\cell(|Y_\bullet| \times |Y'_\bullet|,\Z) \to C_\bullet^\cell(|Y'_\bullet|,\Z)$ of complexes. We will later use the terminology in this paragraph when $Y_\bullet = \cBT_\bullet$ and $Y'_\bullet = E\Gamma_\bullet$. In this case, we note that $|\cBT_\bullet|$ is locally compact. \section{Equivariant homology} \label{Equivariant homology} Let $\Gamma \subset \GL_d(F)$ be an arithmetic subgroup. We define the simplicial set (not a simplicial complex) $E\Gamma_\bullet$ as follows. We define $E\Gamma_n=\Gamma^{n+1}$ to be the $(n+1)$-fold direct product of $\Gamma$ for $n\ge 0$. The set $\Gamma^{n+1}$ is naturally regarded as the set of maps of sets $\Map(\{0,\dots, n\}, \Gamma)$, and from this one naturally obtains the structure of a simplicial set. We let $|E\Gamma_\bullet|$ denote the geometric realization of $E\Gamma_\bullet$. Then $|E\Gamma_\bullet|$ is contractible. We let $\Gamma$ act diagonally on each $E\Gamma_n$ $(n\ge 0)$. The induced action on $|E\Gamma_\bullet|$ is free. Let $M$ be a topological space on which $\Gamma$ acts. The diagonal action of $\Gamma$ on $M\times |E\Gamma_\bullet|$ is free. We let $H_*^\Gamma(M, B)= H_*(\Gamma\bsl(M \times |E\Gamma_\bullet|), B)$ where $B$ is a coefficient ring, and call it the equivariant homology of $M$ with coefficients in $B$. We also use the relative version, and define equivariant cohomology in a similar manner.
\section{The Lyndon-Hochschild-Serre spectral sequence} Let $Y_\bullet \subset Z_\bullet$ be simplicial complexes with compatible $\Gamma$-actions. We have (see \cite[p.172, VII 7]{Brown}) the following spectral sequence. \[ E^2_{p,q}= H_p(\Gamma, H_q(Z_\bullet, Y_\bullet; \Z)) \Rightarrow H_{p+q}^\Gamma(Z_\bullet, Y_\bullet; \Z) \] \section{Another spectral sequence} \label{sec:another ss} We use another spectral sequence which also converges to equivariant homology groups. We make a slight change from \cite[VII.7]{Brown} because we use particular cell structures. The reader may skip this section. \subsection{} Let $\Gamma$ be an arithmetic subgroup. We let $Y_\bullet$ be a simplicial complex with $\Gamma$-action. We consider the product $|Y_\bullet| \times |E\Gamma_\bullet|$ as in Section \ref{seq:CW_prod}. Since $\Gamma$ acts freely on the set $E\Gamma_i$ for every $i \ge 0$, it also acts freely on the set of $i$-dimensional cells with respect to the product cellular structure on $|Y_\bullet| \times |E\Gamma_\bullet|$, where $\Gamma$ acts diagonally on the product $|Y_\bullet| \times |E\Gamma_\bullet|$. Hence the product cellular structure on $|Y_\bullet| \times |E\Gamma_\bullet|$ induces a structure of CW complex on the quotient $\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|$, which enables us to consider the complex $C_\bullet^\cell(\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|, \Z)$ computing the cellular homology groups of $\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|$. \subsection{} Consider the quotient maps \[ |Y_\bullet| \to \Gamma\bsl |Y_\bullet| \] and \[ |Y_\bullet|\times |E\Gamma_\bullet| \to \Gamma\bsl |Y_\bullet|\times |E\Gamma_\bullet|. \] These induce morphisms of complexes \[ C_\bullet^\cell(|Y_\bullet|, \Z) \to C_\bullet^\cell(\Gamma\bsl|Y_\bullet|, \Z) \] and \[ C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|, \Z) \to C_\bullet^\cell(\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|, \Z).
\] It is easy to see that the induced maps \[ C_\bullet^\cell(|Y_\bullet|, \Z)_\Gamma \cong C_\bullet^\cell(\Gamma\bsl|Y_\bullet|, \Z) \] and \[ C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|, \Z)_\Gamma \cong C_\bullet^\cell(\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|, \Z), \] where the subscript $\Gamma$ denotes the $\Gamma$-coinvariants, are isomorphisms. \subsection{} Since $|E\Gamma_\bullet|$ is contractible, the complex $C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$ of $\Z[\Gamma]$-modules is quasi-isomorphic to $\Z$ with trivial $\Gamma$-action, regarded as a complex concentrated in degree $0$. Since $\Gamma$ acts freely on the set of $i$-dimensional cells in $|E\Gamma_\bullet|$ for every $i \ge 0$, the complex $C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$ gives a $\Z[\Gamma]$-free resolution of $\Z$ with trivial $\Gamma$-action. (We note that $C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$ is not identical to the standard $\Z[\Gamma]$-free resolution of $\Z$, since the degenerate simplices in $E\Gamma_\bullet$ are removed in $C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$.) \subsection{} By construction, we can identify the complex $C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|,\Z)$ with the simple complex associated with the double complex $$ C_{\bullet,\bullet'} = C_\bullet^\cell(|Y_\bullet|, \Z) \otimes_\Z C_{\bullet'}^\cell(|E\Gamma_\bullet|,\Z) $$ of $\Z[\Gamma]$-modules (with respect to a suitable sign convention). \subsection{} Let $C_{\bullet, 0}$ denote the double complex whose entry at the bidegree $(i,j)$ is equal to $C_{i,0}$ if $j=0$, and is zero otherwise. The projection map $|Y_\bullet| \times |E\Gamma_\bullet| \to |Y_\bullet|$ induces the map of complexes of $\Z[\Gamma]$-modules: $$ C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|, \Z) \to C_\bullet^\cell(|Y_\bullet|, \Z). $$ This map is induced by the projection map \[ C_{\bullet,\bullet'} \surj C_{\bullet, 0} \] of double complexes.
\subsection{} As we remarked in the previous paragraph, the complex $C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$ gives a $\Z[\Gamma]$-free resolution of $\Z$. Hence for each $j \ge 0$, the complex $C_j^\cell(|Y_\bullet|, \Z) \otimes_\Z C_\bullet^\cell(|E\Gamma_\bullet|,\Z)$ gives a $\Z[\Gamma]$-free resolution of the $\Z[\Gamma]$-module $C_j^\cell(|Y_\bullet|, \Z)$. Hence the double complex $(C_{\bullet,\bullet'})_\Gamma$ induces a spectral sequence \begin{equation} \label{eq:ss1} E^{1}_{s,t} = H_t(\Gamma,C_s^\cell(|Y_\bullet|,\Z)) \Longrightarrow H_{s+t}^\Gamma(Y_\bullet, \Z) \end{equation} where \[ \begin{array}{rl} H_{s+t}^\Gamma(Y_\bullet, \Z) =& H_{s+t}(C_\bullet^\cell(|Y_\bullet| \times |E\Gamma_\bullet|,\Z)_\Gamma) \\ =& H_{s+t}(C_\bullet^\cell(\Gamma \bsl |Y_\bullet| \times |E\Gamma_\bullet|, \Z)). \end{array} \] By definition, we have \[ E^2_{s,0} = H_s((C_\bullet^\cell(|Y_\bullet|, \Z))_\Gamma) \cong H_s(C_\bullet^\cell(\Gamma \bsl |Y_\bullet|, \Z))= H_s(\Gamma \bsl Y_\bullet, \Z). \] \subsection{} For each $i \ge 0$ and for each $\sigma \in \Gamma\bsl Y_i$, let us choose a representative $\wt{\sigma} \in Y_i$ of $\sigma$. Let $\Gamma_{\wt{\sigma}} \subset \Gamma$ denote the stabilizer of $\wt{\sigma}$. As we mentioned in Section \ref{sec:def Gamma}, it follows from the conditions (1) and (3) that $\Gamma_{\wt{\sigma}}$ is a finite subgroup of $\Gamma$. The group $\Gamma_{\wt{\sigma}}$ may act non-trivially on the set $O(\wt{\sigma})$ of orientations of $\wt{\sigma}$. Hence we obtain a character $\chi_{\wt{\sigma}} : \Gamma_{\wt{\sigma}} \to \{\pm 1\}$, where an element $\gamma \in \Gamma_{\wt{\sigma}}$ is sent to $-1$ under $\chi_{\wt{\sigma}}$ if and only if $\gamma$ acts non-trivially on $O(\wt{\sigma})$.
Then it follows from the construction that for each $i \ge 0$, the $\Z[\Gamma]$-module $C_i^\cell(|Y_\bullet|,\Z)$ is isomorphic to the direct sum $$ C_i^\cell(|Y_\bullet|, \Z) \cong \bigoplus_{\sigma \in \Gamma\bsl Y_i} \Z[\Gamma] \otimes_{\Z[\Gamma_{\wt{\sigma}}]} \chi_{\wt{\sigma}} $$ where $\chi_{\wt{\sigma}} = \Z$ with the action of $\Gamma_{\wt{\sigma}}$ given by $\chi_{\wt{\sigma}}$. By Shapiro's lemma, this induces an isomorphism $$ E^1_{s,t} \cong \bigoplus_{\sigma \in \Gamma\bsl Y_s} H_t(\Gamma_{\wt{\sigma}}, \chi_{\wt{\sigma}}) $$ of abelian groups. \subsection{} Let $Y_\bullet$ be as above and let $Z_\bullet \subset Y_\bullet$ be a subsimplicial complex stable under the $\Gamma$-action. Then we obtain the following spectral sequence in a similar manner: \[ E^{1}_{s,t} = H_t(\Gamma,C_s^\cell(|Y_\bullet|,\Z)/C_s^\cell(|Z_\bullet|,\Z)) \Longrightarrow H^\Gamma_{s+t}(Y_\bullet, Z_\bullet; \Z). \] The $E^1$ terms are \[ E_{s,t}^1 \cong \bigoplus_{\sigma \in (\Gamma \bsl Y_s) \setminus (\Gamma \bsl Z_s)} H_t(\Gamma_{\wt{\sigma}},\chi_{\wt{\sigma}}). \] \chapter{Introduction} \section{Modular forms and automorphic forms} \label{sec:modauto} Automorphic forms are fundamental objects of study in number theory. We obtain some very basic results concerning automorphic forms (satisfying some condition at a fixed prime) for $\GL_d$ (where $d$ is a positive integer) over the function field of a curve over a finite field (or a global field of positive characteristic). Automorphic forms are defined for a reductive algebraic group $G$ over a global field $F$ (either a finite extension of $\Q$ or of $\mathbb{F}_q(t)$ for a finite field $\mathbb{F}_q$ of $q$ elements). They are functions on the adele points $G(\A_F)$ which are invariant under translation by $G(F)$, satisfying certain conditions at some place(s) of $F$. In practice, for fields $F$ of characteristic $0$, they are realized as functions on some quotient of a symmetric space, satisfying certain (real analytic) conditions.
For example, (elliptic) modular forms are automorphic forms for $\GL_2$ over the rationals $\Q$. They are functions on the upper half plane (or, equivalently, on the quotient $\mathrm{SL}_2(\R)/\mathrm{SO}_2(\R)$), and certain conditions at the real place amount to the modularity condition of some weight. In this article, we study certain automorphic forms for $\GL_d$ for $d \ge 1$ over a global field of positive characteristic. We fix a place $\infty$ of $F$. The automorphic forms that we consider are functions on $\GL_d(\A_F)$, which are invariant under $\GL_d(F)$, satisfying a condition at the place $\infty$. The slogan here is that we consider ``automorphic forms whose associated automorphic representation is the Steinberg representation at $\infty$''. This will be made precise later. As an analogue of the symmetric space in positive characteristic, there are the Bruhat-Tits buildings. We use the Bruhat-Tits building of $\PGL_d$ over $F_\infty$, where $F_\infty$ is the local field (completion) at the place $\infty$. It is a simplicial complex of dimension $d-1$ whose sets of simplices are quotients of $\PGL_d(F_\infty)$. For example, the set of zero simplices is isomorphic to $\GL_d(F_\infty)/\GL_d(\cO_\infty)$, where $\cO_\infty \subset F_\infty$ is the ring of integers, and the set of $(d-1)$-dimensional simplices is isomorphic to $\GL_d(F_\infty)/\cI$, where $\cI \subset \GL_d(\cO_\infty)$ is the Iwahori subgroup consisting of those matrices that are congruent to an upper triangular matrix modulo the maximal ideal of $\cO_\infty$. There are many interpretations of the simplices (e.g. in terms of $\cO_\infty$-lattices, of norms, and, for quotients, of vector bundles over the proper smooth curve $C$ whose function field is $F$) which we will use. For the dictionary between our function field setup and the setup for modular forms, see the table in Section~\ref{sec:dictionary}.
\section{Classical modular symbols} In the study of modular forms, especially of weight 2 (automorphic forms for $\GL_2$ over $\Q$), one useful tool is modular symbols, as invented by Eichler and Shimura and developed by Manin \cite{Manin1}. To introduce them, recall the following geometric setup for modular forms. We consider some arithmetic subgroup $\Gamma \subset \mathrm{SL}_2(\Z)$. Main examples include congruence subgroups such as $\Gamma_1(N)$ for a positive integer $N$. Then modular forms appear as 1-forms on the analytic space $\Gamma \backslash \cH$ or on $\Gamma\backslash\cH^*$, where $\cH$ is the upper half plane and $\cH^*=\cH \cup \mathbb{P}^1(\Q)$. The quotients have algebraic models defined over some number field, and $\Gamma\backslash\cH^*$ is a smooth compactification of $\Gamma\backslash\cH$. The set $\mathbb{P}^1(\Q)$ or the quotient $\Gamma\backslash\mathbb{P}^1(\Q)$ is called the set of cusps. The Eichler-Shimura isomorphism states that 1-forms on $\Gamma\backslash\cH^*$ are exactly the cusp forms of weight 2 (automorphic forms with some condition near the boundary in a compactification; non-cusp forms are easier to study in that, for one thing, they come from a smaller algebraic group). A modular symbol in this geometric setup is a path from $a$ to $b$ in $\cH^*$ for cusps $a,b$, or its class in the relative homology $H_1(\Gamma\backslash\cH^*, \{\text{cusps}\}; \C)$. The relation with modular forms is given by integration, integrating a 1-form (a modular form) along the path from $a$ to $b$. We may regard modular symbols as elements in the dual of cusp forms. One main theorem is that the modular symbols generate the dual space of cusp forms. This property enables us to study cusp forms using modular symbols, which are amenable to computation. We refer to Manin's fairly recent introductory article \cite{Manin} for more information. The reader is also referred to \cite{FKSms} for applications in Iwasawa theory.
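Concretely, the relation between modular symbols and modular forms alluded to above can be written as an integration pairing. The following display is only meant as a guide (the normalization is the standard one and plays no role in this article): for a cusp form $f$ of weight $2$ on $\Gamma$ and cusps $a, b \in \mathbb{P}^1(\Q)$, one sets \[ \langle \{a,b\}, f \rangle = \int_a^b f(z)\,dz, \] the integral being taken along any path from $a$ to $b$ in $\cH^*$; since the $1$-form $f(z)\,dz$ is $\Gamma$-invariant and closed, the pairing depends only on the class of the path in $H_1(\Gamma\backslash\cH^*, \{\text{cusps}\}; \C)$.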
\section{Higher dimensional modular symbols of Ash and Rudolph} \label{sec:AR ms intro} Ash and Rudolph consider higher dimensional modular symbols in \cite{AR}. The automorphic forms they treat are for $\GL_d$ over the rationals $\Q$, and they are functions on the symmetric space $X=\mathrm{SL}_d(\R)/\mathrm{SO}_d(\R)$. They use the Borel-Serre bordification (a partial compactification) $\overline{X}$ of $X$ (as a generalization of $\cH^*$). Let $\Gamma$ be a torsion free arithmetic subgroup and set $M=\Gamma\backslash X$ and $\overline{M}=\Gamma\backslash\overline{X}$. Then their modular symbols are elements of the relative homology group $H_{d-1}(\overline{M}, \overline{M} \setminus M; \Z)$. By Poincar\'e duality, this group is dual to $H^N(M;\Z)=H^N(\Gamma;\Z)$, where $N=d(d-1)/2$, which is closely related to the space of cusp forms. To construct elements in the relative homology, they introduce universal modular symbols as elements of the homology groups $H_{d-2}(T_d, \Z)$ of the Tits building $T_d$ (the simplicial complex associated with the poset of flags in $\Q^d$). (The Solomon-Tits theorem says that $T_d$ is homotopy equivalent to a bouquet of $(d-2)$-spheres.) This $T_d$ is homotopy equivalent to $\overline{X} \setminus X$, and the projection map to the quotient gives elements in $H_{d-1}(\overline{M}, \overline{M} \setminus M; \Z)$. Given an ordered basis $q_1, \dots, q_d$ of $\Q^d$, they construct a map from a $(d-2)$-sphere to $T_d$. They call the homology class of this sphere a universal modular symbol and denote it by $[q_1, \dots, q_d]$. It is known that universal modular symbols generate the homology group $H_{d-2}(T_d, \Z)$. Their main theorem is that, when $\Gamma$ is torsion free, the modular symbols attached to the ordered bases $q_1, \dots, q_d$ for which the matrix $(q_1, \dots, q_d)$ lies in $\mathrm{SL}_d$ (not merely in $\mathrm{GL}_d$) span the homology group $H_{d-1}(\overline{M}, \overline{M} \setminus M;\Z)$.
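For later orientation, let us also recall the standard relations satisfied by the universal modular symbols; they are stated here only as a guide, and we refer to \cite{AR} for the precise statements. For nonzero vectors $q_0, q_1, \dots, q_d \in \Q^d$ one has \[ \begin{array}{l} {[q_1, \dots, q_d] = 0} \quad \text{if } q_1, \dots, q_d \text{ is not a basis,} \\ {[q_{\sigma(1)}, \dots, q_{\sigma(d)}] = \mathrm{sgn}(\sigma)\,[q_1, \dots, q_d]} \quad \text{for a permutation } \sigma, \\ {[a q_1, q_2, \dots, q_d] = [q_1, q_2, \dots, q_d]} \quad \text{for } a \in \Q^\times, \\ {\displaystyle\sum_{i=0}^{d} (-1)^i \,[q_0, \dots, \widehat{q_i}, \dots, q_d] = 0,} \end{array} \] where $\widehat{q_i}$ means that the entry $q_i$ is omitted.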
If $\Gamma$ is not torsion free, there always exists a torsion free subgroup $\Gamma' \subset \Gamma$ of finite index. Their result in this case is that the quotient of the homology group by the space of modular symbols is killed by the index $[\Gamma:\Gamma']$. They also give an algorithm for expressing a general modular symbol in terms of such ``unimodular'' modular symbols. We do not consider the analogue of this theorem. One reason is that their proof makes use of the fact that $\Z$ is a Euclidean domain. The analogue of $\Z$, namely $A$, in our setup need not be a Euclidean domain. \section{Our result on modular symbols} \label{sec:result ms} We take $F$ to be a global field of positive characteristic and fix a place $\infty$ as above. The space of automorphic forms we consider can be defined as follows. Let $C_\C=\Hom(\GL_d(F) \backslash \GL_d(\A), \C)$ be the space of $\C$-valued functions on the adele points that are invariant under the action of $\GL_d(F)$. We let $C_\C^\infty=\bigcup_\bK C_\C^\bK$, where $\bK$ runs over the open compact subgroups of $\GL_d(\A)$ and the superscript $()^\bK$ means the $\bK$-invariants. This is the space of smooth vectors. Let $\St_\C$ denote the Steinberg representation of $\GL_d(F_\infty)$ and let $\cI \subset \GL_d(\cO_\infty)$ be the Iwahori subgroup. It is known that the Iwahori fixed part $\St_\C^\cI$ is one dimensional. Take a non-zero vector $v \in \St_\C^\cI$. Then we define \[ \cA_{\St_\C,\C}=\mathrm{Image}(\Hom_{\GL_d(F_\infty)} (\St_\C, C_\C^\infty) \to C_\C^\infty) \] where the arrow is the evaluation at $v$. This is the space of automorphic forms with Steinberg at infinity, as mentioned in Section~\ref{sec:modauto}, and is the main object of our study. We use the Bruhat-Tits building $\cBT_\bullet$ of $\PGL_d(F_\infty)$ as an analogue of the symmetric space $\mathrm{SL}_d(\R)/\mathrm{SO}_d(\R)$.
We set $X_{\bK, \bullet} =\GL_d(F) \backslash (\GL_d(\A^\infty) \times \cBT_\bullet)/\bK$ for the ring of finite adeles $\A^\infty$ and for a compact open subgroup $\bK \subset \GL_d(\A^\infty)$. (This is the analogue of the double coset description of the $\C$-valued points of a Shimura variety.) We show that there is a canonical isomorphism \[ \varinjlim_\bK H_{d-1}^{\BM} (X_{\bK, \bullet}, \C) \cong \cA_{\St_\C, \C} \] and study the geometry of $X_{\bK, \bullet}$. Let us assume for simplicity that $X_{\bK, \bullet}=\Gamma \backslash \cBT_\bullet$ for some arithmetic subgroup $\Gamma \subset \GL_d(F)$. (In general, it is a finite disjoint union of such quotients. This is analogous to the fact that $\coprod_i \Gamma_i\backslash\cH$, where the $\Gamma_i$ are some arithmetic groups, is isomorphic to the $\C$-valued points of a modular curve.) Let us give the definition of our modular symbols; it is analogous to that of the paths in $\cH^*$. By the definition of Bruhat-Tits buildings, $\cBT_\bullet$ is the union of subsimplicial complexes called apartments. The apartments are indexed by the set of $F_\infty$-bases of $F_\infty^{\oplus d}$. Let $A_{q_1,\dots, q_d, \bullet} \subset \cBT_\bullet$ denote the apartment corresponding to the basis $q_1, \dots, q_d$. We have a map \[ A_{q_1, \dots, q_d, \bullet} \subset \cBT_\bullet \to \Gamma \backslash \cBT_\bullet. \] For an arithmetic subgroup $\Gamma$ and an $F$-basis (that is, a basis of $F^{\oplus d}$ regarded as a basis of $F_\infty^{\oplus d}$), this map is locally finite (in the sense that the inverse image of a simplex is a finite set). In this case, the image in $H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet, \Z)$ of the fundamental class in $H_{d-1}^{\BM}(A_{q_1, \dots, q_d, \bullet}, \Z)$ is well defined, where $H^\BM$ means the Borel-Moore homology. Our modular symbols are defined to be the elements of this form as $q_1, \dots, q_d$ runs over all $F$-bases.
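Let us illustrate the definition in the simplest case $d=2$, where $\cBT_\bullet$ is a tree; the description below is the standard one. The apartment attached to a basis $q_1, q_2$ of $F_\infty^{\oplus 2}$ is a line (a bi-infinite path) in the tree whose vertices are the homothety classes of the $\cO_\infty$-lattices \[ L_n = \pi^{-n} \cO_\infty q_1 \oplus \cO_\infty q_2 \qquad (n \in \Z), \] where $\pi$ denotes a uniformizer of $F_\infty$, consecutive vertices being joined by an edge. The two ends of this line play the role of the two cusps $a, b$ in the classical picture, and the image of the line in $\Gamma \backslash \cBT_\bullet$ is the analogue of the image of the classical path from $a$ to $b$.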
Our modular symbols are a priori different from the modular symbols coming from universal modular symbols (the analogue of those of Ash and Rudolph). We prove that they actually coincide. Our main theorem gives a bound on the exponent of the quotient of $H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet, \Z)$ by the subgroup generated by modular symbols. We have a uniform bound which is independent of the choice of $\Gamma$. The prime-to-$p$ part (where $p$ is the characteristic of $F$) depends only on the base field and divides the order of $\GL_d(\mathbb{F}_{q'})$ for some explicitly given $q'$. The exponent of the $p$-part is given explicitly in terms of $d$. \section{Outline of proof for universal modular symbols} \label{sec:ums outline} In Chapter~\ref{ch:pf for ums}, we prove our main theorem on modular symbols, but for universal modular symbols. We give an outline of the proof in this section. \subsection{} Let $\Gamma \subset \GL_d(F)$ be an arithmetic subgroup. We construct (as described in Section~\ref{sec:result ms} above) modular symbols in the Borel-Moore homology $H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet, \Z)$ as the (fundamental) classes of the apartments corresponding to $F$-bases. The goal is to describe the size of the cokernel of the (injective) map from the ($\Z$-module of) modular symbols to the Borel-Moore homology of $\Gamma \backslash \cBT_\bullet$. To achieve this goal, we use the universal modular symbols of Ash and Rudolph. That is, we construct a map \[ H_{d-2}(T_d) \to H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet) \] from the homology of the Tits building $T_d$. This homology group is the space generated by universal modular symbols. We compute the cokernel of this map. Then we show that the image coincides with the space of (our) modular symbols. \subsection{} The analogue of the map \[ H_{d-2}(T_d) \to H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet) \] appears in Ash and Rudolph as described briefly in Section~\ref{sec:AR ms intro}; however, the construction is different.
We believe our method is simpler and might apply to their case as well. There is a remark in Section~\ref{sec:compare AR}. First, we express the Borel-Moore homology as an inverse limit: \[ H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet) \cong \varprojlim_\alpha H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma \backslash \cBT_\bullet^{(\alpha)}). \] Here, $\alpha$ runs over positive real numbers, and $\cBT_\bullet^{(\alpha)}$ is a subsimplicial complex consisting of ``more unstable than $\alpha$'' vector bundles. \subsection{} The following is the key sequence in our proof: \[ \begin{array}{ll} H_{d-2}(T_d) \cong H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \twoheadrightarrow H_0(\Gamma, H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)})) \\ \cong H_{d-1}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \to H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma \backslash \cBT_\bullet^{(\alpha)}). \end{array} \] The second map is the canonical surjection to the coinvariants. Let us describe the other three maps. \subsection{} The first map (an isomorphism) is obtained by following Grayson's method (see \cite[Cor 4.2]{Gra}). Let us explain this here. Recall that each 0-simplex of $\Gamma \backslash \cBT_\bullet$ can be interpreted as a locally free sheaf (vector bundle) on the curve $C$ whose function field is $F$. It can be seen from the work of Grayson that the semi-stable ones lie in the ``middle'', whereas the unstable ones are closer to the ``boundary''. The picture to have in mind is in Serre's book (near \cite[p.106, II.2, Thm 9]{Trees}), where the quotient of the building of dimension 1 (a tree) is discussed in detail. Only the finite graph in the middle consists of semi-stable ones, and the halflines, after a few steps, consist of unstable ones only. We define subsimplicial complexes $\cBT_\bullet^{(\alpha)}$ by using the Harder-Narasimhan function, which measures how unstable a vector bundle is.
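For the reader's convenience, let us recall the slope formalism underlying the Harder-Narasimhan function; the conventions below are the standard ones and serve only as a guide to the definitions. For a nonzero vector bundle $E$ on $C$, the slope is \[ \mu(E) = \frac{\deg E}{\mathrm{rank}\, E}, \] and $E$ is semi-stable if $\mu(E') \le \mu(E)$ for every nonzero subbundle $E' \subset E$. Every $E$ admits a unique Harder-Narasimhan filtration $0 = E_0 \subsetneq E_1 \subsetneq \dots \subsetneq E_k = E$ whose successive quotients $E_i/E_{i-1}$ are semi-stable of strictly decreasing slopes, and the failure of semi-stability can be measured by how spread out these slopes are. Roughly speaking, $\cBT_\bullet^{(\alpha)}$ consists of the simplices all of whose vertices correspond to vector bundles that are unstable beyond the threshold $\alpha$ in this sense.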
In particular, all the simplices of $\cBT_\bullet^{(\alpha)}$ for sufficiently large $\alpha$ correspond to unstable ones. (If $d=2$, $\cBT_\bullet^{(\alpha)}$ consists of the halflines, which become shorter as $\alpha$ grows.) Recall on the other hand that a simplex of the Tits building $T_d$ corresponds to a flag in $F^d$. Suppose that a $0$-simplex of $\cBT_\bullet^{(\alpha)}$ corresponds to an unstable vector bundle. Then there is a nontrivial Harder-Narasimhan filtration, and by taking the generic fiber, we obtain a filtration, or a flag, of $F^{\oplus d}$. This is how the two spaces $\cBT_\bullet^{(\alpha)}$ and $T_d$ are related. Using Grayson's method, we see that they are homotopy equivalent. Then, using that $\cBT_\bullet$ is contractible, we obtain the first isomorphism. \subsection{} Let us look at the third map, which is an isomorphism. There is a Lyndon-Hochschild-Serre spectral sequence (see Section~\ref{sec:LHS ss}) for a pair of spaces converging to the equivariant homology: \[ E^2_{p,q}= H_p(\Gamma, H_q(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)) \Rightarrow H_{p+q}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z). \] We can compute the $E^2$-page. The Solomon-Tits theorem says that the homotopy type of $T_d$ is a bouquet of $(d-2)$-spheres. Since $\cBT_\bullet$ is contractible and $\cBT_\bullet^{(\alpha)}$ is homotopy equivalent to $T_d$, the relative homology $H_q(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)$ vanishes unless $q=d-1$, so the $E^2$-terms vanish outside the single row $q=d-1$. Hence the edge map $E^2_{0,d-1}=H_0(\Gamma, H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)) \to H_{d-1}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z)$ is an isomorphism, and this is the third map. \subsection{} For the fourth map, we use the following spectral sequence \[ E_{p,q}^1= \bigoplus_{\sigma \in \Sigma_p} H_q(\Gamma_\sigma, \chi_\sigma) \Rightarrow H_{p+q}^\Gamma(\cBT_\bullet, \cBT_\bullet^{(\alpha)}; \Z) \] where $\Sigma_p$ is the set of $p$-simplices of $\cBT_\bullet \setminus \cBT_\bullet^{(\alpha)}$, $\Gamma_\sigma$ is the stabilizer group, and $\chi_\sigma$ is the representation associated with orientation.
Because it is a first quadrant spectral sequence and the $E_1$-terms $E_{s,t}^1$ vanish for $t>d$, we obtain the fourth map as the composition of (injective) differential maps. Thus the cokernel of the composition is equal to the cokernel of the fourth map. To compute it, we bound the order of the $E^1$ terms, or the order of the stabilizer groups. \subsection{Torsion in arithmetic subgroups} It is common in the characteristic zero case (see e.g.\ Borel-Serre, Ash-Rudolph) to assume that the arithmetic subgroup is torsion free. In that case, all the stabilizer groups are trivial, since they are finite subgroups of a torsion free group. Then the sought cokernel turns out to be trivial. The general case is reduced to the torsion free case because, given an arithmetic subgroup, there always exists a torsion free arithmetic subgroup of finite index. The cokernel in this case is killed by this index. However, in positive characteristic, an arithmetic subgroup always has nontrivial $p$-torsion, so we do not expect to do as well as in the characteristic zero case. Instead, we will see that given an arithmetic subgroup $\Gamma$, there always exists an arithmetic subgroup $\Gamma' \subset \Gamma$ of finite index which is $p'$-torsion free (each torsion element has order a power of $p$). We can bound the sought cokernel in the $p'$-torsion free case in terms of some power of $p$ depending only on the dimension $d$, by inspecting the shape of the $p$-torsion subgroups of an arithmetic subgroup. We can also bound the index $[\Gamma: \Gamma']$ in terms of $d$ and $q$. We arrive at the bound of the cokernel in general in terms of $d$, $p$, and $q$. \section{Outline for comparing modular symbols} \label{sec:outline comparison} After computing the cokernel for the image of the universal modular symbols, there remains the task of comparing the image with the space of our modular symbols.
The strategy is summarized in the following diagram (in Section~\ref{sec:CD comparison}): \\ \begin{tikzcd} H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet) \arrow[d,"\sim", "(1)"'] & H_{d-1}^\BM(A_\bullet) \arrow[dd, "\sim"] \arrow[l, "(10)"] \\ \varprojlim_\alpha H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma\backslash \cBT_\bullet^{(\alpha)}) & \\ \varprojlim_\alpha H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \arrow[u, "(2)"'] \arrow[d, "\sim", "(3)"'] & \varprojlim_\alpha H_{d-1}(A_\bullet, A_\bullet^{(\alpha)}) \arrow[l] \arrow[d, "\sim", "(6)"'] \\ H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \arrow[d, "\sim", "(4)"'] & H_{d-1}(A_\bullet, A_\bullet^{(\alpha)}) \arrow[l] \arrow[d, "\sim", "(7)"'] \\ H_{d-2}(\cBT_\bullet^{(\alpha)}) \arrow[d, "\sim", "(5)"'] & H_{d-2}(A_\bullet^{(\alpha)}) \arrow[d, "\sim", "(8)"'] \arrow[l] \\ H_{d-2}(T_d) & H_{d-2}(|\cP'(B)|) \arrow[l, "(9)"] \\ & H_{d-2}(S^{d-2}) \arrow[u,"\sim"'] \end{tikzcd} We first take an ordered basis $v_1,\dots,v_d$ of $F^{\oplus d}$. We then have an embedding $\varphi_{v_1, \dots, v_d}: A_\bullet \to \cBT_\bullet$. (The image of this was denoted $A_{v_1, \dots, v_d, \bullet}$ earlier.) This gives the top arrow as the pushforward by this embedding composed with the projection map $\cBT_\bullet \to \Gamma \backslash \cBT_\bullet$. We show that the Borel-Moore homology $H_{d-1}^\BM(A_\bullet)$ of an apartment is isomorphic to $\Z$ (the fundamental class is a generator). The image of $1 \in \Z$ under this pushforward is the definition of our modular symbol. The image of $1 \in \Z$ at the bottom right, via the horizontal maps followed by the left vertical maps, is the definition of the Ash-Rudolph modular symbol (or the modular symbol coming from the universal modular symbol). To prove that the two modular symbols coincide, we add the right vertical column and show that the diagram commutes. The commutativity of the squares except for the last one is not difficult.
We follow Grayson \cite{Gra} for the construction of the map (5). We need to look at the proof (which uses \cite{Quillen}) and see that it comes from a zigzag of morphisms of posets. We interpret the map (8) in a similar manner in terms of posets, and then the commutativity of the last square follows. \section{On the structure of the space of automorphic forms} We study the automorphic forms $\cA_{\St, \C}$ with Steinberg at infinity and the intersection $\cA_{\St, \mathrm{cusp}, \C}$ with the cusp forms. Using known results, we verify (Proposition~\ref{prop:66_3}) that $\cA_{\St, \mathrm{cusp}, \C}$ is the direct sum of the irreducible automorphic representations that are cuspidal and whose local representation at infinity is the Steinberg representation. We also obtain (Theorem~\ref{7_PROP1}) a similar description for the space $\cA_{\St, \C}$. While the space of $L^2$ automorphic forms is well studied, not all subrepresentations of $\cA_{\St, \C}$ are $L^2$, and the $L^2$ methods do not apply. Let us give some ideas and the outline of the proof. We first show that the space of automorphic forms is isomorphic to the Borel-Moore homology of (a finite union of) the quotient $X_{\bK, \bullet}$ of the Bruhat-Tits building by an arithmetic subgroup. We take the dual and work with the cohomology with compact support. Now, if there existed some good compactification $\overline{X}$ of $X=X_{\bK, \bullet}$, then we would be looking at an exact sequence \[ H^{d-2}(\overline{X}\setminus X) \to H^{d-1}_c(X) \to H^{d-1}(\overline{X}). \] (The analogy in the case of modular forms is $X=Y_0(N), \overline{X}=X_0(N), \overline{X}\setminus X=\{\text{cusps}\}$, and $d=2$. Then $H^1(\overline{X})$ is (roughly) the space of cusp forms, and the remaining task is to compute $H^0(\overline{X}\setminus X)$ as an automorphic representation.) While the compactifications given in \cite{FKS} might be helpful, we do not use them. There are two steps.
The first step is to express the cohomology as a limit as the boundary becomes smaller (again using the spaces $\cBT_\bullet^{(\alpha)}$). The second step is to express the limit as an induced representation using a covering spectral sequence. \section{Remarks} We give miscellaneous remarks. \subsection{Automorphic forms in positive characteristic} We study automorphic forms in positive characteristic. Because there are no archimedean places, there is no difficulty coming from complex or real analysis. In terms of automorphic forms, this means that all smooth automorphic forms are admissible automorphic forms (we refer to Cogdell's Lectures 3 and 4 in \cite{CKM} for the definitions). The theory of automorphic representations is then equivalent to the (simpler) theory of representations of Hecke algebras. For example, the decomposition into irreducible automorphic representations is simpler in positive characteristic. This also means that it is meaningful to consider automorphic forms with values in $\Z$, and this gives one natural $\Z$-structure (a $\Z$-submodule which spans the space of ($\C$-valued) automorphic forms). In the modular form case, there is the cohomology with coefficients in $\Z$ of the topological space $\Gamma \backslash \cH$, which gives a $\Z$-structure on the space of modular forms. However, that $\Z$-structure does not have an interpretation in terms of $\Z$-valued functions. One simplification occurs. The Borel-Moore homology, or the relative homology, in the Ash-Rudolph case is not readily related to the space of automorphic forms, but its Poincar\'e dual is. The pairing, in terms of automorphic forms, involves integration and hence periods of automorphic forms. For example, in the modular form case, the pairing involves values such as $\int_0^{i\infty} f(z)dz$ where $f$ is a modular form. In our case, the Borel-Moore homology is equal to the space of automorphic forms, hence our results are stated directly in terms of automorphic forms, with no reference to any pairing.
However, in our application \cite{KY:Zeta elements}, we do take a pairing of modular symbols and cusp forms, closely analogous to the integration above. The second author does not know what the natural formulation is. \subsection{} The space $\cA_{\St, \C}$ of automorphic forms with Steinberg at infinity arises as the \'etale cohomology of Drinfeld modular varieties. Drinfeld modular varieties may be regarded as analogues of Shimura varieties for $\PGL_d$ over $F$. (Shimura varieties are defined only in characteristic zero. Note also that there is no Shimura variety for $\GL_d$ if $d \ge 3$.) Thus, by studying the cohomology, Laumon \cite{Laumon1}, \cite{Laumon2} obtains results in the Langlands program for $\GL_d$ over a global field of positive characteristic. One drawback is that, via this method, one can treat only those automorphic representations whose local representation at the fixed place $\infty$ is the Steinberg representation. \subsection{Our old preprint} This article may be regarded as a revised version of our preprint \cite{KY:preprint}. The main result of loc.\ cit.\ is that the space of $\Q$-valued cusp forms with Steinberg at infinity is contained in the space generated by modular symbols tensored with $\Q$. The result in this article implies that statement. We took a very different approach there. In \cite{KY:preprint}, we used the Werner compactification \cite{We1}, \cite{We2} and used duality twice. One key observation there was that the Werner compactification is best suited for studying the group cohomology of arithmetic subgroups (rather than the homology of the quotient spaces), which in turn is closely related to the space of cusp forms. We no longer use this observation here. \section{Comparing with the Ash-Rudolph method} \label{sec:compare AR} Let us compare our method with that of Ash and Rudolph.
\subsection{} Ash and Rudolph \cite[p.245]{AR} use the following sequence: \[ H_{d-2}(T_d) \cong H_{d-2}(\partial \overline X) \cong H_{d-1}(\overline X, \partial \overline X) \xto{\pi_*} H_{d-1}(M, \partial M). \] Here $X=\SL_d(\R)/\SO_d(\R)$, $\overline{X}$ is the Borel-Serre bordification, $M=\Gamma \backslash \overline{X}$ is a manifold with boundary for some arithmetic subgroup $\Gamma$, and $M\setminus \partial M=\Gamma \backslash X$. The map $\pi_*$ is the canonical projection, which is shown to be surjective if $\Gamma$ is torsion free. In the proof of their result \cite[Prop 3.2]{AR}, however, they use Poincar\'e duality and show that $H_c^N(\overline{X}) \to H_c^N(M)$ is injective, where $N=d(d-1)/2$. They then translate these groups into the group cohomology of $\Gamma$, using the freeness of the action and the contractibility of the spaces $X$ and $\overline{X}$. They apply the Borel-Serre duality (of group cohomology) to prove the claimed injectivity. \subsection{} Let us point out some differences with our method. There is a difficulty in using a (partial) compactification of $\cBT_\bullet$. There exists a Borel-Serre type compactification of $\cBT_\bullet$ (in the same spirit as the Borel-Serre bordification of $X$). (There are also other compactifications. See \cite{FKS} for an overview.) One difference is that an arithmetic subgroup does not act freely on (the boundary of) the compactification, even when we pass to a subgroup of finite index. Thus there is some difficulty in connecting the homology groups of spaces to group cohomology (as was done by Ash and Rudolph). We also did not find an analogue of the Borel-Serre duality in the literature. There may also be some difficulty arising from the fact that an arithmetic subgroup is usually not torsion free. These considerations suggest that their method may not apply directly to our case. On the other hand, we believe that our method applies to the case treated by Ash and Rudolph.
Our method is more straightforward in that we do not use the two dualities. \section{Dictionary} \label{sec:dictionary} The following table is a dictionary between our function field setup (the right column) and the classical modular forms setup (the left column). See also Section~\ref{sec:modauto}. \begin{center} \begin{tabular}{| p{2cm} | p{45mm} | p{45mm} |} \hline base field & $\Q$ & $F$ (a global field of positive characteristic; the function field of a curve over a finite field) \\ \hline place & the real place $\infty$ & a fixed place $\infty$ \\ \hline integers & $\Z$ & $A$ (integral at all places other than $\infty$) \\ \hline completion & $\R$ & $F_\infty$ ($\cO_\infty$ the ring of integers) \\ \hline rank, dimension & $d=2$ & $d\ge 1$ \\ \hline algebraic group & $\mathrm{SL}_2$ over $\Q$ & $\PGL_d$ over $F$ \\ \hline symmetric space & $\mathfrak{H}=\mathrm{SL}_2(\R)/\mathrm{SO}_2(\R)$ the upper half plane (real manifold) & $\cB\cT_\bullet$, $\cB\cT_0=\PGL_d(F_\infty)/\PGL_d(\cO_\infty)$ the Bruhat-Tits building (simplicial complex) \\ \hline modular symbol & geodesic from $a$ to $b$ ($a,b \in \mathbb{P}^1(\Q)$) & apartment corresponding to a basis of $F^{\oplus d}$ \\ \hline arithmetic subgroup & $\Gamma \subset \mathrm{SL}_2(\Z)$ & $\Gamma \subset \GL_d(F)$ \\ \hline automorphic form & modular form of weight 2 and level $\Gamma$ & $\Gamma$-invariant harmonic function (cochain) on $\cB\cT_{d-1}$ \\ \hline automorphic representation & of $\GL_2(\A_\Q)$, discrete series (for weight 2) at $\infty$ & of $\GL_d(\A_F)$, Steinberg at $\infty$ \\ \hline homology & $H^{\BM}_1(\Gamma \backslash \cH, \Z)$ & $H^\BM_{d-1}(\Gamma \backslash\cB\cT_\bullet, \Z)$ \\ \hline cusp & $\mathbb{P}^1(\Q)$ & several choices \\ \hline (partial) compactification & $\cH \cup \mathbb{P}^1(\Q)$ & several choices \\ \hline \end{tabular} \end{center} \chapter{Comparison of modular symbols} \label{ch:compare ms} We defined two kinds of modular symbols.
Our modular symbols come from the fundamental classes of apartments, whereas the modular symbols of Ash-Rudolph (those coming from the universal modular symbols) originate in the homology of the Tits building. In this chapter, we show that the two definitions coincide. We refer to Section~\ref{sec:outline comparison} for the outline. As a consequence, we obtain a proof of our main theorem (Theorem~\ref{lem:apartment}). \section{Statement of comparison proposition} The goal of this chapter is to prove the following proposition. Let $q_1, \dots, q_d$ be an ordered basis of $F^{\oplus d}$. In Section~\ref{sec:def modular symbol}, we defined (our) modular symbol in the Borel-Moore homology of an arithmetic quotient. Let us write $[q_1, \dots, q_d]_A$ for it. In Section~\ref{sec:univ ms}, we defined the universal modular symbol $[q_1, \dots, q_d]$ in the homology of the Tits building. In Section~\ref{sec:AR ms}, we defined the modular symbol of Ash-Rudolph to be the image of the universal modular symbol in the Borel-Moore homology of an arithmetic quotient. Let us write $[q_1,\dots, q_d]_{AR}$ for it. The main task of this chapter is to show that the two coincide. \begin{prop} \label{PROP:COINCIDENCE MS} Let the notation be as above. We have \[ [q_1, \dots, q_d]_A=[q_1, \dots, q_d]_{AR}. \] In particular, we have $\MS(\Gamma)=\MS(\Gamma)_{AR}$. \end{prop} \section{{Proof of Theorem~\ref{lem:apartment}}} Using this proposition, we are now able to prove our main theorem. \begin{proof}[Proof of Theorem~\ref{lem:apartment}] Using Proposition~\ref{PROP:COINCIDENCE MS}, the statements follow from Propositions~\ref{prop:thm13(2) AR}, \ref{prop:thm13(3) AR}, \ref{prop:thm13(4) AR}, \ref{prop:thm13(5) AR} and Corollary \ref{cor:thm13(1) AR}. \end{proof} \section{Outline} \label{sec:CD comparison} \subsection{} Let $\alpha$ be a sufficiently large real number. We defined a subsimplicial complex $\cBT_\bullet^{(\alpha)} \subset \cBT_\bullet$ in Section~\ref{sec:Tits}.
Given an ordered basis $q_1, \dots, q_d$, we defined an injective map $\phi=\phi_{q_1,\dots, q_d}: A_\bullet \to \cBT_\bullet$ in Section~\ref{sec:521}. We regard $A_\bullet$ as a subsimplicial complex via $\phi$. We set $A_\bullet^{(\alpha)}=A_\bullet \cap \cBT_\bullet^{(\alpha)}$. \subsection{} We write $S^{d-2}$ for the $(d-2)$-sphere. We use the notation of Section~\ref{sec:Tits building}. We let $W=F^{\oplus d}$ and write $T_d=T_W$. The notation $\cP'(B)$ appeared in Section~\ref{sec:barycentric subdivision}. We use $|X|$ to denote the geometric realization of a poset $X$. (See below for the precise definition.) \subsection{} To prove the proposition, we look at the following diagram, which appeared in Section~\ref{sec:outline comparison}. All the coefficients are $\Z$. The map (1) is from Lemma~\ref{lem:BM limit isom}. The map (2) is the limit of the pushforwards by the canonical projections for each $\alpha$. The map (3) is the canonical projection to one of the entries in the limit. It is an isomorphism because the maps (4) and (5) are isomorphisms. The map (5) is from Corollary~\ref{cor:Grayson htpy eq}. The map (4) is the boundary map in the long exact sequence for a pair of spaces. It is an isomorphism because $\cBT_\bullet$ is contractible. The map (6) is the canonical projection. The map (7) is the boundary map, which is an isomorphism because $A_\bullet$ is contractible. The map (8) will be constructed below. The map (9) is the pushforward by the map induced by the morphism of posets $\cP'(B) \to Q(W)$ associated with the ordered basis. The map (10) appeared in Section~\ref{sec:def modular symbol}. The inclusions $A_\bullet \subset \cBT_\bullet$ and $A_\bullet^{(\alpha)} \subset \cBT_\bullet^{(\alpha)}$ give the rest of the horizontal maps.
\begin{equation} \label{main diagram} \begin{tikzcd} H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet) \arrow[d,"\sim", "(1)"'] & H_{d-1}^\BM(A_\bullet) \arrow[dd, "\sim"] \arrow[l, "(10)"] \\ \varprojlim_\alpha H_{d-1}(\Gamma \backslash \cBT_\bullet, \Gamma\backslash \cBT_\bullet^{(\alpha)}) & \\ \varprojlim_\alpha H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \arrow[u, "(2)"'] \arrow[d, "\sim", "(3)"'] & \varprojlim_\alpha H_{d-1}(A_\bullet, A_\bullet^{(\alpha)}) \arrow[l] \arrow[d, "\sim", "(6)"'] \\ H_{d-1}(\cBT_\bullet, \cBT_\bullet^{(\alpha)}) \arrow[d, "\sim", "(4)"'] & H_{d-1}(A_\bullet, A_\bullet^{(\alpha)}) \arrow[l] \arrow[d, "\sim", "(7)"'] \\ H_{d-2}(\cBT_\bullet^{(\alpha)}) \arrow[d, "\sim", "(5)"'] & H_{d-2}(A_\bullet^{(\alpha)}) \arrow[d, "\sim", "(8)"'] \arrow[l] \\ H_{d-2}(T_d) & H_{d-2}(|\cP'(B)|) \arrow[l, "(9)"] \\ & H_{d-2}(S^{d-2}) \arrow[u,"\sim"'] \end{tikzcd} \end{equation} The commutativity of the rectangles except the bottom one (the one with (5), (8), (9)) is easy to see. \subsection{} Recall the definition of $[q_1,\dots, q_d]_{AR} \in H_{d-1}^\BM(\Gamma \backslash \cBT_\bullet)$ from Section~\ref{sec:AR ms}. We have the universal modular symbol $[q_1, \dots, q_d]$ in the group $H_{d-2}(T_d)$ at the bottom left corner, and $[q_1, \dots, q_d]_{AR}$ is the image of $[q_1, \dots, q_d]$ under the left vertical maps. By construction, $|\cP'(B)|$ is canonically homotopy equivalent to $S^{d-2}$. By the definition (Section~\ref{sec:univ ms}) of the universal modular symbol, $[q_1,\dots, q_d]$ is the image of the fundamental class of $H_{d-2}(|\cP'(B)|)$ under the map (9). Our modular symbol $[q_1,\dots, q_d]_{A}$ is the image under (10) of the fundamental class of the apartment (Section~\ref{sec:def modular symbol}). Thus, to prove the proposition, the strategy is to construct the map (8) so that the bottom square commutes. \section{Quillen's lemma} We put our emphasis on posets in what follows.
For some terminology and results, we refer to \cite[p.102--103]{Quillen}. \subsection{} For a poset $W$, we defined the classifying space as the simplicial complex $(W, \cP_\tot(W))$, where $\cP_\tot(W)$ is the set of nonempty finite totally ordered subsets of $W$. We let $|W|$ denote the geometric realization of the classifying space. We call $W$ contractible if $|W|$ is contractible. We say that a morphism of posets $W_1 \to W_2$ is a homotopy equivalence if the induced map $|W_1| \to |W_2|$ of geometric realizations is a homotopy equivalence. \subsection{} We use the following lemma of Quillen. \begin{lem}[Cor 1.8 {\cite{Quillen}}] \label{lem:Quillen poset} Let $X, Y$ be posets and let $Z \subset X \times Y$ be a closed subset (i.e., $z' \leq z \in Z$ implies $z' \in Z$). Let $p_1:Z \to X$ and $p_2:Z \to Y$ be the maps induced by the projections. We set $Z_x=\{ y \in Y \,|\, (x,y) \in Z\}$ and $Z_y=\{ x \in X \,|\, (x,y) \in Z\}$. If $Z_x$ (resp.\ $Z_y$) is contractible for all $x \in X$ (resp.\ all $y \in Y$), then $p_1$ (resp.\ $p_2$) is a homotopy equivalence. \end{lem} \section{A key diagram of posets} \subsection{} For a simplicial complex $U_\bullet$, we define $(U_\bullet)$ to be the poset of simplices of $U_\bullet$. The classifying space of $(U_\bullet)$ is the barycentric subdivision of $U_\bullet$. (See the proof of Lemma 1.9 of \cite{Gra}.) \subsection{} We construct the following commutative diagram of posets: \begin{equation} \label{key diagram} \begin{CD} (\cBT_\bullet^{(\alpha)}) @<<{h_1}< (A(B)_\bullet^{(\alpha)}) \\ @A{f_1}AA @A{g_1}AA \\ U @<<{h_2}< T \\ @V{f_2}VV @V{g_2}VV \\ Q(F^{\oplus d}) @<<{h_3}< \cP'(B) \end{CD} \end{equation} We will see that the maps $f_1, f_2, g_1, g_2$ are homotopy equivalences. Upon taking geometric realizations, the diagram gives rise to the bottom square of the diagram \eqref{main diagram}. \section{The left column} We construct the left column of the key diagram. This is a slight generalization of the construction in Grayson \cite{Gra}.
\subsection{} We recall some notation from Section~\ref{subsec:X_K}. Let $d \ge 1$ and $\bK=\GL_d(\widehat{A})$. We defined $\widetilde{X}_{\bK,\bullet}$ and $X_{\bK, \bullet}$. The nonadelic description is $X_{\bK, \bullet}=\coprod_{j\in J_\bK} \Gamma_j \backslash \cBT_\bullet$. We consider the maps $\phi_j: \cBT_\bullet \to \Gamma_j \backslash \cBT_\bullet$ for $j \in J_\bK$. We set $\cBT_\bullet^{(j,\alpha)}=\phi_j^{-1}(X_{\bK, \bullet}^{(\alpha)})$. We note that this does not depend on the choice of $\bK$. We defined $\cBT_\bullet^{(\alpha)}$ earlier. We have $\cBT_\bullet^{(\alpha)}=\cBT_\bullet^{(e,\alpha)}$ where $e$ is the identity element. \subsection{} Let us write $F_d=F(F^{\oplus d})$ for the poset of flags in $F^{\oplus d}$ for short. We define a map \[ S_j:\cBT_0 \to F_d \cup \{\emptyset\} \] as follows. We have the map $\phi_{j,0}: \cBT_0 \to X_{\bK,0}$ of 0-simplices of $\phi_j$. Let $\sigma \in \cBT_0$. Then by Section~\ref{sec:locfree}, $\phi_{j,0}(\sigma) \in X_{\bK,0}$ is represented by a chain of locally free $\cO_C$-submodules of rank $d$ of $\eta_*\eta^*\cO_C^{\oplus d}$. Take one of the sheaves, say $E$, in the chain and consider its Harder-Narasimhan filtration \[ 0 \subsetneqq E_0 \subsetneqq E_1 \subsetneqq \dots \subsetneqq E_r=E \subset \eta_*F^{\oplus d}. \] The restriction to the generic fiber $\Spec F \subset C$ gives a flag of $F^{\oplus d}$. We let $S_j(\sigma)$ denote the corresponding element in $F_d\cup \{\emptyset\}$. \subsection{} For sufficiently large $\alpha$, the $0$-simplices of $\cBT^{(\alpha)}_\bullet$ consist of unstable ones only. Thus the map $S_j$ gives $S_j: \cBT_0^{(j, \alpha)} \to F_d$. Let $f \in F_d$ be a flag. We let $Z_{f, \bullet}^{(j, \alpha)} \subset \cBT_\bullet^{(j, \alpha)}$ be the subsimplicial complex consisting of simplices $\sigma$ such that \[ S_j(v) \ge f \] for all vertices $v$ of $\sigma$. \begin{lem} \label{lem:Z contractible} $Z_{f,\bullet}^{(j, \alpha)}$ is contractible.
\end{lem} \begin{proof} (We use the proof of Lemma~\ref{7_contract} almost word-for-word.) We set $X=Z_{f, \bullet}^{(j,\alpha)}$. We proceed by induction on $d$. Let $f$ be the flag \[ 0 \subset V_1 \subset \cdots \subset V_{r-1} \subset F^{\oplus d} \] with $\dim V_j=i_j$ for $1 \le j \le r-1$, so that $\cD=\{i_1, \dots, i_{r-1}\}$ with $i_1<\cdots <i_{r-1}$. Set $d'=d-i_1$ and $\cD'= \{i'-i_1 \,|\, i'\in \cD \setminus \{i_1\}\} \subset \{1,\dots, d'-1\} $. We define $f_1 \in \Flag_{\cD'}$ as the image of the flag $f$ with respect to the projection \[ F^{\oplus d} \to F^{\oplus d}/ V_1. \] We take an isomorphism $F^{\oplus d}/V_1 \cong F^{\oplus (d-i_1)}$ so that $f_1$ corresponds to the standard flag $f_0'$: \[ 0 \subset F^{\oplus (i_2-i_1)} \subset F^{\oplus (i_3-i_1)} \subset \dots \subset F^{\oplus (i_{r-1}-i_1)} \subset F^{\oplus (d-i_1)}. \] Take a representative $g_j \in \GL_d(\A^\infty)$ of $j \in J_\bK$. Consider the map that sends an $\cO_C$-submodule $\cF[g_j, L_\infty] \subset \eta_* F^{\oplus d}$ to the $\cO_C$-submodule $\cF[g_j, L_\infty]/\cF[g_j, L_\infty]_{(i_1)} \subset \eta_* F^{\oplus d'}$. There exists $j' \in J_{\GL_{d-i_1}(\widehat{A})}$ such that the above map gives a morphism $h: X \to X'$, where we set $X'=Z_{f_0', \bullet}^{(j', \alpha)}$ if $\cD'\neq \emptyset$, and $X'=\cBT_{\GL_{d'}, \bullet}$ if $\cD'=\emptyset$. By the inductive hypothesis, $|X'|$ is contractible. Let $\epsilon:\mathrm{Vert}(X) \to \Z$ and $\epsilon':\mathrm{Vert}(X') \to \Z$ denote the maps that send a locally free $\cO_C$-module $\cF$ to the integer $[p_\cF(1)/\deg(\infty)]$. We fix an $\cO_C$-submodule $\cF_0$ of $\eta_*F^{\oplus d}$ whose equivalence class belongs to $X$. By twisting $\cF_0$ by some power of $\cO_C(\infty)$ if necessary, we may assume that $p_{\cF_0}(i_1)-p_{\cF_0}(i_1-1)>\alpha$. We fix a splitting $\cF_0=\cF_{0,(i_1)}\oplus \cF_0'$. This splitting induces an isomorphism $\varphi:\eta_*\eta^*\cF_0' \cong \eta_*F^{\oplus d'}$.
Let $h':X' \to X$ denote the morphism that sends an $\cO_C$-submodule $\cF' \subset \eta_*\eta^*F^{\oplus d'}$ to the $\cO_C$-submodule $\cF_{0,(i_1)}(\epsilon'(\cF')\infty)\oplus \varphi(\cF')$ of $\eta_*F^{\oplus d}$. For each $n \in \Z$, define a morphism $G_n:X\to X$ by sending an $\cO_C$-submodule $\cF$ of $\eta_*\eta^*F^{\oplus d}$ to the $\cO_C$-submodule $\cF_{0,(i_1)}((n+\epsilon(\cF))\infty)+\cF$ of $\eta_*F^{\oplus d}$. Then the argument in~\cite[p. 85--86]{Gra} shows that $u$ and $|h'|\circ|h| \circ u$ are homotopic for any map $u:K \to |X|$ from a compact space $K$ to $|X|$. Since the map $|h'|\circ|h| \circ u$ factors through the contractible space $|X'|$, the map $u$ is null-homotopic. Hence $|X|$ is contractible. \end{proof} \subsection{} We have $(Z_{f, \bullet}^{(j, \alpha)}) \subset (\cBT_\bullet^{(\alpha)})$ for each flag $f \in F_d$ and $j \in J_\bK$. \subsection{} We set \[ U=\{(\beta,\gamma) \in (\cBT_\bullet^{(\alpha)}) \times F_d \,|\, \beta \in (Z_{\gamma, \bullet}^{(j, \alpha)}) \}. \] It is a poset. It is a closed subset of $ (\cBT_\bullet^{(\alpha)}) \times F_d$ (i.e., $u' \leq u \in U$ implies $u' \in U$). We define each of the maps \[ f_1:U \to (\cBT_\bullet^{(\alpha)})\times F_d \to (\cBT_\bullet^{(\alpha)}) \] \[ f_2:U \to (\cBT_\bullet^{(\alpha)})\times F_d \to F_d \] as the inclusion followed by the projection. Set \[ \begin{array}{rl} U_\sigma &= \{f \in F_d \,|\, (\sigma, f) \in U\} \\ &=\{f \in F_d \,|\, S_j(v) \ge f \text{ for all vertices $v$ of $\sigma$} \}. \end{array} \] \begin{lem} For sufficiently large $\alpha$, the poset $U_\sigma$ is contractible. \end{lem} \begin{proof} We actually prove that if $\alpha>3 \deg(\infty)$ then $U_\sigma$ has a maximal element, and hence is contractible. Let us show that $\bigcap_v S_j(v)$ is a maximal element, where the intersection is taken by regarding each $S_j(v)$ as a subset of $Q(F^{\oplus d})$. The maximality is clear. We need to show that $\bigcap_v S_j(v)$ is nonempty.
For a locally free $\cO_C$-submodule $\mathcal{G} \subset \eta_* F^{\oplus d}$ of rank $d$ and a vector subspace $V \subset F^{\oplus d}$, we denote by $\mathcal{G} \cap \eta_* V$ the intersection of $\mathcal{G}$ and $\eta_* V$ in $\eta_* F^{\oplus d}$. Let us choose a vertex $v_0$ of $\sigma$. Let $\cF_0 \subset \eta_* F^{\oplus d}$ be a locally free $\cO_C$-submodule of rank $d$ that represents $v_0$. Since $v_0$ belongs to $\cBT_\bullet^{(j,\alpha)}$, there exists a nonzero vector subspace $W_0 \subsetneqq F^{\oplus d}$ such that $\cF_0 \cap \eta_* W_0$ is a part of the Harder-Narasimhan filtration of $\cF_0$ and the minimal slope of the successive quotients of the Harder-Narasimhan filtration of $\cF_0 \cap \eta_* W_0$ is greater than the sum of $3 \deg(\infty)$ and the maximal slope of the successive quotients of the Harder-Narasimhan filtration of $\cF_0/(\cF_0 \cap \eta_* W_0)$. Let $v$ be a vertex of $\sigma$. Let us choose a locally free $\cO_C$-submodule $\cF_1 \subset \eta_* F^{\oplus d}$ of rank $d$ that represents $v$ in such a way that $\cF_0 \subset \cF_1 \subset \cF_0(\infty)$. Then for any vector subspace $W' \subset F^{\oplus d}$, we have $\cF_0 \cap \eta_* W' \subset \cF_1 \cap \eta_* W' \subset (\cF_0 \cap \eta_* W')(\infty)$. Hence we have $$ 0 \le \deg(\cF_1 \cap \eta_* W') - \deg(\cF_0 \cap \eta_* W') \le (\dim_F W')\deg(\infty). $$ This implies that $\cF_1 \cap \eta_* W_0$ is a part of the Harder-Narasimhan filtration of $\cF_1$. Hence $\bigcap_v S_j(v)$ contains $W_0$. In particular, $\bigcap_v S_j(v)$ is nonempty. This proves the contractibility. \end{proof} This implies that $f_1$ is a homotopy equivalence by Lemma~\ref{lem:Quillen poset}. \subsection{} Set \[ \begin{array}{rl} U_f &= \{ \sigma \in (\cBT_\bullet^{(\alpha)}) \,|\, (\sigma, f) \in U \} \\ &=\{ \sigma \in (\cBT_\bullet^{(\alpha)}) \,|\, S_j(v) \ge f \text{ for all vertices $v$ of $\sigma$} \}. \end{array} \] This is $(Z_{f, \bullet}^{(j, \alpha)})$, which is contractible.
Hence, by Lemma~\ref{lem:Quillen poset}, $f_2$ is a homotopy equivalence. \section{The right column} \subsection{} Now take an ordered basis $q_1, \dots, q_d$ of $F^{\oplus d}$. We define $h_3: \cP_\tot(\cP'(B)) \to F(F^{\oplus d})$ to be the map induced by the morphism of posets $\phi_{q_1, \dots, q_d}: \cP'(B) \to Q(F^{\oplus d})$ that appeared in Section~\ref{sec:univ ms}. \subsection{} We have a map (from Section~\ref{sec:521}) \[ \varphi_{q_1,\dots, q_d}: A(B)_\bullet =A_\bullet \to \cBT_\bullet. \] We identify $A(B)_\bullet$ and $A_\bullet$ with subcomplexes of $\cBT_\bullet$ via this map. We set \[ A_\bullet^{(\alpha)} =\cBT_\bullet^{(\alpha)} \cap A_\bullet \] and define \[ h_1: (A_\bullet^{(\alpha)}) \to (\cBT_\bullet^{(\alpha)}) \] to be the morphism of posets induced by the inclusion $A_\bullet^{(\alpha)} \subset \cBT_\bullet^{(\alpha)}$ of simplicial complexes. \subsection{} Note that the image of the restriction map \[ S|_{A_0^{(\alpha)}}: A_0^{(\alpha)} \to F_d \] is contained in $\cP'(B)$, which is regarded as a subset of $F_d$ via $h_3$. \subsection{} We set \[ T= \{ (\beta, \gamma) \in (A_\bullet^{(\alpha)}) \times \cP'(B) \,|\, S(v) \ge \gamma \text{ for all vertices $v$ of $\beta$} \}. \] It is a closed subset of $(A_\bullet^{(\alpha)}) \times \cP'(B)$. \subsection{} We define each of the maps \[ g_1: T \subset (A_\bullet^{(\alpha)}) \times \cP'(B) \to (A_\bullet^{(\alpha)}) \] \[ g_2: T \subset (A_\bullet^{(\alpha)}) \times \cP'(B) \to \cP'(B) \] as the inclusion followed by the projection. \subsection{} Let \[ \begin{array}{rl} T_f &= \{ \sigma \in (A_\bullet^{(\alpha)}) \,|\, (\sigma,f) \in T \} \\ &= \{ \sigma \in (A_\bullet^{(\alpha)}) \,|\, S(v) \ge f \text{ for all vertices $v$ of $\sigma$} \}. \end{array} \] \begin{lem} $T_f$ is contractible. \end{lem} \begin{proof} We use the setup in the proof of Lemma~\ref{lem:Z contractible}. We proceed by induction on $d$. We set $Y=A_\bullet^{(\alpha)} \cap X$, so that $T_f$ is the poset of simplices of $Y$; it suffices to show that $|Y|$ is contractible. Take $d',i_1, \cD'$ as before.
We set $Y'=A_\bullet^{(\alpha)} \cap X'$ if $\cD' \neq \emptyset$, and $Y'=A_\bullet \cap X'$ if $\cD'=\emptyset$. Here the inclusions are those corresponding to the ordered basis $q_{i_1+1},\dots, q_d$ of $F^{\oplus d}/(F^{\oplus i_1}\oplus \{0\}^{\oplus (d-i_1)})$. By the inductive hypothesis, $|Y'|$ is contractible. Notice that the restriction of the map $h:X \to X'$ to $Y$ has its image inside $Y'$. We can also check that the image of the restriction of $h'$ to $Y'$ is in $Y$. Then the same argument as in the proof of Lemma~\ref{lem:Z contractible} proves that $|Y|$ is contractible. \end{proof} This implies that $g_2$ is a homotopy equivalence. \subsection{} Let \[ \begin{array}{rl} T_\sigma &=\{f \in \cP'(B) \,|\, (\sigma, f) \in T\} \\ &= \{ f \in \cP'(B) \,|\, S(v) \ge f \text{ for all vertices $v$ of $\sigma$} \}. \end{array} \] This is contractible because it has a maximal element. This implies that $g_1$ is a homotopy equivalence. \section{Proof of Proposition \ref{PROP:COINCIDENCE MS}} The map (5) in the diagram is defined to be the isomorphism of homology groups induced by the zigzag $(f_1, f_2)$, namely $(f_2)_* \circ (f_1)_*^{-1}$. The map (8) is defined to be the isomorphism of homology groups induced by $(g_2)_* \circ (g_1)_*^{-1}$. By the commutativity of the diagram~\eqref{key diagram}, we have the commutativity of the bottom square of the diagram~\eqref{main diagram}. This proves Proposition~\ref{PROP:COINCIDENCE MS}. \chapter{Universal Modular Symbols} We recall here the definition of the universal modular symbols following Ash and Rudolph \cite[\S 2]{AR}. Let $F$ be a field and $d \ge 1$. Let $q_1, \dots, q_d$ be an ordered basis (a basis with the order fixed) of $F^{\oplus d}$. The universal modular symbol $[q_1, \dots, q_d]$ associated with the ordered basis is then a $(d-2)$-nd homology class of the Tits building $T_{F^{\oplus d}}$ of the algebraic group $\mathrm{SL}_d$ over $F$. The treatment here may look slightly different from the paper \cite{AR} because we put our emphasis on posets.
By definition, the Tits building $T_{F^{\oplus d}}$ is (the classifying space of) the poset of nonzero proper $F$-subspaces of $F^{\oplus d}$. Its barycentric subdivision is then the classifying space of the poset of flags in $F^{\oplus d}$. On the other hand, the boundary of the barycentric subdivision of the standard $(d-1)$-simplex $\Delta_{d-1}$ is the classifying space of the poset of subsets of $\{1,\dots, d\}$, excluding $\{1,\dots, d\}$ and $\emptyset$. A choice of an ordered basis gives a morphism from this poset to the poset of subspaces. This in turn induces a map on homology, and the image of the fundamental class of the homology of the boundary of the standard simplex is defined to be the universal modular symbol corresponding to the ordered basis. We mention some results of \cite{AR} in Section~\ref{sec:AR results}. Of course, our main aim is to consider the analogue of their Proposition 3.2; this will be covered in Chapter~\ref{ch:pf for ums}. We use the fact that the universal modular symbols generate $H_{d-2}(T_{F^{\oplus d}})$. Their main result (see Theorem~\ref{thm:AR main}) is applicable when our base ring $A$ is a Euclidean domain. \section{the Tits building for the special linear groups} \label{sec:Tits building} Let us recall the definition of the Tits building. \subsection{} Let $F$ be a field. (We will take $F$ to be our ground global field of positive characteristic in our application.) Let $d \ge 1$ be a positive integer, and let $W$ be a $d$-dimensional vector space over $F$. We denote by $Q(W)$ the poset of nonzero proper vector subspaces $0 \subsetneqq W_0 \subsetneqq W$. Let $\cP_\tot(Q(W))$ denote the poset of nonempty totally ordered finite subsets of $Q(W)$. By definition, the Tits building $T_W$ is the simplicial complex $(Q(W), \cP_\tot(Q(W)))$ associated with the poset $Q(W)$. In the terminology of \cite[\S 1]{Quillen}, $T_W$ is the classifying space of $Q(W)$. \subsection{} We use the poset of flags in $W$, which gives rise to the barycentric subdivision of $T_W$.
A flag in $W$ is a sequence, for some $1 \le i \le d-1$, \[ 0=W_0 \subsetneq W_1 \subsetneq W_2 \subsetneq \dots \subsetneq W_i \subsetneq W \] of $F$-vector subspaces. Let $F(W)$ denote the set of flags in $W$. For two flags $F_1, F_2$, we set $F_1 \le F_2$ if $F_2$ is a refinement of $F_1$. With this order, $F(W)$ is a poset. We have a canonical identification $F(W)=\cP_\tot(Q(W))$ of posets. We denote by $T'_W$ the simplicial complex $(F(W), \cP_\tot(F(W)))$, where $\cP_\tot(F(W))$ is the set of nonempty totally ordered finite subsets of $F(W)$. Then $T'_W$ is the barycentric subdivision of $T_W$. (See the proof of Lemma 1.9 in \cite{Gra}.) \subsection{} Let us take an isomorphism $W\cong F^{\oplus d}$ and regard $W$ as the set of column vectors. Then $\GL_d(F)$ acts on $W$ by multiplication from the left, and hence on the poset $Q(W)$, on $T_W$ and on $T'_W$. \subsection{} As mentioned in \cite[p.165, 2.2.3]{FKS}, $F(W)$ is isomorphic as a poset to the set of parabolic subgroups (ordered by inclusion) of any of $GL_d$, $PGL_d$, and $SL_d$ over $F$. This gives the description of $T_W$ as the Tits building of the semisimple algebraic group $SL_d$ over $F$. (See \cite[p.242, Section 2]{AR}.) \section{the boundary of the first barycentric subdivision of a standard simplex} \label{sec:barycentric subdivision} We now give the simplicial complex which describes the boundary of the first barycentric subdivision of the standard $(d-1)$-simplex. This description is essentially the same as that in \cite[p.243]{AR}. \subsection{} Let $B$ be a nonempty finite set. Let $\cP'(B)$ denote the set of subsets $J \subset B$ satisfying $J \neq \emptyset$ and $J \neq B$. We regard $\cP'(B)$ as a partially ordered set with respect to inclusion. Let $\cP_\tot(\cP'(B))$ denote the set of nonempty totally ordered subsets of $\cP'(B)$. Then the pair $(\cP'(B), \cP_\tot(\cP'(B)))$ forms a finite simplicial complex. It is the classifying space of the poset $\cP'(B)$.
Let $d=|B|$ denote the cardinality of $B$. Then this simplicial complex is isomorphic to the boundary of the first barycentric subdivision of the standard $(d-1)$-simplex. The set of vertices of this standard $(d-1)$-simplex $\Delta$ is the set $B$. \subsection{} The following is a picture of the boundary of the first barycentric subdivision of the standard $2$-simplex: \noindent \begin{tikzpicture} \path (90:3cm) coordinate (a) node[above] {$\{\{1\}\}$} (210:3cm) coordinate (b) node[below left] {$\{\{2\}\}$} (-30:3cm) coordinate (c) node[below right] {$\{\{3\}\}$}; \draw [ultra thick] (a) -- (b) -- (c) --cycle; \draw [ultra thick, x=(a),y=(b),z=(c)] (1,0,0) -- (0,.5,.5); \draw [ultra thick, x=(a),y=(b),z=(c)] (0,1,0) -- (.5,0,.5); \draw [ultra thick, x=(a),y=(b),z=(c)] (0,0,1) -- (.5,.5,0); \filldraw[black, x=(a),y=(b),z=(c)] (0,.5,.5) circle (2pt) node[anchor=north] {$\{\{2,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.5,0,.5) circle (2pt) node[anchor=west] {$\{\{1,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.5,.5,0) circle (2pt) node[anchor=east] {$\{\{1,2\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.75,0,.25) node[anchor=west] {$\{\{1\},\{1,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.25,0,.75) node[anchor=west] {$\{\{3\},\{1,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (0,.8,.2) node[anchor=north] {$\{\{2\},\{2,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (0,.2,.8) node[anchor=north] {$\{\{3\},\{2,3\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.75,.25,0) node[anchor=east] {$\{\{1\},\{1,2\}\}$}; \filldraw[black, x=(a),y=(b),z=(c)] (.25,.75,0) node[anchor=east] {$\{\{2\},\{1,2\}\}$}; \end{tikzpicture} \subsection{an orientation} If $B$ is totally ordered, then the order gives an orientation of $\Delta$. This orientation in turn gives an orientation of the barycentric subdivision, and of the boundary of the barycentric subdivision. Let us describe this explicitly. The number of $(d-2)$-dimensional simplices in $(\cP'(B), \cP_\tot(\cP'(B)))$ is equal to $d!$.
They are the simplices $\sigma_g$ whose vertices are \[ \{ g(1)\}, \{g(1), g(2)\}, \dots, \{g(1), g(2), \dots, g(d-1)\} \] for each $g$ in the symmetric group $S_d$ on $d$ letters. The orientation of $\sigma_g$ that we use below (which is the one determined by the orientation of $\Delta$) is as follows. If $g$ is an even permutation, then the orientation is given by the increasing order $\{ g(1)\}, \{g(1), g(2)\}, \dots, \{g(1), g(2), \dots, g(d-1) \}$. If $g$ is odd, then the orientation is the opposite of the one above. This choice of orientation (or of the total order) gives an element (the fundamental class) in \[H_{d-2}((\cP'(B), \cP_\tot(\cP'(B))), \Z).\] \section{the universal modular symbols} \label{sec:univ ms} We define the universal modular symbols of Ash-Rudolph (\cite[Definition 2.1]{AR}). \subsection{} Let $d \ge 1$ be an integer. Let $B=\{1,\dots, d\}$ be a totally ordered finite set of cardinality $d$. Let $F$ be a field and $W$ be a $d$-dimensional $F$-vector space. Let $q_1, \dots, q_d \in W$ be a basis of $W$. We define a morphism of posets \[ \phi=\phi_{q_1, \dots, q_d}: \cP'(B) \to Q(W) \] as follows. Let $1 \le i \le d-1$ and let $J=\{j_1,\dots, j_i\} \subset B$ be an element of $\cP'(B)$. Then we set \[ \phi(J)=\langle q_{j_1}, \dots, q_{j_i} \rangle \subsetneqq W \] to be the vector subspace spanned by $\{q_{j_1}, \dots, q_{j_i}\}$. This morphism of posets induces a morphism of classifying spaces which we also denote by $\phi=\phi_{q_1,\dots, q_d}$: \[ \phi=\phi_{q_1, \dots, q_d}: (\cP'(B), \cP_\tot(\cP'(B))) \to T_W= (Q(W), \cP_\tot(Q(W))). \] \subsection{} We write $[q_1,\dots, q_d]$ for the image of the fundamental class in \[ H_{d-2}((F(W), \cP_\tot(F(W))), \Z) \] under the pushforward by $\phi_{q_1, \dots, q_d}$. For convenience, we set $[q_1, \dots, q_d]=0$ if $q_1, \dots, q_d$ do not form a basis. These elements are called universal modular symbols.
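As a sanity check on the orientation convention, the following sketch (our own illustration; it is not part of the argument) verifies for $d=3$ that the signed sum $\sum_g \mathrm{sgn}(g)\,\sigma_g$ has vanishing boundary, so that it indeed defines a cycle representing the fundamental class.

```python
from itertools import permutations
from collections import Counter

def sign(p):
    """Signature of a permutation given as a tuple, via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

boundary = Counter()
simplices = []
for g in permutations((1, 2, 3)):
    v0 = frozenset({g[0]})          # vertex {g(1)}
    v1 = frozenset({g[0], g[1]})    # vertex {g(1), g(2)}
    simplices.append((v0, v1))
    eps = sign(g)                   # odd g gets the opposite orientation
    boundary[v1] += eps             # boundary of the oriented edge v0 -> v1
    boundary[v0] -= eps

assert len(simplices) == 6                      # d! = 3! top simplices
assert all(c == 0 for c in boundary.values())   # the signed sum is a cycle
```

Each vertex of the hexagon receives two contributions of opposite sign, so the boundary cancels.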
When $W=F^{\oplus d}$, for $Q \in \GL_d(F)$, we write $[Q]$ for the universal modular symbol $[q_1,\dots, q_d]$ where $q_i$ is the $i$-th column of $Q$ for $1 \le i \le d$. \section{Some results of Ash and Rudolph} \label{sec:AR results} We record here some results of Ash and Rudolph on universal modular symbols which may or may not hold in our case. In the case of $\GL_2$, such statements were first given by Manin \cite{Manin1}. \subsection{} We have a description of a $\Z$-basis of the homology. \begin{prop} [Prop 2.3, p.244, \cite{AR}] \label{prop:MS Z-basis} Let $U$ denote the subgroup of $\GL(W)$ consisting of unipotent upper triangular matrices. Then the symbols $[Q]$, as $Q$ runs through $U$, form a $\Z$-basis of $H_{d-2}(T_W, \Z)$. \end{prop} \begin{proof} See loc.\ cit. \end{proof} \subsection{} The following proposition gives some basic properties of the universal modular symbols. \begin{prop}[{\cite[Prop 2.2, p.243]{AR}}] The universal modular symbols enjoy the following properties: \noindent 1. The symbol $[q_1, \dots, q_d]$ is anti-symmetric in its arguments. \\ 2. $[aq_1, q_2, \dots, q_d]=[q_1, \dots, q_d]$ for any nonzero $a \in F$. \\ 3. If $q_1, \dots, q_{d+1}$ are all nonzero, then \[ \sum_{i=1}^{d+1} (-1)^{i+1} [q_1, \dots, \widehat{q}_i, \dots, q_{d+1}]=0. \] \\ 4. If $A \in \GL_d(F)$, then $[AQ]=A \cdot [Q]$, where the dot denotes the action of $\GL_F(W)$ on the homology of $T_W$. \end{prop} \begin{proof} See {\cite[Prop 2.2, p.243]{AR}}. \end{proof} \subsection{} We mention their main theorem. Here $T_d$ is the Tits building for subspaces in $L^d$ with $L$ the field of fractions of a Euclidean domain $\Lambda$. \begin{thm}[{\cite[Thm 4.1, p.247]{AR}}] \label{thm:AR main} As $Q$ runs over $\mathrm{SL}(d, \Lambda)$, the universal modular symbols $[Q]$ generate $H_{d-2}(T_d; \Z)$. \end{thm} \begin{proof} See \cite[Thm 4.1]{AR}. \end{proof} Ash and Rudolph use this theorem for $\Lambda=\Z$. For our setup, the ring $A$ is the analogue of $\Z$, but it is in general not a Euclidean domain.
We do not know to what extent this type of statement holds true. \chapter{On finite $p$-subgroups of arithmetic subgroups} \section{Introduction} The results of this chapter will be used in Section~\ref{sec:d argument}. \subsection{} We are interested in the group homology of the stabilizer groups of the action of an arithmetic subgroup on the Bruhat-Tits building $\cBT_\bullet$. These homology groups appear as the $E_1$-terms of the spectral sequence in Section~\ref{sec:2nd ss}. In order to estimate the exponent, we do not study the differentials of the spectral sequence directly; instead we estimate the exponent of the $E_1$-terms, using the fact that the group homology, in positive degrees, of a finite group is killed by the order of the group. In this chapter, we treat the $p$-part. \subsection{} Let $\Gamma$ be an arithmetic subgroup acting on $\cBT_\bullet$. Suppose that $\Gamma$ is a pro-$p$ group, where $p$ is the characteristic of $F$. Let $\sigma$ be a simplex of $\cBT_\bullet$ and let $\Gamma_\sigma$ be the stabilizer of $\sigma$. Then $\Gamma_\sigma$ is a finite $p$-subgroup of $\GL_d(F)$. We will bound the exponent of the homology groups of $\Gamma_\sigma$. We do not use the fact that it is a stabilizer group, but merely the fact that it is a $p$-subgroup of $\GL_d(F)$. In Lemma~\ref{lem:existence of flag} below, it is shown that there exists a flag in $F^{\oplus d}$ which is stabilized by $\Gamma_\sigma$. This in turn gives a filtration of the stabilizer group by normal subgroups whose successive quotients are either trivial or elementary abelian $p$-groups (Corollary~\ref{cor:length d-1}). Since the group homology, in positive degrees, of an elementary abelian $p$-group is killed by $p$, we obtain a bound on the exponent of the homology groups in terms of the length of the filtration. \section{Lemmas} \begin{lem} \label{lem:p-group} Let $H$ be a finite $p$-group.
Suppose that there exists a decreasing filtration $$ H=H^0 \supset H^1 \supset \cdots \supset H^{\ell-1} \supset H^{\ell} = \{1\} $$ of $H$ by normal subgroups of $H$ such that $H^{i-1}/H^i$ is an elementary abelian $p$-group for $i=1,\ldots, \ell$. Let $R$ be a principal ideal domain and let $\chi: H \to R^\times$ be a character of $H$. Then for any integer $s \ge 1$, the $s$-th homology group $H_s(H,\chi)$ is an abelian group killed by $p^{1+s(\ell-1)}$. \end{lem} \begin{proof} First we prove the claim for $\ell=1$. In this case $H$ is an elementary abelian $p$-group. Since the claim follows from the K\"unneth theorem if $\chi$ is trivial, we may assume that $\chi$ is non-trivial. Let $H'$ denote the kernel of $\chi$. Since $H$ is an elementary abelian $p$-group, there exists a cyclic subgroup $H'' \subset H$ of order $p$ such that the map $H' \times H'' \to H$ given by the multiplication in $H$ is an isomorphism of groups. Via this isomorphism, the character $\chi$ is regarded as the external tensor product over $R$ of the trivial character of $H'$ and the restriction $\chi|_{H''}$ of $\chi$ to $H''$. Hence it follows from the splitting of the short exact sequence in the K\"unneth theorem that $H_s(H,\chi)$ is killed by $p$. We prove the claim for $\ell>1$ by induction on $\ell$. Let us consider the Hochschild-Serre spectral sequence (cf. \cite[p.171, VII, (6.3)]{Brown}) $$ E^2_{s,t} = H_s(H/H^{\ell-1},H_t(H^{\ell-1},\chi)) \Longrightarrow H_{s+t}(H,\chi). $$ The case $\ell=1$ applied to the elementary abelian group $H^{\ell-1}$ shows that $H_t(H^{\ell-1},\chi)$, and hence $E^2_{s,t}$, is killed by $p$ for $t \ge 1$. Since the claim is known for $H/H^{\ell-1}$ by the inductive hypothesis, $E^2_{s,0}$ is killed by $p^{1+s(\ell-2)}$. Hence, for any $r \ge 2$, the term $E^r_{s,0}$ is killed by $p^{1+s(\ell-2)}$ and the terms $E^r_{s-i,i}$ are killed by $p$ for $i=1,\ldots,s$. Thus the spectral sequence above shows that $H_s(H,\chi)$ is killed by $p^{1+s(\ell-2)} \cdot \prod_{i=1}^s p = p^{1+s(\ell-1)}$.
\end{proof} \begin{lem} \label{lem:existence of flag} Let $V$ be a non-zero, finite dimensional $F$-vector space, and $H$ a finite $p$-group which acts $F$-linearly on $V$ from the right. Then there exists a flag $0 = V_0 \subsetneqq V_1 \subsetneqq \cdots \subsetneqq V_\ell = V$ in $V$ by $F$-linear subspaces such that $V_i \cdot H = V_i$ and $H$ acts trivially on $V_i/V_{i-1}$ for $i=1,\ldots,\ell$. \end{lem} \begin{proof} First we prove that the $H$-invariant part $V^H$ is non-zero by induction on the order $p^m$ of $H$. If $m \le 1$, then $H$ is generated by an element $h \in H$. Let $\rho(h) \in \GL_F(V)$ denote the action of $h$ on $V$. Since $h^p =1$ and $F$ has characteristic $p$, we have $(\rho(h)-\id_V)^p=\rho(h)^p-\id_V=0$. This implies that $1$ is an eigenvalue of $\rho(h)$. Hence $V$ has a non-zero $H$-invariant vector. Suppose that $m \ge 2$. Since any nontrivial finite $p$-group has a non-trivial center, there exists an element $h$ of order $p$ in the center of $H$. Let $W \subset V$ denote the subspace of $h$-invariant vectors. Then $W \neq 0$ and $H/\langle h \rangle$ acts on $W$. Hence by the inductive hypothesis, $V^H = W^{H/\langle h \rangle}$ is non-zero. Next we prove the claim by induction on $d = \dim_F V$. The claim is clear when $d=1$ since $V^H \neq \{0\}$ implies that $H$ acts trivially on $V$. Suppose that $d \ge 2$. Set $W = V/V^H$. If $W=\{0\}$, then $H$ acts trivially on $V$ and the claim is clear. Suppose otherwise. Then the action of $H$ on $V$ induces a right action of $H$ on $W$. By the inductive hypothesis, there exists a flag $0 = W_0 \subsetneqq W_1 \subsetneqq \cdots \subsetneqq W_{\ell'} = W$ in $W$ by $F$-linear subspaces such that $W_i \cdot H = W_i$ and $H$ acts trivially on $W_i/W_{i-1}$ for $i=1,\ldots,\ell'$. Then the preimages $V_i$ of $W_{i-1}$ under the quotient map $V \surj W$ for $i=1,\ldots,\ell=\ell'+1$ give the desired flag in $V$. \end{proof} \begin{cor} \label{cor:length d-1} Let $H \subset \GL_d(F)$ be a finite subgroup that is a finite $p$-group.
Then there exists an integer $\ell \le d$ and a decreasing filtration $$ H=H^0 \supset H^1 \supset \cdots \supset H^{\ell-2} \supset H^{\ell-1} = \{1\} $$ of $H$ by normal subgroups of $H$ such that $H^{i-1}/H^i$ is either trivial or an elementary abelian $p$-group for $i=1,\ldots, \ell-1$. \end{cor} \begin{proof} It follows from Lemma \ref{lem:existence of flag} that there exists a flag $0 = V_0 \subsetneqq V_1 \subsetneqq \cdots \subsetneqq V_\ell = F^d$ in $F^d$ by $F$-linear subspaces such that $V_i \cdot H = V_i$ and $H$ acts trivially on $V_i/V_{i-1}$ for $i=1,\ldots,\ell$. Since $F^d$ is $d$-dimensional, we have $\ell \le d$. For $i=0,\ldots,\ell-1$, set $H^i = \Ker(H \to \GL_F(V_{i+1})) \subset H$. Then $H^i$ is a normal subgroup of $H$ and we have $H=H^0 \supset H^1 \supset \cdots \supset H^{\ell-1}= \{1\}$. Let $0 \le i \le \ell-2$. Then for $h \in H^i$, the $F$-linear map $V_{i+2} \to V_{i+1}$ that sends $v \in V_{i+2}$ to $vh -v$ induces an $F$-linear map $\varphi_h : V_{i+2}/V_{i+1} \to V_{i+1}$. Let $\varphi : H^i \to \Hom_F(V_{i+2}/V_{i+1},V_{i+1})$ denote the map that sends $h \in H^i$ to $\varphi_h$. Then one checks easily that the map $\varphi$ is a homomorphism of groups and that $H^{i+1}$ is equal to the kernel of $\varphi$. This shows that $H^i/H^{i+1}$ is an elementary abelian $p$-group since it is isomorphic to a subgroup of $\Hom_F(V_{i+2}/V_{i+1},V_{i+1})$. Hence the groups $H^0, \ldots, H^{\ell-1}$ have the desired property. \end{proof} \begin{cor} \label{cor:finite p-group homology} Let $H \subset \GL_d(F)$ be a finite subgroup that is a finite $p$-group. Let $\chi:H \to \Z^\times$ be a character. Then the homology group $H_s(H, \chi)$ is killed by $p^{1+s(d-2)}$ for every $s \ge 1$. \end{cor} \begin{proof} This follows from Lemma~\ref{lem:p-group} and Corollary~\ref{cor:length d-1}.
\end{proof} \chapter{Simplicial complexes and their (co)homology} \label{sec:2} The material of this chapter (except for the remark in Section~\ref{sec:cellular}) already appeared in Sections 3 and 5 of \cite{KY:Zeta elements}. We collect it here for the convenience of the reader. We define generalized simplicial complexes in Section~\ref{sec:simplicial complex}, define their four (co)homology theories (homology, cohomology, Borel-Moore homology, cohomology with compact support), and mention the universal coefficient theorem and the geometric realization. Later, we will consider quotients of the Bruhat-Tits building by arithmetic subgroups (to be defined). While the Bruhat-Tits building is canonically a simplicial complex, the arithmetic quotient is in general not a simplicial complex. This issue was addressed by Prasad in a paper of Harder \cite[p.140, Bemerkung]{Harder2}. We introduce this notion because the quotients are naturally (generalized) simplicial complexes. An example of a generalized simplicial complex which is not a strict simplicial complex consists of two vertices with two edges between the vertices. When defining the four (co)homology theories of simplicial complexes, one usually fixes an orientation of each simplex, then constructs suitable complexes of abelian groups and computes their (co)homology groups. We give a slightly different complex, in which we do not fix a choice of orientation of each simplex. This is because the Bruhat-Tits building is not canonically oriented (even though it may look as if it were). We end this chapter with a remark in Section~\ref{sec:cellular}. \section{Generalized simplicial complexes} \label{sec:simplicial complex} \subsection{} Let us recall the notion of (abstract) simplicial complex.
\begin{definition} A (strict) simplicial complex is a pair $(Y_0,\Delta)$ of a set $Y_0$ and a set $\Delta$ of finite subsets of $Y_0$ which satisfies the following conditions: \begin{itemize} \item If $S \in \Delta$ and $T\subset S$, then $T \in \Delta$. \item If $v \in Y_0$, then $\{v \} \in \Delta$. \end{itemize} \end{definition} In this paper we call a simplicial complex in the sense above a strict simplicial complex, and use the terminology ``simplicial complex'' in a slightly broader sense as defined below. Note that in a strict (abstract) simplicial complex each simplex is uniquely determined by the set of its vertices. \subsection{} The definition of (generalized) simplicial complex is as follows. For a set $S$, let $\cP^\fin(S)$ denote the category whose objects are the non-empty finite subsets of $S$ and whose morphisms are the inclusions. \begin{definition} A simplicial complex is a pair $(Y_0,F)$ of a set $Y_0$ and a presheaf $F$ of sets on $\cP^\fin(Y_0)$ such that $F(\{\sigma \}) = \{\sigma\}$ holds for every $\sigma \in Y_0$. \end{definition} \subsection{} The definition above is rather abstract to work with in practice. We now give a working definition which is equivalent to it.
\begin{definition} A (generalized) simplicial complex is a collection $Y_\bullet = (Y_i)_{i \ge 0}$ of sets indexed by non-negative integers, equipped with the following additional data: \begin{itemize} \item a subset $V(\sigma) \subset Y_0$ with cardinality $i+1$, for each $i \ge 0$ and for each $\sigma \in Y_i$ (we call $V(\sigma)$ the set of vertices of $\sigma$), and \item an element in $Y_j$, for each $i \ge j \ge 0$, for each $\sigma \in Y_i$, and for each subset $V' \subset V(\sigma)$ with cardinality $j+1$ (we denote this element in $Y_j$ by the symbol $\sigma \times_{V(\sigma)} V'$ and call it the face of $\sigma$ corresponding to $V'$) \end{itemize} which satisfy the following conditions: \begin{itemize} \item For each $\sigma \in Y_0$, the equality $V(\sigma) = \{\sigma\}$ holds, \item For each $i \ge 0$, for each $\sigma \in Y_i$, and for each non-empty subset $V' \subset V(\sigma)$, the equality $V(\sigma \times_{V(\sigma)} V') = V'$ holds, \item For each $i \ge 0$ and for each $\sigma \in Y_i$, the equality $\sigma \times_{V(\sigma)} V(\sigma) = \sigma$ holds, and \item For each $i \ge 0$, for each $\sigma \in Y_i$, and for all non-empty subsets $V', V'' \subset V(\sigma)$ with $V'' \subset V'$, the equality $(\sigma \times_{V(\sigma)} V')\times_{V'} V'' = \sigma \times_{V(\sigma)} V''$ holds. \end{itemize} For $j$ and $V'$ as above, we call the element $\sigma\times_{V(\sigma)} V'$ the $j$-dimensional face of $\sigma$ corresponding to $V'$. We remark that the symbol $\times_{V(\sigma)}$ does not denote a fiber product in any way. \end{definition} \subsection{} The equivalence of the two definitions is explicitly described as follows. Suppose we are given a simplicial complex $Y_\bullet$. Then the corresponding $F$ is the presheaf which associates, to a non-empty finite subset $V \subset Y_0$ with cardinality $i+1$, the set of elements $\sigma \in Y_i$ satisfying $V(\sigma)=V$.
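To illustrate the working definition, here is a small sketch (our own illustration; the names `V` and `face` are hypothetical) encoding the example of two vertices joined by two edges, and checking the axioms of the definition mechanically.

```python
# The "two vertices, two edges" example: Y_0 = {a, b}, Y_1 = {e1, e2},
# both edges having the same vertex set {a, b}, so this is not strict.
Y0 = {"a", "b"}
Y1 = {"e1", "e2"}
V = {"a": frozenset({"a"}), "b": frozenset({"b"}),
     "e1": frozenset({"a", "b"}), "e2": frozenset({"a", "b"})}

def face(sigma, Vp):
    """sigma x_{V(sigma)} V': the face of sigma with vertex set V'."""
    Vp = frozenset(Vp)
    assert Vp and Vp <= V[sigma]
    if Vp == V[sigma]:
        return sigma           # sigma x_{V(sigma)} V(sigma) = sigma
    (v,) = Vp                  # the proper faces here are single vertices
    return v

# The axioms of the definition, checked on this example:
assert all(V[v] == frozenset({v}) for v in Y0)
for s in Y1:
    for Vp in [{"a"}, {"b"}, {"a", "b"}]:
        assert V[face(s, Vp)] == frozenset(Vp)     # V(sigma x V') = V'
    assert face(s, V[s]) == s
    for Vpp in [{"a"}, {"b"}]:                     # transitivity of faces
        assert face(face(s, {"a", "b"}), Vpp) == face(s, Vpp)
```

The two edges are distinct elements of $Y_1$ with the same vertex set, which is exactly what a strict simplicial complex forbids.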
This alternative definition of a simplicial complex is much closer to the definition of a strict simplicial complex. \subsection{} Any strict simplicial complex gives a simplicial complex in the sense above in the following way. Let $(Y_0,\Delta)$ be a strict simplicial complex. We identify $Y_0$ with the set of subsets of $Y_0$ with cardinality $1$. For $i \ge 1$ let $Y_i$ denote the set of the elements in $\Delta$ which have cardinality $i+1$ as subsets of $Y_0$. For $i \ge 1$ and for $\sigma \in Y_i$, we set $V(\sigma)= \sigma$ regarded as a subset of $Y_0$. For a non-empty subset $V \subset V(\sigma)$ of cardinality $i'+1$, we set $\sigma \times_{V(\sigma)} V = V$ regarded as an element of $Y_{i'}$. Then it is easily checked that the collection $Y_\bullet = (Y_i)_{i \ge 0}$ together with the assignments $\sigma \mapsto V(\sigma)$ and $(\sigma, V) \mapsto \sigma \times_{V(\sigma)} V$ forms a simplicial complex. \subsection{} An example of a (generalized) simplicial complex which is not a strict simplicial complex consists of two vertices with two edges between the two vertices. \subsection{} The Bruhat-Tits building (for PGL as introduced in Definition~\ref{def:BT}) is a strict simplicial complex. However, its arithmetic quotients are in general only (generalized) simplicial complexes. \subsection{} Let $(Y_0, F)$ and $(Z_0, G)$ be simplicial complexes. \begin{definition} A morphism $(Y_0, F) \to (Z_0, G)$ is a pair of a map of sets $f_0: Y_0 \to Z_0$ and a morphism of presheaves $F \to G \circ \cP^\fin(f_0)$ on $\cP^\fin(Y_0)$. Here $\cP^\fin(f_0): \cP^\fin(Y_0) \to \cP^\fin(Z_0)$ is the functor induced by $f_0$. \end{definition} \subsection{} The definition above, rephrased in terms of the working definition, is as follows. Let $Y_\bullet$ and $Z_\bullet$ be simplicial complexes.
We define a map (morphism) from $Y_\bullet$ to $Z_\bullet$ to be a collection $f=(f_i)_{i \ge 0}$ of maps $f_i : Y_i \to Z_i$ of sets which satisfies the following conditions: \begin{itemize} \item for any $i \ge 0 $ and for any $\sigma \in Y_i$, the restriction of $f_0$ to $V(\sigma)$ is injective and the image of $f_0|_{V(\sigma)}$ is equal to the set $V(f_i(\sigma))$, and \item for any $i \ge j \ge 0$, for any $\sigma \in Y_i$, and for any non-empty subset $V' \subset V(\sigma)$ with cardinality $j+1$, we have $f_j(\sigma \times_{V(\sigma)} V') = f_i(\sigma) \times_{V(f_i(\sigma))} f_0(V')$. \end{itemize} \section{Orientation} Usually the (co)homology groups of $Y_\bullet$ are defined to be the (co)homology groups of a complex $C_\bullet$ whose component in degree $i$ is the free abelian group generated by the $i$-simplices of $Y_\bullet$. For a precise definition of the boundary homomorphism of the complex $C_\bullet$, one needs to choose an orientation of each simplex. In this paper we adopt an alternative, equivalent definition of homology groups which does not require any choice of orientations. The latter definition may seem a little complicated at first glance; however, it turns out to be better suited for describing the (co)homology of the arithmetic quotients of the Bruhat-Tits building, which seem to admit no good canonical choice of orientations. \subsection{} \label{sec:orientation} We introduce the notion of an orientation of a simplex. It will be a $\{\pm 1\}$-torsor (where $\{\pm 1\}$ is the abelian group $\Z/2\Z$) associated with each simplex. Let $Y_\bullet$ be a simplicial complex and let $i \ge 0$ be a non-negative integer. For an $i$-simplex $\sigma \in Y_i$, we let $T(\sigma)$ denote the set of all bijections from the finite set $\{1,\ldots, i+1 \}$ of cardinality $i+1$ to the set $V(\sigma)$ of vertices of $\sigma$. The symmetric group $S_{i+1}$ acts on the set $\{1,\ldots, i+1 \}$ from the left and hence on the set $T(\sigma)$ from the right.
Through this action the set $T(\sigma)$ is a right $S_{i+1}$-torsor. \begin{definition} We define the set $O(\sigma)$ of orientations of $\sigma$ to be the $\pmone$-torsor $O(\sigma) = T(\sigma) \times_{S_{i+1},\sgn} \pmone$ which is the push-forward of the $S_{i+1}$-torsor $T(\sigma)$ with respect to the signature character $\sgn: S_{i+1} \to \pmone$. \end{definition} We give the basic properties which we need in order to consider (co)homology. \subsection{} When $i \ge 1$, the $\pmone$-torsor $O(\sigma)$ is isomorphic, as a set, to the quotient $T(\sigma)/A_{i+1}$ of $T(\sigma)$ by the action of the alternating group $A_{i+1} = \Ker\, \sgn \subset S_{i+1}$. When $i=0$, the $\pmone$-torsor $O(\sigma)$ is isomorphic to the product $O(\sigma) = T(\sigma) \times \pmone$, on which the group $\pmone$ acts via its natural action on the second factor. \subsection{} Let $i \ge 1$ and let $\sigma \in Y_i$. For $v \in V(\sigma)$ let $\sigma_v$ denote the $(i-1)$-simplex $\sigma_v = \sigma \times_{V(\sigma)} (V(\sigma) \setminus \{v\})$. There is a canonical map $s_v : O(\sigma) \to O(\sigma_v)$ of $\pmone$-torsors defined as follows. Let $\nu \in O(\sigma)$ and take a lift $\wt{\nu}:\{1,\ldots,i+1\} \xto{\cong} V(\sigma)$ of $\nu$ in $T(\sigma)$. Let $\wt{\iota}_v : \{1,\ldots,i\} \inj \{1,\ldots,i+1\}$ denote the unique order-preserving injection whose image is equal to $\{1,\ldots,i+1\} \setminus \{\wt{\nu}^{-1}(v)\}$. It follows from the definition of $\wt{\iota}_v$ that the composite $\wt{\nu} \circ \wt{\iota}_v: \{1,\ldots,i\} \to V(\sigma)$ induces a bijection $\wt{\nu}_v : \{1,\ldots,i\} \xto{\cong} V(\sigma) \setminus \{v\} = V(\sigma_v)$. We regard $\wt{\nu}_v$ as an element in $T(\sigma_v)$. We define $s_v : O(\sigma) \to O(\sigma_v)$ to be the map which sends $\nu \in O(\sigma)$ to $(-1)^{\wt{\nu}^{-1}(v)}$ times the class of $\wt{\nu}_v$. It is easy to check that the map $s_v$ is well-defined. \subsection{} Let $i \ge 2$ and $\sigma \in Y_i$. 
Let $v, v' \in V(\sigma)$ with $v \neq v'$. We have $(\sigma_v)_{v'} = (\sigma_{v'})_v$. Let us consider the two composites $s_{v'} \circ s_v : O(\sigma) \to O((\sigma_v)_{v'})$ and $s_v \circ s_{v'} : O(\sigma) \to O((\sigma_{v'})_v)$. It is easy to check that the equality \begin{equation} \label{formula1} s_{v'} \circ s_v (\nu) = (-1) \cdot s_v \circ s_{v'} (\nu) \end{equation} holds for every $\nu \in O(\sigma)$. \section{Cohomology and homology} \label{sec:def homology} \begin{definition} We say that a simplicial complex $Y_\bullet$ is locally finite if for any $i \ge 0$ and for any $\tau \in Y_i$, there exist only finitely many $\sigma \in Y_{i+1}$ such that $\tau$ is a face of $\sigma$. \end{definition} We give the four notions of homology or cohomology for a locally finite (generalized) simplicial complex. \subsection{} Let $Y_\bullet$ be a simplicial complex (\resp a locally finite simplicial complex). For an integer $i\ge 0$, we set $Y_i'=\coprod_{\sigma \in Y_i} O(\sigma)$ and regard it as a $\pmone$-set. Given an abelian group $M$, we regard the abelian groups $\bigoplus_{\nu \in Y_i'} M$ and $\prod_{\nu \in Y_i'}M$ as $\pmone$-modules in such a way that the component at $\eps\cdot \nu$ of $\eps \cdot (m_\nu)$ is equal to $\eps m_\nu$ for $\eps \in \pmone$ and for $\nu \in Y_i'$. \subsection{} For $i \ge 1$, we let $\wt{\partial}_{i,\oplus}: \bigoplus_{\nu \in Y_i'}M \to \bigoplus_{\nu \in Y_{i-1}'}M$ (\resp $\wt{\partial}_{i,\prod}: \prod_{\nu \in Y_i'}M \to \prod_{\nu \in Y_{i-1}'}M$) denote the homomorphism of abelian groups that sends $m = (m_\nu)_{\nu \in Y_i'}$ to the element $\wt{\partial}_i(m)$ whose coordinate at $\nu' \in O(\sigma') \subset Y_{i-1}'$ is equal to \begin{eqnarray}\label{eqn:boundary} \wt{\partial}_i(m)_{\nu'} = \sum_{(v,\sigma,\nu)} m_\nu. 
\end{eqnarray} Here in the sum on the right hand side, $(v,\sigma,\nu)$ runs over the triples of an element $v \in Y_0 \setminus V(\sigma')$, an element $\sigma \in Y_i$, and $\nu \in O(\sigma)$ which satisfy $V(\sigma) = V(\sigma') \amalg \{v\}$ and $s_v(\nu) = \nu'$. Note that the sum on the right hand side is a finite sum for $\wt{\partial}_{i,\oplus}$ by definition. One can also see that the sum is a finite sum in the case of $\wt{\partial}_{i,\prod}$, using the local finiteness of $Y_\bullet$. Each of $\wt{\partial}_{i,\oplus}$ and $\wt{\partial}_{i,\prod}$ is a homomorphism of $\pmone$-modules. Hence it induces a homomorphism $\partial_{i,\oplus} : (\bigoplus_{\nu \in Y_i'}M)_\pmone \to (\bigoplus_{\nu \in Y_{i-1}'}M)_\pmone$ (\resp $\partial_{i,\prod} : (\prod_{\nu \in Y_i'}M)_\pmone \to (\prod_{\nu \in Y_{i-1}'}M)_\pmone$) of abelian groups, where the subscript $\pmone$ means the coinvariants. \subsection{} It follows from the formula (\ref{formula1}) and the definition of $\partial_{i,\oplus}$ and $\partial_{i,\prod}$ that the family of abelian groups $((\bigoplus_{\nu \in Y_i'}M)_\pmone)_{i\ge 0}$ (resp. $((\prod_{\nu \in Y_i'}M)_\pmone)_{i\ge 0}$) indexed by the integer $i \ge 0$, together with the homomorphisms $\partial_{i,\oplus}$ (resp. $\partial_{i,\prod}$) for $i \ge 1$, forms a complex of abelian groups. The homology groups of this complex are the homology groups $H_*(Y_\bullet, M)$ (resp. the Borel-Moore homology groups $H_*^\BM(Y_\bullet, M)$) of $Y_\bullet$ with coefficients in $M$. \subsection{} We note that there is a canonical map $H_*(Y_\bullet, M) \to H_*^\BM(Y_\bullet, M)$ from homology to Borel-Moore homology, induced by the map of complexes $((\bigoplus_{\nu \in Y_i'}M)_\pmone)_{i\ge 0} \to ((\prod_{\nu \in Y_i'}M)_\pmone)_{i\ge 0}$ given by the inclusion in each degree. \subsection{} The family of abelian groups $(\Map_{\pmone}(Y_i', M))_{i\ge 0}$ (resp.
$(\Map_{\mathrm{fin}, {\pmone}}(Y_i', M))_{i\ge 0}$, where the subscript $\mathrm{fin}$ means finite support) of the $\pmone$-equivariant maps from $Y_i'$ to $M$ forms a complex of abelian groups in a similar manner. (One uses the local finiteness of $Y_\bullet$ for the latter.) The cohomology groups of this complex are the cohomology groups $H^*(Y_\bullet, M)$ (resp. the cohomology groups with compact support $H_c^*(Y_\bullet, M)$) of $Y_\bullet$ with coefficients in $M$. There is a canonical map from cohomology with compact support to cohomology. \section{Universal coefficient theorem} \label{univ_coeff} It follows from the definitions that the following universal coefficient theorem holds. \subsection{} For a simplicial complex $Y_\bullet$, there exist canonical short exact sequences $$ 0 \to H_i(Y_\bullet, \Z) \otimes M \to H_i(Y_\bullet, M) \to \Tor_1^\Z (H_{i-1}(Y_\bullet, \Z),M) \to 0 $$ and $$ 0 \to \Ext^1_\Z(H_{i-1}(Y_\bullet, \Z),M) \to H^i(Y_\bullet, M) \to \Hom_\Z (H_i(Y_\bullet, \Z),M) \to 0 $$ for any abelian group $M$. \subsection{} Similarly, for a locally finite simplicial complex $Y_\bullet$, we have short exact sequences $$ 0 \to \Ext^1_\Z(H_c^{i+1}(Y_\bullet, \Z),M) \to H^\BM_i(Y_\bullet,M) \to \Hom_\Z (H_c^i(Y_\bullet, \Z),M) \to 0 $$ and $$ 0 \to H^i_c (Y_\bullet, \Z) \otimes M \to H_c^i(Y_\bullet, M) \to \Tor_1^\Z (H_c^{i+1}(Y_\bullet, \Z),M) \to 0 $$ for any abelian group $M$. \section{Some induced maps} \label{sec:quasifinite} Let $f=(f_i)_{i \ge 0}: Y_\bullet \to Z_\bullet$ be a map of simplicial complexes. For each integer $i \ge 0$ and for each abelian group $M$, the map $f$ canonically induces homomorphisms $f_* : H_i(Y_\bullet, M) \to H_i(Z_\bullet,M)$ and $f^* : H^i(Z_\bullet,M) \to H^i(Y_\bullet,M)$. We say that the map $f$ is finite if the subset $f_i^{-1}(\sigma)$ of $Y_i$ is finite for any $i \ge 0$ and for any $\sigma \in Z_i$.
If $Y_\bullet$ and $Z_\bullet$ are locally finite, and if $f$ is finite, then $f$ canonically induces the pushforward homomorphism $f_* : H^\BM_i(Y_\bullet, M) \to H^\BM_i(Z_\bullet,M)$ and the pullback homomorphism $f^*: H_c^i(Z_\bullet,M) \to H_c^i(Y_\bullet,M)$. \section{Geometric realization} \label{sec:CWcomplex} Let $Y_\bullet$ be a simplicial complex. We associate to $Y_\bullet$ a CW complex $|Y_\bullet|$ which we call the geometric realization of $Y_\bullet$. Let $I(Y_\bullet)$ denote the disjoint union $\coprod_{i \ge 0} Y_i$. We define a partial order on the set $I(Y_\bullet)$ by saying that $\tau \le \sigma$ if and only if $\tau$ is a face of $\sigma$. For $\sigma \in I(Y_\bullet)$, we let $\Delta_\sigma$ denote the set of maps $f:V(\sigma) \to \R_{\ge 0}$ satisfying $\sum_{v \in V(\sigma)} f(v) =1$. We regard $\Delta_\sigma$ as a topological space whose topology is induced from that of the real vector space $\Map(V(\sigma),\R)$. If $\tau$ is a face of $\sigma$, we regard the space $\Delta_\tau$ as the closed subspace of $\Delta_\sigma$ which consists of the maps $V(\sigma) \to \R_{\ge 0}$ whose support is contained in the subset $V(\tau) \subset V(\sigma)$. We let $|Y_\bullet|$ denote the colimit $\varinjlim_{\sigma \in I(Y_\bullet)} \Delta_\sigma$ in the category of topological spaces and call it the geometric realization of $Y_\bullet$. It follows from the definition that the geometric realization $|Y_\bullet|$ has a canonical structure of a CW complex. \section{Cellular versus singular} \label{sec:cellular} We give a remark on the use of the term ``Borel-Moore homology'' in this section. Given a strict simplicial complex, its cohomology, homology and cohomology with compact support (for a locally finite strict simplicial complex) are usually defined as above, and called cellular (co)homology. See for example \cite{Hatcher}. On the other hand, there are also the singular (co)homology groups, and those with compact support, defined using the singular (co)chain complex.
It is well-known that the cellular (co)homology groups (with compact support) are isomorphic to the singular (co)homology groups (with compact support) of the geometric realization. The same proof applies to generalized simplicial complexes and gives an isomorphism between the cellular and the singular theories. For the Borel-Moore homology, we did not find a cellular definition as above in the literature, except in Hattori \cite{Hattori}, where he does not call it the Borel-Moore homology. He also gives a definition using singular chains and shows that the two homology groups are isomorphic. There are several definitions of Borel-Moore homology that may be associated to a (strict) simplicial complex. The definition of the Borel-Moore homology for PL manifolds is found in Haefliger \cite{Haefliger}. There is also a sheaf theoretic definition in Iversen \cite{Iversen}, as well as the general definition used in the context of intersection homology. However, we did not find a statement in the literature, and we have not checked, that the cellular definition in Hattori (which agrees with the one given in this article) computes groups isomorphic to those of the other Borel-Moore homology theories.
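As an aside, the sign conventions of Section~\ref{sec:orientation} can be checked mechanically. The following sketch (our own illustration; representing an orientation by an ordered vertex tuple together with a sign in $\{\pm 1\}$ is an implementation choice) implements the maps $s_v$ and verifies the anticommutation rule $s_{v'} \circ s_v = (-1)\cdot s_v \circ s_{v'}$ on a $2$-simplex.

```python
from itertools import permutations

def canon(t, eps):
    """Canonical representative of the orientation class: sort, track the sign."""
    t = list(t)
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if t[i] > t[j]:
                t[i], t[j] = t[j], t[i]
                eps = -eps
    return tuple(t), eps

def s(v, oriented):
    """The map s_v: drop the vertex v, multiply by (-1)^(position of v)."""
    t, eps = oriented
    k = t.index(v) + 1          # position of v in the ordering, 1-based
    return canon(t[:k - 1] + t[k:], eps * (-1) ** k)

sigma = ((1, 2, 3), +1)         # an oriented 2-simplex
for v, vp in permutations((1, 2, 3), 2):
    t1, e1 = s(vp, s(v, sigma))
    t2, e2 = s(v, s(vp, sigma))
    assert t1 == t2 and e1 == -e2   # s_{v'} s_v = (-1) . s_v s_{v'}
```

This anticommutation is exactly what makes the boundary maps of Section~\ref{sec:def homology} square to zero after passing to $\pmone$-coinvariants.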
Promptuarii Iconum Insigniorum a Seculo Hominium – opublikowany w 1553 w Lyonie przez Guillaume'a Rouilé (ok. 1518–1589) zbiór krótkich biografii postaci historycznych i legendarnych wraz z imaginacyjnymi portretami w kształcie monet. W albumie przedstawionych jest wielu przedstawicieli średniowiecza, a także postacie biblijne i związane z historią żydowską, takie jak: królowie Izraela i Judy, patriarchowie semiccy (od Sema do Teracha), prorocy, sędziowie Hasmoneusze, dynastia Heroda. Nie brakuje również postaci antycznych, jak bohaterowie mitologii, władcy starożytnej Grecji, królowie i republikanie rzymscy. Przykładowe wizerunki: Francuskie utwory literackie Utwory literackie z 1553 Zbiory utworów literackich
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,638
Le col de Tracens est un col de montagne pédestre des Pyrénées à d'altitude dans le département français des Hautes-Pyrénées, en Occitanie. Il se trouve dans la vallée dets Coubous en pays Toy. Toponymie Géographie Le col de Tracens est situé entre le pic de la Touatère () au nord et le pic de Madaméte () au sud. Il surplombe à l'ouest le lac Nère () et le lac de Madamette () à l'est. Histoire Protection environnementale Le col fait partie d'une zone naturelle protégée, classée ZNIEFF de type 1, « massif en rive gauche du Bastan », et de type 2, « vallées de Barèges et de Luz ». Voies d'accès Le versant ouest est accessible par un sentier de randonnée depuis le lac dets Coubous et par le versant sud depuis la cabane d'Aygues Cluses. Notes et références Notes Références Voir aussi Articles connexes Liste des cols des Hautes-Pyrénées Liste des cols des Pyrénées Liens externes Tracens Tracens Massif du Néouvielle
The 686 GLCR Multi jacket gets its name from multi-season, multi-adventure, and multi-conditions. In other words, 686 intends for you to use the Multi jacket for more than just snowboarding or skiing. This will be the jacket you use for hiking, backpacking, and biking: basically, your year-round waterproof jacket. Customers usually buy a ski- or snowboard-specific jacket (often of heavier materials) and then a rain jacket. 686 wanted to create one jacket for all of these sports and conditions. We had a chance to test it out in Colorado during some backcountry splitboarding as well as in rain and casual use. Weather changes pretty fast here, so we always keep a shell with us. When we first saw the GLCR Multi at the SIA 2017 show, we were curious about the suspender-like straps on the inside of the jacket. The idea is that the user can take the jacket off but leave the straps attached, so the jacket rests on the back and lets the user cool off. We thought that was a nifty idea, but not something we would probably use. Fast forward a few months and we got to actually test the jacket out and put it through our review process. We were pleasantly surprised by a number of features on the jacket. When the jacket came to us for testing, we promptly took it into a snowstorm and toured up a mountain with it on. This is the ultimate test of any waterproof breathable jacket. No shell can handle the heat buildup of touring; even the most breathable waterproof membranes cannot manage the water vapor created by sweating on the uphill. We find the best way to deal with this is to open the vents to the max and try to get that humidity out. The Multi has two huge vents on the front, which double as the pockets. They did a really good job of letting air in to keep the moisture pretty well managed. The large pockets are lined with stretchy mesh and have a smaller stretch pocket on the inside. The smaller pocket was a perfect size for our large iPhone.
What we were most impressed with on the Multi was the fit and the small details. The fit was not too baggy, nor was it alpine-tight. The hood is not helmet-compatible, but that is totally OK with us. Having a closer-fitting hood is more useful for year-round use. Ever try to hike with your ski jacket hood on? Then you know what we are talking about. The collar is the perfect height and is lined with merino wool for moisture management and comfort. 686 uses GoreTex Paclite for the waterproof breathable membrane. While there are other jackets out there that use Pro Shell and other 3-layer WPB membranes, they are also more expensive. For the money, you get a great value with the Multi jacket. Paclite is a 2.5-layer membrane and is just as waterproof as its big brothers. The biggest takeaway with a GoreTex membrane (aside from it obviously being waterproof) is the ease of care: just wash and dry. With the jacket being more streamlined and minimalist, it is also very lightweight and packable; it comes with a stuff sack that you can pack the jacket into. We like the basic function of the Multi: it just works. Plus, it looks pretty sharp. 686 did their homework and designed a versatile jacket that provides a good fit and value for the customer. MSRP $300. Grab it direct from 686!
I'm back at it for day 2 of trying a new technique. This time around I'm using gold linoleum, which certainly cuts differently than the battleship grey. Like yesterday, a dry point needle is the tool utilized, and I'm scratching with the flow of the work. The block is 6″ x 6″. The block is inked lightly and then proofed using mulberry paper. Not a great impression, but it gives me a clue where I'm heading with the work. I could have just done all this work on a scratch board and skipped the whole printing process, but I guess that's not me. The subject matter is a little Golden Crowned Kinglet that had a minor incident with our window this winter. I had to bring it inside for 20 minutes before releasing it during our super cold snap back in December. I knew back then he would turn into a print eventually. Hopefully the Sharp-Shinned Hawk will make it to paper too. Thanks! Using only the dry point tool is driving me nuts, and I'll probably be pulling out carving tools to excavate my white zones. I'm having second thoughts about this process… but I'll keep plugging along. Have you tried printing white on black yet? Not yet. I need to pull out the oil based ink and see what I have available. My ink might be dried up.
\section{Introduction} \label{sec:intro} Since the discovery of the $X(3872)$~\cite{Choi:2003ue}, dozens of charmonium-like $XYZ$ states have been reported~\cite{Agashe:2014kda}. They are good candidates for tetraquark states, which are made of two quarks and two antiquarks. In the past year, the LHCb Collaboration observed two hidden-charm pentaquark resonances, $P_c(4380)$ and $P_c(4450)$, in the $J/\psi p$ invariant mass spectrum~\cite{lhcb}. They are good candidates for pentaquark states, which are made of four quarks and one antiquark. These states all belong to the family of exotic states, which cannot be explained within the traditional quark model and are of particular importance for understanding the low-energy behaviour of Quantum Chromodynamics (QCD)~\cite{Chen:2016qju}. Before the LHCb observation of the $P_c(4380)$ and $P_c(4450)$~\cite{lhcb}, there had been extensive theoretical discussions on the existence of hidden-charm pentaquark states~\cite{Wu:2010jy,Yang:2011wz,Xiao:2013yca,Uchino:2015uha,Karliner:2015ina,Garzon:2015zva,Wang:2011rga,Yuan:2012wz,Huang:2013mua}. This experimental observation triggered further studies to explain their nature, such as meson-baryon molecules~\cite{Chen:2015loa,Roca:2015dva,He:2015cea,Huang:2015uda,Meissner:2015mza,Xiao:2015fia,Chen:2016heh}, diquark-diquark-antiquark pentaquarks~\cite{Maiani:2015vwa,Anisovich:2015cia,Ghosh:2015ksa,Wang:2015epa,Wang:2015ixb}, compact diquark-triquark pentaquarks~\cite{Lebed:2015tna,Zhu:2015bba}, the topological soliton model~\cite{Scoccola:2015nia}, genuine multiquark states other than molecules~\cite{Mironov:2015ica}, and kinematical effects related to the triangle singularity~\cite{Guo:2015umn,Liu:2015fea,Mikhasenko:2015vca}, etc. Their production and decay properties are also of interest, and have been studied in Refs.~\cite{Li:2015gta,Cheng:2015cca,Wang:2015jsa,Kubarovsky:2015aaa,Karliner:2015voa,Lu:2015fva,Hsiao:2015nna,Wang:2015qlf,Feijoo:2015kts,Wang:2016vxa,Schmidt:2016cmd}, etc.
For more extensive discussions, see Refs.~\cite{Chen:2016qju,Burns:2015dwa,Oset:2016lyh}. In this paper we use the method of QCD sum rules to study the mass spectrum of hidden-charm pentaquarks having spin $J = {1\over2}/{3\over2}/{5\over2}$ and quark contents $uud c \bar c$. We shall investigate the possibility of interpreting the $P_c(4380)$ and $P_c(4450)$ as hidden-charm pentaquark states. We shall also investigate other possible hidden-charm pentaquark states. The present discussion is an extension of our recent work briefly reported in Ref.~\cite{Chen:2015moa}. In the calculation we need the resonance parameters of the $P_c(4380)$ and $P_c(4450)$ measured in the LHCb experiment~\cite{lhcb}: \begin{eqnarray} \nonumber M_{P_c(4380)}&=&4380\pm 8\pm 29\, \mathrm{MeV} \, , \\ \nonumber \Gamma_{P_c(4380)}&=&205\pm18\pm86\, \mathrm{MeV} \, , \\ \nonumber M_{P_c(4450)}&=&4449.8\pm 1.7\pm 2.5 \,\mathrm{MeV} \, , \\ \nonumber \Gamma_{P_c(4450)}&=&39\pm5\pm19\, \mathrm{MeV} \, , \end{eqnarray} as well as the preferred spin-parity assignments $(3/2^-, 5/2^+)$ for the $P_c(4380)$ and $P_c(4450)$, respectively~\cite{lhcb}. This paper is organized as follows. After this Introduction, we systematically construct the local pentaquark interpolating currents having spin $J = 1/2$ and quark contents $uud c \bar c$ in Sec.~\ref{sec:current}. The currents having spin $J = 3/2$ and $J = 5/2$ are similarly constructed in Appendixes~\ref{app:spin32} and \ref{app:spin52}, respectively. These currents are used to perform QCD sum rule analyses in Sec.~\ref{sec:sumrule} and numerical analyses in Sec.~\ref{sec:numerical}. The results are discussed and summarized in Sec.~\ref{sec:summary}. An example applying the Fierz transformation and the color rearrangement is given in Appendix~\ref{app:example}. This paper has a supplementary file ``OPE.nb'' containing all the spectral densities.
\section{Local Pentaquark Currents of Spin 1/2} \label{sec:current} In this section we systematically construct local pentaquark interpolating currents having spin $J = 1/2$ and quark contents $uud c \bar c$. There are two possible color configurations, $[\bar c_d c_d][\epsilon^{abc}q_a q_b q_c]$ and $[\bar c_d q_d][\epsilon^{abc}c_a q_b q_c]$, where $a \cdots d$ are color indices, $q$ represents the up, down or strange quark, and $c$ represents the charm quark. These two configurations, if they are local, can be related by the Fierz transformation as well as the color rearrangement: \begin{eqnarray} \delta^{de} \epsilon^{abc} &=& \delta^{da} \epsilon^{ebc} + \delta^{db} \epsilon^{aec} + \delta^{dc} \epsilon^{abe} \, . \label{eq:cr} \end{eqnarray} With this relation, the color configurations $[\bar c^d c^d][\epsilon_{abc}q_1^a q_2^b q_3^c]$, $[\bar c^d q_1^d][\epsilon_{abc}c^a q_2^b q_3^c]$, $[\bar c^d q_2^d][\epsilon_{abc}c^a q_1^b q_3^c]$ and $[\bar c^d q_3^d][\epsilon_{abc}c^a q_1^b q_2^c]$ can actually be related, where $q_{1,2,3}$ represent three light quark fields. There are several formulae related to the Fierz transformation, some of which were given in Refs.~\cite{Chen:2006hy,Chen:2008qv}. 
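As a quick sanity check (an illustration by the editor, not part of the paper or its supplementary material), the color rearrangement identity above can be verified numerically by brute force over all $3^5$ assignments of the color indices. The function and variable names below are our own:

```python
from itertools import product

def eps(i, j, k):
    # Totally antisymmetric Levi-Civita symbol in three (color) dimensions.
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

def delta(i, j):
    # Kronecker delta.
    return 1 if i == j else 0

# Check delta^{de} eps^{abc} = delta^{da} eps^{ebc}
#                            + delta^{db} eps^{aec}
#                            + delta^{dc} eps^{abe}
# for every assignment of the color indices a, b, c, d, e in {0, 1, 2}.
ok = all(
    delta(d, e) * eps(a, b, c)
    == delta(d, a) * eps(e, b, c)
     + delta(d, b) * eps(a, e, c)
     + delta(d, c) * eps(a, b, e)
    for a, b, c, d, e in product(range(3), repeat=5)
)
print(ok)  # True
```

The identity is the statement that antisymmetrizing four indices in three dimensions gives zero, which is why the check passes for all 243 index combinations.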
In this paper we also need to use the product of two Dirac matrices with two symmetric Lorentz indices: \begin{tiny} \begin{eqnarray} \nonumber && \left ( \begin{array}{l} g_{\mu\nu} \mathbf{1} \otimes \mathbf{1} \\ g_{\mu\nu} \gamma_\rho \otimes \gamma^\rho \\ g_{\mu\nu} \sigma_{\rho\sigma} \otimes \sigma^{\rho\sigma} \\ g_{\mu\nu} \gamma_{\rho} \gamma_5 \otimes \gamma^{\rho} \gamma_5 \\ g_{\mu\nu} \gamma_5 \otimes \gamma_5 \\ \gamma_\mu \otimes \gamma_\nu + (\mu \leftrightarrow \nu) \\ \gamma_\mu \gamma_5 \otimes \gamma_\nu \gamma_5 + (\mu \leftrightarrow \nu) \\ \sigma_{\mu \rho} \otimes \sigma_{\nu \rho} + (\mu \leftrightarrow \nu) \end{array} \right )_{a b, c d} \\ && ~~~~~~~~ = \left ( \begin{array}{llllllll} {1\over4} & {1\over4} & {1\over8} & -{1\over4} & {1\over4} & 0 & 0 & 0 \\ 1 & -{1\over2} & 0 & -{1\over2} & -1 & 0 & 0 & 0 \\ 3 & 0 & -{1\over2} & 0 & 3 & 0 & 0 & 0 \\ -1 & -{1\over2} & 0 & -{1\over2} & 1 & 0 & 0 & 0 \\ {1\over4} & -{1\over4} & {1\over8} & {1\over4} & {1\over4} & 0 & 0 & 0 \\ {1\over2} & -{1\over2} & {1\over4} & -{1\over2} & -{1\over2} & {1\over2} & {1\over2} & -{1\over2} \\ -{1\over2} & -{1\over2} & -{1\over4} & -{1\over2} & {1\over2} & {1\over2} & {1\over2} & {1\over2} \\ {3\over2} & {1\over2} & -{1\over4} & -{1\over2} & {3\over2} & -1 & 1 & 0 \end{array} \right ) \left ( \begin{array}{l} g_{\mu\nu} \mathbf{1} \otimes \mathbf{1} \\ g_{\mu\nu} \gamma_\rho \otimes \gamma^\rho \\ g_{\mu\nu} \sigma_{\rho\sigma} \otimes \sigma^{\rho\sigma} \\ g_{\mu\nu} \gamma_{\rho} \gamma_5 \otimes \gamma^{\rho} \gamma_5 \\ g_{\mu\nu} \gamma_5 \otimes \gamma_5 \\ \gamma_\mu \otimes \gamma_\nu + (\mu \leftrightarrow \nu) \\ \gamma_\mu \gamma_5 \otimes \gamma_\nu \gamma_5 + (\mu \leftrightarrow \nu) \\ \sigma_{\mu \rho} \otimes \sigma_{\nu \rho} + (\mu \leftrightarrow \nu) \end{array} \right )_{a d, b c}. 
\end{eqnarray} \end{tiny} However, the detailed relations between $[\bar c_d c_d][\epsilon^{abc}q_a q_b q_c]$ and $[\bar c_d q_d][\epsilon^{abc}c_a q_b q_c]$ cannot be obtained easily; we show just one example in Appendix~\ref{app:example}. We also systematically construct local pentaquark interpolating currents having spin $J = 3/2$ and $J = 5/2$, and list the results in Appendixes~\ref{app:spin32} and \ref{app:spin52}, respectively. \subsection{Currents of $[\bar c_d c_d][\epsilon^{abc}u_a d_b u_c]$} In this subsection, we construct the currents of the color configuration $[\bar c_d c_d][\epsilon^{abc}u_a d_b u_c]$. We investigate only currents of the following type \begin{eqnarray} \eta &=& [\epsilon^{abc} (u^T_a C \Gamma_i d_b) \Gamma_j u_c] [\bar c_d \Gamma_k c_d] \, , \end{eqnarray} where $\Gamma_{i,j,k}$ are various Dirac matrices. The currents of the other types $[\bar c_d \Gamma_k c_d][\epsilon^{abc} (u_a^T C \Gamma_i u_b) \Gamma_j d_c]$ and \\ $[\bar c_d \Gamma_k c_d][\epsilon^{abc} (d_a^T C \Gamma_i u_b) \Gamma_j u_c]$ can be related to these currents by using the Fierz transformation. We can easily construct them based on the result of Ref.~\cite{Chen:2008qv} that there are three independent local light baryon fields belonging to the flavor octet and having positive parity: \begin{eqnarray} \nonumber N^N_1 &=& \epsilon_{abc} \epsilon^{ABD} \lambda_{DC}^N (q_A^{aT} C q_B^b) \gamma_5 q_C^c \, , \\ N^N_2 &=& \epsilon_{abc} \epsilon^{ABD} \lambda_{DC}^N (q_A^{aT} C \gamma_5 q_B^b) q_C^c \, , \label{eq:baryon} \\ \nonumber N^N_{3\mu} &=& \epsilon_{abc} \epsilon^{ABD} \lambda_{DC}^N (q_A^{aT} C \gamma_\mu \gamma_5 q_B^b) \gamma_5 q_C^c \, , \end{eqnarray} where $A \cdots D$ are flavor indices, and $q_{A}=(u\, ,d\, ,s)$ is the flavor-triplet light quark field.
Together with light baryon fields having the negative parity, $\gamma_5 N^N_{1,2}$ and $\gamma_5 N^N_{3\mu}$, and the charmonium fields: \begin{eqnarray} &\bar c_d c_d \, [0^+] \, , \bar c_d \gamma_5 c_d \, [0^-] \, ,& \\ \nonumber &\bar c_d \gamma_\mu c_d \, [1^-] \, , \bar c_d \gamma_\mu \gamma_5 c_d \, [1^+] \, , \bar c_d \sigma_{\mu\nu} c_d \, [1^\pm] \, ,& \end{eqnarray} we can construct the following currents having $J^P=1/2^+$ and quark contents $uud c \bar c$: \begin{eqnarray} \nonumber \eta_1 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_5 u_c] [\bar c_d c_d] \, , \\ \nonumber \eta_2 &=& [\epsilon^{abc} (u^T_a C d_b) u_c] [\bar c_d \gamma_5 c_d] \, , \\ \nonumber \eta_3 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) u_c] [\bar c_d c_d] \, , \\ \nonumber \eta_4 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_5 u_c] [\bar c_d \gamma_5 c_d] \, , \\ \nonumber \eta_5 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu \gamma_5 u_c] [\bar c_d \gamma_\mu c_d] \, , \\ \nonumber \eta_6 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu u_c] [\bar c_d \gamma_\mu \gamma_5 c_d] \, , \\ \nonumber \eta_7 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu u_c] [\bar c_d \gamma_\mu c_d] \, , \\ \eta_8 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu \gamma_5 u_c] [\bar c_d \gamma_\mu \gamma_5 c_d] \, , \\ \nonumber \eta_9 &=& [\epsilon^{abc} (u^T_a C d_b) \sigma_{\mu\nu} \gamma_5 u_c] [\bar c_d \sigma_{\mu\nu} c_d] \, , \\ \nonumber \eta_{10} &=& [\epsilon^{abc} (u^T_a C d_b) \sigma_{\mu\nu} u_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 c_d] \, , \\ \nonumber \eta_{11} &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \sigma_{\mu\nu} u_c] [\bar c_d \sigma_{\mu\nu} c_d] \, , \\ \nonumber \eta_{12} &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \sigma_{\mu\nu} \gamma_5 u_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 c_d] \, . 
\\ \nonumber \eta_{13} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) u_c] [\bar c_d \gamma_\mu c_d] \, , \\ \nonumber \eta_{14} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_5 u_c] [\bar c_d \gamma_\mu \gamma_5 c_d] \, , \\ \nonumber \eta_{15} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\nu u_c] [\bar c_d \sigma_{\mu\nu} c_d] \, , \\ \nonumber \eta_{16} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\nu \gamma_5 u_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 c_d] \, . \label{def:scalaretacurrent} \end{eqnarray} We can verify the following relations \begin{eqnarray} \nonumber \eta_{9} &=& \eta_{10} \, , \\ \eta_{11} &=& \eta_{12} \, , \\ \nonumber \eta_{11} &=& \eta_{9} + 2i \eta_{15} - 2i \eta_{16} \, . \end{eqnarray} Hence, only 13 currents in Eq.~\eqref{def:scalaretacurrent} are independent. All of them have $J^P = 1/2^+$, while their partner currents, $\gamma_5 \eta_i$, have $J^P = 1/2^-$. We shall not use all of them to perform QCD sum rule analyses, but select those containing pseudoscalar ($\bar c_d \gamma_5 c_d$) and vector ($\bar c_d \gamma_\mu c_d$) components: \begin{eqnarray} \eta_2 - \eta_4 &=& [\epsilon^{abc} (u^T_a C d_b) u_c] [\bar c_d \gamma_5 c_d] \label{def:eta24} \\ \nonumber && ~~~~~~~~~~ - [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_5 u_c] [\bar c_d \gamma_5 c_d] \, , \\ \eta_5 - \eta_7 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu \gamma_5 u_c] [\bar c_d \gamma_\mu c_d] \label{def:eta57} \\ \nonumber && ~~~~~~~~~~ - [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu u_c] [\bar c_d \gamma_\mu c_d] \, , \\ \eta_{13} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) u_c] [\bar c_d \gamma_\mu c_d] \, . \label{def:eta13} \end{eqnarray} Their internal structures are quite simple, suggesting that they couple well to the $[p \eta_c]$, $[p J/\psi]$, and $[N^* J/\psi]$ channels, respectively.
Especially, $\eta_2 - \eta_4$ and $\eta_5 - \eta_7$ both contain the ``Ioffe's baryon current'', which couples strongly to the lowest-lying nucleon state~\cite{Belyaev:1982sa,Espriu:1983hu}. \subsection{Currents of $[\bar c_d u_d][\epsilon^{abc} u_a d_b c_c]$} In this subsection, we construct the currents of the type $[\bar c_d \Gamma_k u_d][\epsilon^{abc} (u_a^T C \Gamma_i d_b) \Gamma_j c_c]$. The currents of the other types $[\bar c_d \Gamma_k u_d][\epsilon^{abc} (u_a^T C \Gamma_i c_b) \Gamma_j d_c]$ and $[\bar c_d \Gamma_k u_d][\epsilon^{abc} (c_a^T C \Gamma_i d_b) \Gamma_j u_c]$, etc. can be related to these currents by using the Fierz transformation. We find the following currents having $J^P=1/2^+$ and quark contents $uud c \bar c$: \begin{eqnarray} \nonumber \xi_1 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_5 c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_2 &=& [\epsilon^{abc} (u^T_a C d_b) c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_3 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_4 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_5 c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_5 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\mu u_d] \, , \\ \nonumber \xi_6 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu c_c] [\bar c_d \gamma_\mu \gamma_5 u_d] \, , \\ \nonumber \xi_7 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu c_c] [\bar c_d \gamma_\mu u_d] \, , \\ \nonumber \xi_8 &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\mu \gamma_5 u_d] \, , \\ \nonumber \xi_9 &=& [\epsilon^{abc} (u^T_a C d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{10} &=& [\epsilon^{abc} (u^T_a C d_b) \sigma_{\mu\nu} c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{11} &=& [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \sigma_{\mu\nu} c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{12} &=& [\epsilon^{abc} (u^T_a C 
\gamma_5 d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{13} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_{14} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_\mu c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_{15} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\mu c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_{16} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_{17} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_5 c_c] [\bar c_d \gamma_\mu u_d] \, , \\ \nonumber \xi_{18} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) c_c] [\bar c_d \gamma_\mu \gamma_5 u_d] \, , \\ \nonumber \xi_{19} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) c_c] [\bar c_d \gamma_\mu u_d] \, , \\ \nonumber \xi_{20} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_5 c_c] [\bar c_d \gamma_\mu \gamma_5 u_d] \, , \\ \nonumber \xi_{21} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \gamma_\nu u_d] \, , \\ \xi_{22} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \sigma_{\mu\nu} c_c] [\bar c_d \gamma_\nu \gamma_5 u_d] \, , \\ \nonumber \xi_{23} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \sigma_{\mu\nu} c_c] [\bar c_d \gamma_\nu u_d] \, , \\ \nonumber \xi_{24} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \gamma_\nu \gamma_5 u_d] \, , \\ \nonumber \xi_{25} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_\nu \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{26} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_\nu c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{27} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\nu c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{28} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 
d_b) \gamma_\nu \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{29} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_{30} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) \sigma_{\mu\nu} c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_{31} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) \sigma_{\mu\nu} c_c] [\bar c_d u_d] \, , \\ \nonumber \xi_{32} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \gamma_5 u_d] \, , \\ \nonumber \xi_{33} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\nu u_d] \, , \\ \nonumber \xi_{34} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) \gamma_\mu c_c] [\bar c_d \gamma_\nu \gamma_5 u_d] \, , \\ \nonumber \xi_{35} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) \gamma_\mu c_c] [\bar c_d \gamma_\nu u_d] \, , \\ \nonumber \xi_{36} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\nu \gamma_5 u_d] \, , \\ \nonumber \xi_{37} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{38} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} d_b) c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{39} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) c_c] [\bar c_d \sigma_{\mu\nu} u_d] \, , \\ \nonumber \xi_{40} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 d_b) \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 u_d] \, , \\ \nonumber \xi_{41} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\nu\rho} u_d] \, , \\ \nonumber \xi_{42} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} d_b) \sigma_{\mu\nu} c_c] [\bar c_d \sigma_{\nu\rho} \gamma_5 u_d] \, , \\ \nonumber \xi_{43} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} \gamma_5 d_b) \sigma_{\mu\nu} c_c] 
[\bar c_d \sigma_{\nu\rho} u_d] \, , \\ \nonumber \xi_{44} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} \gamma_5 d_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\nu\rho} \gamma_5 u_d] \, . \end{eqnarray} We can verify the following relations \begin{eqnarray} \nonumber \xi_{9} &=& \xi_{10} \, , \\ \nonumber \xi_{11} &=& \xi_{12} \, , \\ \xi_{29} &=& \xi_{31} \, , \\ \nonumber \xi_{30} &=& \xi_{32} \, , \\ \nonumber \xi_{37} &=& \xi_{40} \, , \\ \nonumber \xi_{38} &=& \xi_{39} \, . \end{eqnarray} Then there are only 38 independent currents left. To perform QCD sum rule analyses, we shall use \begin{eqnarray} \xi_2 - \xi_4 &=& [\epsilon^{abc} (u^T_a C d_b) c_c] [\bar c_d \gamma_5 u_d] \label{def:xi24} \\ \nonumber && ~~~~~~~~~~ - [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_5 c_c] [\bar c_d \gamma_5 u_d] \, , \\ \xi_5 - \xi_7 &=& [\epsilon^{abc} (u^T_a C d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\mu u_d] \label{def:xi57} \\ \nonumber && ~~~~~~~~~~ - [\epsilon^{abc} (u^T_a C \gamma_5 d_b) \gamma_\mu c_c] [\bar c_d \gamma_\mu u_d] \, , \\ \xi_{14} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_\mu c_c] [\bar c_d \gamma_5 u_d] \, , \label{def:xi14} \\ \xi_{16} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_5 u_d] \, , \label{def:xi16} \\ \xi_{17} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu d_b) \gamma_5 c_c] [\bar c_d \gamma_\mu u_d] \, , \label{def:xi17} \\ \xi_{19} &=& [\epsilon^{abc} (u^T_a C \gamma_\mu \gamma_5 d_b) c_c] [\bar c_d \gamma_\mu u_d] \, , \label{def:xi19} \end{eqnarray} which well couple to the $[\Lambda_c \bar D]$, $[\Lambda_c \bar D^*]$, $[\Sigma_c \bar D]$, $[\Lambda^*_c \bar D]$, $[\Sigma_c^* \bar D^*]$ and $[\Lambda^*_c \bar D^*]$ channels, respectively. \subsection{Currents of $[\bar c_d d_d][\epsilon^{abc} u_a u_b c_c]$} In this subsection, we construct the currents of the type $[\bar c_d \Gamma_k d_d][\epsilon^{abc} (u_a^T C \Gamma_i u_b) \Gamma_j c_c]$. 
The currents of the other types $[\bar c_d \Gamma_k d_d][\epsilon^{abc} (u_a^T C \Gamma_i c_b) \Gamma_j u_c]$ and $[\bar c_d \Gamma_k d_d][\epsilon^{abc} (c_a^T C \Gamma_i u_b) \Gamma_j u_c]$ can be related to these currents by using the Fierz transformation. We find the following currents having $J^P=1/2^+$ and quark contents $uud c \bar c$: \begin{eqnarray} \nonumber \psi_1 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_\mu \gamma_5 c_c] [\bar c_d d_d] \, , \\ \nonumber \psi_2 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_\mu c_c] [\bar c_d \gamma_5 d_d] \, , \\ \nonumber \psi_3 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_5 c_c] [\bar c_d \gamma_\mu d_d] \, , \\ \nonumber \psi_4 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) c_c] [\bar c_d \gamma_\mu \gamma_5 d_d] \, , \\ \nonumber \psi_5 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \gamma_\nu d_d] \, , \\ \nonumber \psi_6 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \sigma_{\mu\nu} c_c] [\bar c_d \gamma_\nu \gamma_5 d_d] \, , \\ \nonumber \psi_7 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_\nu \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} d_d] \, , \\ \nonumber \psi_8 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_\nu c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 d_d] \, , \\ \nonumber \psi_9 &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} u_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d d_d] \, , \\ \nonumber \psi_{10} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} u_b) \sigma_{\mu\nu} c_c] [\bar c_d \gamma_5 d_d] \, , \\ \nonumber \psi_{11} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) \sigma_{\mu\nu} c_c] [\bar c_d d_d] \, , \\ \psi_{12} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \gamma_5 d_d] \, , \\ \nonumber \psi_{13} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} u_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\nu d_d] \, , \\ \nonumber \psi_{14} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} 
u_b) \gamma_\mu c_c] [\bar c_d \gamma_\nu \gamma_5 d_d] \, , \\ \nonumber \psi_{15} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) \gamma_\mu c_c] [\bar c_d \gamma_\nu d_d] \, , \\ \nonumber \psi_{16} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) \gamma_\mu \gamma_5 c_c] [\bar c_d \gamma_\nu \gamma_5 d_d] \, , \\ \nonumber \psi_{17} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} u_b) \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} d_d] \, , \\ \nonumber \psi_{18} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} u_b) c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 d_d] \, , \\ \nonumber \psi_{19} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) c_c] [\bar c_d \sigma_{\mu\nu} d_d] \, , \\ \nonumber \psi_{20} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\nu} \gamma_5 u_b) \gamma_5 c_c] [\bar c_d \sigma_{\mu\nu} \gamma_5 d_d] \, , \\ \nonumber \psi_{21} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} u_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\nu\rho} d_d] \, , \\ \nonumber \psi_{22} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} u_b) \sigma_{\mu\nu} c_c] [\bar c_d \sigma_{\nu\rho} \gamma_5 d_d] \, , \\ \nonumber \psi_{23} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} \gamma_5 u_b) \sigma_{\mu\nu} c_c] [\bar c_d \sigma_{\nu\rho} d_d] \, , \\ \nonumber \psi_{24} &=& [\epsilon^{abc} (u^T_a C \sigma_{\mu\rho} \gamma_5 u_b) \sigma_{\mu\nu} \gamma_5 c_c] [\bar c_d \sigma_{\nu\rho} \gamma_5 d_d] \, . \end{eqnarray} We can verify the following relations \begin{eqnarray} \nonumber \psi_9 &=& \psi_{11} \, , \\ \psi_{10} &=& \psi_{12} \, , \\ \nonumber \psi_{18} &=& \psi_{19} \, , \\ \nonumber \psi_{17} &=& \psi_{20} \, . \end{eqnarray} Then there are 20 independent currents left. 
To perform QCD sum rule analyses, we shall use \begin{eqnarray} \psi_2 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_\mu c_c] [\bar c_d \gamma_5 d_d] \, , \label{def:psi2} \\ \psi_3 &=& [\epsilon^{abc} (u^T_a C \gamma_\mu u_b) \gamma_5 c_c] [\bar c_d \gamma_\mu d_d] \, , \label{def:psi3} \end{eqnarray} which couple well to the $[\Sigma^*_c \bar D]$ and $[\Sigma_c^* \bar D^*]$ channels, respectively. \subsection{A short summary} In the previous subsections, we have systematically constructed all local pentaquark interpolating currents having $J^P=1/2^+$ and quark contents $uud c \bar c$. We find 13 independent currents of the color configuration $[\bar c_d c_d][\epsilon^{abc}u_a d_b u_c]$, 38 independent currents of the color configuration $[\bar c_d u_d][\epsilon^{abc} u_a d_b c_c]$, and 20 independent currents of the color configuration $[\bar c_d d_d][\epsilon^{abc} u_a u_b c_c]$. Naively, one might conclude that there are altogether 71 independent currents. Actually, some of these currents can be related by applying the color rearrangement in Eq.~(\ref{eq:cr}). Since there are 71 currents in all, it would be too complicated to carry out these transformations one by one. However, we estimate that at least one third of these 71 currents are not independent and can be expressed in terms of the others. We shall not discuss this further, but proceed to perform QCD sum rule analyses using the currents selected in this section as well as those selected in Appendixes~\ref{app:spin32} and \ref{app:spin52}. \section{QCD sum rule analyses} \label{sec:sumrule} In the following, we shall use the method of QCD sum rules~\cite{Shifman:1978bx,Reinders:1984sr,Nielsen:2009uh,Chen:2010ze,Chen:2015ata} to investigate the currents selected in Sec.~\ref{sec:current} and in Appendixes~\ref{app:spin32} and \ref{app:spin52}.
The results obtained using the $J^P=1/2^+$ currents $\eta_2 - \eta_4$, $\eta_5 - \eta_7$, $\eta_{13}$, $\xi_2 - \xi_4$, $\xi_5 - \xi_7$, $\xi_{14}$, $\xi_{16}$, $\xi_{17}$, $\xi_{19}$, $\psi_2$, and $\psi_3$ are listed in Table~\ref{tab:spin12}, those obtained using the $J^P=3/2^-$ currents $\eta_{5\mu} - \eta_{7\mu}$, $\eta_{18\mu}$, $\eta_{19\mu}$, $\xi_{5\mu} - \xi_{7\mu}$, $\xi_{18\mu}$, $\xi_{20\mu}$, $\xi_{25\mu}$, $\xi_{27\mu}$, $\xi_{33\mu}$, $\xi_{35\mu}$, $\psi_{2\mu}$, $\psi_{5\mu}$, and $\psi_{9\mu}$ are listed in Table~\ref{tab:spin32}, and those obtained using the $J^P=5/2^+$ currents $\eta_{11\mu\nu}$, $\xi_{13\mu\nu}$, $\xi_{15\mu\nu}$, and $\psi_{3\mu\nu}$ are listed in Table~\ref{tab:spin52}. We use $J$, $J_{\mu}$, and $J_{\mu\nu}$ to denote the currents having spin $J=1/2$, $3/2$, and $5/2$, respectively, and assume they couple to the physical states $X$ through \begin{eqnarray} \nonumber \langle 0 | J | X_{1/2} \rangle &=& f_X u (p) \, , \\ \label{eq:gamma0} \langle 0 | J_{\mu} | X_{3/2} \rangle &=& f_X u_\mu (p) \, , \\ \nonumber \langle 0 | J_{\mu\nu} | X_{5/2} \rangle &=& f_X u_{\mu\nu} (p) \, . 
\end{eqnarray} The two-point correlation functions obtained using these currents can be written as: \begin{eqnarray} \label{pi:spin12} \Pi\left(q^2\right) &=& i \int d^4x e^{iq\cdot x} \langle 0 | T\left[J(x) \bar J(0)\right] | 0 \rangle \\ \nonumber &=& (q\!\!\!\slash~ + M_X) \Pi^{1/2}\left(q^2\right) \, , \\ \label{pi:spin32} \Pi_{\mu \nu}\left(q^2\right) &=& i \int d^4x e^{iq\cdot x} \langle 0 | T\left[J_{\mu}(x) \bar J_{\nu}(0)\right] | 0 \rangle \\ \nonumber &=& \left(\frac{q_\mu q_\nu}{q^2}-g_{\mu\nu}\right) (q\!\!\!\slash~ + M_X) \Pi^{3/2}\left(q^2\right) + \cdots \, , \\ \label{pi:spin52} \Pi_{\mu \nu \rho \sigma}\left(q^2\right) &=& i \int d^4x e^{iq\cdot x} \langle 0 | T\left[J_{\mu\nu}(x) \bar J_{\rho\sigma}(0)\right] | 0 \rangle \\ \nonumber &=& \left(g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma} g_{\nu\rho} \right) (q\!\!\!\slash~ + M_X) \Pi^{5/2}\left(q^2\right) + \cdots \, , \end{eqnarray} where $\cdots$ in Eq.~(\ref{pi:spin32}) contains the spin $1/2$ components of $J_{\mu}$, and $\cdots$ in Eq.~(\ref{pi:spin52}) contains the spin $1/2$ and $3/2$ components of $J_{\mu\nu}$. We note that we have assumed that $X$ has the same parity as $J$, and used the non-$\gamma_5$ coupling in Eq.~(\ref{eq:gamma0}). While, we can also use the $\gamma_5$ coupling \begin{eqnarray} \label{eq:gamma51} \langle 0 | J | X^\prime \rangle &=& f_{X^\prime} \gamma_5 u (p) \, , \end{eqnarray} when $X^\prime$ has the opposite parity from $J$. Or we can use the partner of the current $\gamma_5 J$ having the opposite parity \begin{eqnarray} \label{eq:gamma52} \langle 0 | \gamma_5 J | X \rangle &=& f_X \gamma_5 u (p) \, . \end{eqnarray} See also discussions in Refs.~\cite{Chung:1981cc,Jido:1996ia,Kondo:2005ur,Ohtani:2012ps}. These two assumptions both lead to the two-point correlation functions which are similar to Eqs.~(\ref{pi:spin12})--(\ref{pi:spin52}), but with $(q\!\!\!\slash~ + M_{X^{(\prime)}})$ replaced by $(- q\!\!\!\slash~ + M_{X^{(\prime)}})$. 
This difference would tell us the parity of the hadron $X^{(\prime)}$. Technically, in the following analyses we use the terms proportional to $\mathbf{1}$, $\mathbf{1} \times g_{\mu\nu}$ and $\mathbf{1} \times g_{\mu\rho} g_{\nu\sigma}$ to evaluate the mass of $X$, which are then compared with those proportional to $q\!\!\!\slash~$, $q\!\!\!\slash~ \times g_{\mu\nu}$ and $q\!\!\!\slash~ \times g_{\mu\rho} g_{\nu\sigma}$ to determine its parity. We can calculate the two-point correlation functions (\ref{pi:spin12})--(\ref{pi:spin52}) in the QCD operator product expansion (OPE) up to a certain order in the expansion, and then match the result with a hadronic parametrization to extract information about hadron properties. At the hadron level, the correlation function can be written as \begin{equation} \Pi(q^2)={\frac{1}{\pi}}\int^\infty_{s_<}\frac{{\rm Im} \Pi(s)}{s-q^2-i\varepsilon}ds \, , \label{eq:disper} \end{equation} where we have used the form of the dispersion relation, and $s_<$ denotes the physical threshold. The imaginary part of the correlation function is defined as the spectral function, which is usually evaluated at the hadron level by inserting intermediate hadron states $\sum_n|n\rangle\langle n|$ \begin{eqnarray} \nonumber \rho(s) \equiv \frac{1}{\pi}{\rm Im}\Pi(s) &=& \sum_n\delta(s-M^2_n)\langle 0|\eta|n\rangle\langle n|{\eta^\dagger}|0\rangle \\ &=& f_X^2\delta(s-m_X^2)+ \mbox{continuum}\, , \label{eq:rho} \end{eqnarray} where we have adopted the usual parametrization of one-pole dominance for the ground state $X$ and a continuum contribution. The spectral density $\rho(s)$ can also be evaluated at the quark and gluon level via the QCD operator product expansion. After performing the Borel transform at both the hadron and QCD levels, the two-point correlation function can be expressed as \begin{equation} \Pi^{(all)}(M_B^2)\equiv\mathcal{B}_{M_B^2}\Pi(p^2) = \int^\infty_{s_<} e^{-s/M_B^2} \rho(s) ds \, .
\label{eq:borel} \end{equation} Finally, we assume that the contribution from the continuum states can be approximated well by the OPE spectral density above a threshold value $s_0$ (duality), and arrive at the sum rule relation which can be used to perform numerical analyses: \begin{eqnarray} M^2_X(s_0, M_B) &=& {\int^{s_0}_{s_<} e^{-s/M_B^2} \rho(s) s ds \over \int^{s_0}_{s_<} e^{-s/M_B^2} \rho(s) ds} \, . \label{eq:mass} \end{eqnarray} \newcounter{mytempeqncnt1} \begin{figure*}[!hbt] \small \hrulefill \begin{eqnarray} \rho^{[N^* J/\psi]}_{\mu\nu\rho\sigma}(s) &=& \mathbf{1} \times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big) \times \left ( \rho^{pert}_1(s)+\rho^{\langle\bar qq\rangle}_1(s)+\rho^{\langle GG\rangle}_1(s)+\rho^{\langle\bar qq\rangle^2}_1(s)+\rho^{\langle\bar qGq\rangle}_1(s)+\rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_1(s) \right ) \label{ope:eta11} \\ \nonumber &+& q\!\!\!\slash~\times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big) \times \left ( \rho^{pert}_2(s)+\rho^{\langle\bar qq\rangle}_2(s)+\rho^{\langle GG\rangle}_2(s)+\rho^{\langle\bar qq\rangle^2}_2(s)+\rho^{\langle\bar qGq\rangle}_2(s)+\rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_2(s) \right ) + \cdots \, . 
\\ \nonumber \rho^{pert}_1(s) &=& 0 \, , \\ \nonumber \rho^{\langle\bar qq\rangle}_1(s) &=& -\frac{\langle\bar qq\rangle}{49152\pi^6} \int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \Bigg\{ \FF(s)^4 \times \frac{(1-\alpha-\beta)(11 + 11\alpha + 11\beta + 8\alpha^2 + 16\alpha\beta + 8\beta^2)}{\alpha^3\beta^3} \\ \nonumber && - m_c^2\FF(s)^3 \times \frac{4(1-\alpha-\beta)(11 - 7\alpha - 7\beta - 4\alpha^2 - 8\alpha\beta - 4\beta^2)}{\alpha^3\beta^3} \Bigg\} \, , \\ \nonumber \rho^{\langle GG\rangle}_1(s) &=& 0 \, , \\ \nonumber \rho^{\langle\bar qGq\rangle}_1(s) &=& \frac{\langle\bar qg_s\sigma\cdot Gq\rangle}{4096\pi^6} \int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \Bigg\{ \FF(s)^3 \times \frac{\alpha + \beta + 2\alpha^2 + 4\alpha\beta + 2\beta^2}{\alpha^2\beta^2} \\ \nonumber && - m_c^2\FF(s)^2 \times \frac{3(2 - \alpha - \beta - \alpha^2 - 2\alpha\beta - \beta^2)}{\alpha^2\beta^2} \Bigg\} \, , \\ \nonumber \rho^{\langle\bar qq\rangle^2}_1(s)&=& 0 \, , \\ \nonumber \rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_1(s)&=& 0 \, , \\ \nonumber \rho^{pert}_2(s) &=& - \frac{1}{6553600\pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \Bigg\{ \FF(s)^5 \times \frac{(1-\alpha-\beta)^3(57 + 31\alpha + 31\beta + 12\alpha^2 + 24\alpha\beta + 12\beta^2)}{\alpha^4\beta^4} \\ \nonumber && - m_c^2 \FF(s)^4 \times \frac{5(1-\alpha-\beta)^3(19 - 13\alpha - 13\beta - 6\alpha^2 - 12\alpha\beta - 6\beta^2)}{\alpha^4\beta^4} \Bigg\} \, , \\ \nonumber \rho^{\langle\bar qq\rangle}_2(s) &=& 0 \, , \\ \nonumber \rho^{\langle GG\rangle}_2(s)&=&\frac{\langle g_s^2GG\rangle}{23592960 \pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta\Bigg\{\FF(s)^3 \times \Big( \frac{3(1-\alpha-\beta)(11 + 11\alpha + 11\beta - 4\alpha^2 - 8\alpha\beta - 4\beta^2)}{\alpha^2\beta^2} \\ \nonumber && + \frac{(1-\alpha-\beta)^3(83 + 29\alpha + 29\beta - 12\alpha^2 - 
24\alpha\beta - 12\beta^2)}{\alpha^3\beta^3} \Big) \\ \nonumber && + m_c^2 \FF(s)^2 \times \Big( - \frac{3(1-\alpha-\beta)^3(57+31\alpha+31\beta+12\alpha^2+24\alpha\beta+12\beta^2)}{\alpha\beta^4} \\ \nonumber && - \frac{9(1-\alpha-\beta)(11-13\alpha-13\beta+2\alpha^2+4\alpha\beta+2\beta^2)}{\alpha^2\beta^2} + \frac{9(1-\alpha-\beta)^4(19+6\alpha+6\beta)}{\alpha^2\beta^4} \\ \nonumber && - \frac{3(1-\alpha-\beta)^3(57+31\alpha+31\beta+12\alpha^2+24\alpha\beta+12\beta^2)}{\alpha^4\beta} + \frac{9(1-\alpha-\beta)^4(19+6\alpha+6\beta)}{\alpha^4\beta^2} \Big) \\ \nonumber && + m_c^4 \FF(s) \times \Big( \frac{6(1-\alpha-\beta)^3(19-13\alpha-13\beta-6\alpha^2-12\alpha\beta-6\beta^2)}{\alpha\beta^4} \\ \nonumber && + \frac{6(1-\alpha-\beta)^3(19-13\alpha-13\beta-6\alpha^2-12\alpha\beta-6\beta^2)}{\alpha^4\beta} \Big) \Bigg\}\, , \\ \nonumber \rho^{\langle\bar qGq\rangle}_2(s) &=& 0 \, , \\ \nonumber \rho^{\langle\bar qq\rangle^2}_2(s)&=& -\frac{\langle\bar qq\rangle^2}{512\pi^4} \int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \Bigg\{ \FF(s)^2 \times \frac{5(\alpha + \beta)}{\alpha\beta} - m_c^2 \FF(s) \times \frac{10(1 - \alpha - \beta)}{\alpha\beta} \Bigg\} \, , \\ \nonumber \rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_2(s)&=& \frac{11\langle\bar qq\rangle\langle\bar qg_s\sigma\cdot Gq\rangle}{1024\pi^4} \times \int^{\alpha_{max}}_{\alpha_{min}}d\alpha \Bigg\{ \HH(s) - \int^{\beta_{max}}_{\beta_{min}}d\beta \Big (\FF(s)+ m_c^2\Big) \Bigg\} \, . 
\end{eqnarray} \hrulefill \vspace*{4pt} \end{figure*} \newcounter{mytempeqncnt2} \begin{figure*}[!hbt] \small \hrulefill \begin{eqnarray} \rho^{[\Sigma_c \bar D^*]}_{\mu\nu}(s) &=& \mathbf{1} \times g_{\mu\nu} \times \left ( \rho^{pert}_3(s)+\rho^{\langle\bar qq\rangle}_3(s)+\rho^{\langle GG\rangle}_3(s)+\rho^{\langle\bar qq\rangle^2}_3(s)+\rho^{\langle\bar qGq\rangle}_3(s)+\rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_3(s) \right ) \label{ope:psi9} \\ \nonumber &+& q\!\!\!\slash~\times g_{\mu\nu} \times \left ( \rho^{pert}_4(s)+\rho^{\langle\bar qq\rangle}_4(s)+\rho^{\langle GG\rangle}_4(s)+\rho^{\langle\bar qq\rangle^2}_4(s)+\rho^{\langle\bar qGq\rangle}_4(s)+\rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_4(s) \right ) + \cdots \, . \\ \nonumber \rho^{pert}_3(s)&=&\frac{m_c}{163840\pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^5 \times \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{\alpha^5\beta^4}\, , \\ \nonumber \rho^{\langle\bar qq\rangle}_3(s)&=&-\frac{m_c^2\langle\bar qq\rangle}{512\pi^6}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^3 \times \frac{(1-\alpha-\beta)^2}{\alpha^3\beta^3}\, , \\ \nonumber \rho^{\langle GG\rangle}_3(s)&=&\frac{m_c\langle g_s^2GG\rangle}{16384\pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta\Bigg\{\FF(s)^3 \times \Big( - \frac{(1-\alpha-\beta)(1+\alpha+\beta)}{2\alpha^3\beta^2} \\ \nonumber && - \frac{(1-\alpha-\beta)^2(4-\alpha-\beta)}{9\alpha^3\beta^3} + \frac{(1-\alpha-\beta)^2(2+\alpha+\beta)}{3\alpha^4\beta^2} + \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{12\alpha^5\beta^2} \Big) \\ \nonumber && + m_c^2 \FF(s)^2 \times \Big( \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{12\alpha^2\beta^4} + \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{12\alpha^5\beta} \Big) \Bigg\}\, , \\ \nonumber \rho^{\langle\bar qGq\rangle}_3(s)&=&\frac{m_c^2\langle\bar qg_s\sigma\cdot 
Gq\rangle}{1024\pi^6}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^2 \times \frac{3(1-\alpha-\beta)}{\alpha^2\beta^2}\, , \\ \nonumber \rho^{\langle\bar qq\rangle^2}_3(s)&=& -\frac{m_c \langle\bar qq\rangle^2}{32\pi^4} \int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^2 \times \frac{(\alpha+\beta)}{\alpha^2\beta}\, , \\ \nonumber \rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_3(s)&=& \frac{m_c\langle\bar qq\rangle\langle\bar qg_s\sigma\cdot Gq\rangle}{64\pi^4} \times \int^{\alpha_{max}}_{\alpha_{min}}d\alpha \Bigg\{ \HH(s) \times \frac{2}{\alpha} - \int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s) \times \frac{(3\alpha+\beta)}{\alpha^2} \Bigg\} \, , \\ \nonumber \rho^{pert}_4(s)&=&\frac{1}{81920\pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^5 \times \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{\alpha^4\beta^4}\, , \\ \nonumber \rho^{\langle\bar qq\rangle}_4(s)&=&-\frac{m_c\langle\bar qq\rangle}{256\pi^6}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^3 \times \frac{(1-\alpha-\beta)^2}{\alpha^2\beta^3}\, , \\ \nonumber \rho^{\langle GG\rangle}_4(s)&=&\frac{\langle g_s^2GG\rangle}{8192\pi^8}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta\Bigg\{ \FF(s)^3 \times \Big( - \frac{(1-\alpha-\beta)^2(4-\alpha-\beta)}{9\alpha^2\beta^3} + \frac{(1-\alpha-\beta)^2(2+\alpha+\beta)}{6\alpha^3\beta^2} \Big) \\ \nonumber && + m_c^2 \FF(s)^2 \times \Big( \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{12\alpha\beta^4} + \frac{(1-\alpha-\beta)^3(3+\alpha+\beta)}{12\alpha^4\beta} \Big) \Bigg\}\, , \\ \nonumber \rho^{\langle\bar qGq\rangle}_4(s)&=&\frac{m_c\langle\bar qg_s\sigma\cdot Gq\rangle}{512\pi^6}\int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^2 \times \frac{3(1-\alpha-\beta)}{\alpha\beta^2}\, , \\ \nonumber \rho^{\langle\bar qq\rangle^2}_4(s)&=& 
-\frac{\langle\bar qq\rangle^2}{64\pi^4} \int^{\alpha_{max}}_{\alpha_{min}}d\alpha\int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s)^2 \times \frac{(\alpha+\beta)}{\alpha\beta}\, , \\ \nonumber \rho^{\langle\bar qq\rangle\langle\bar qGq\rangle}_4(s)&=& \frac{\langle\bar qq\rangle\langle\bar qg_s\sigma\cdot Gq\rangle}{128\pi^4} \times \int^{\alpha_{max}}_{\alpha_{min}}d\alpha \Bigg\{ 2 \HH(s) - \int^{\beta_{max}}_{\beta_{min}}d\beta \FF(s) \times \frac{(3\alpha+\beta)}{\alpha} \Bigg\} \, . \end{eqnarray} \hrulefill \vspace*{4pt} \end{figure*} In this paper we evaluate the QCD spectral density $\rho(s)$ at leading order in $\alpha_s$ and up to dimension eight, including the perturbative term, the quark condensate $\langle \bar q q \rangle$, the gluon condensate $\langle g_s^2 GG \rangle$, the quark-gluon mixed condensate $\langle g_s \bar q \sigma G q \rangle$, and their combinations $\langle \bar q q \rangle^2$ and $\langle \bar q q \rangle\langle g_s \bar q \sigma G q \rangle$. These spectral densities are too lengthy to be shown here, so we list them in the supplementary file ``OPE.nb''. In the calculations we ignore the chirally suppressed terms with the light quark mass and adopt the factorization assumption of vacuum saturation for higher dimensional condensates ($D=6$ and $D=8$). We shall find that the $D=3$ quark condensate $\langle\bar qq\rangle$ and the $D=5$ mixed condensate $\langle\bar qg_s\sigma\cdot Gq\rangle$ are both multiplied by the charm quark mass $m_c$, so that they provide important power corrections to the correlation functions. To illustrate our numerical analysis, we use the current $\eta_{11\mu\nu}$ defined in Eq.~(\ref{def:eta11munu}) as an example. It has the quantum number $J^P = 5/2^+$ and couples to the $[N^* J/\psi]$ channel (or the $P$-wave $[p J/\psi]$ channel).
Its spectral density $\rho^{[N^* J/\psi]}_{\mu\nu\rho\sigma}(s)$ is listed in Eq.~(\ref{ope:eta11}), where $m_c$ is the heavy quark mass, and the integration limits are $\alpha_{min}=\frac{1-\sqrt{1-4m_c^2/s}}{2}$, $\alpha_{max}=\frac{1+\sqrt{1-4m_c^2/s}}{2}$, $\beta_{min}=\frac{\alpha m_c^2}{\alpha s-m_c^2}$, and $\beta_{max}=1-\alpha$. We only list the terms proportional to $\mathbf{1} \times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big)$ and $q\!\!\!\slash~ \times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big)$, and $\cdots$ denotes other Lorentz structures, such as $\mathbf{1} \times g_{\mu\rho} \sigma_{\nu\sigma}$, etc. We find that the results are not useful, because many terms vanish in this spectral density: its $q\!\!\!\slash~ \times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big)$ part contains only the perturbative term, $\langle g_s^2 GG \rangle$, $\langle \bar q q \rangle^2$ and $\langle \bar q q \rangle\langle g_s \bar q \sigma G q \rangle$, while its $\mathbf{1} \times \big( g_{\mu\rho}g_{\nu\sigma} + g_{\mu\sigma}g_{\nu\rho} \big)$ part contains only $\langle \bar q q \rangle$ and $\langle g_s \bar q \sigma G q \rangle$. This results in poor OPE convergence and leads to unreliable results. Moreover, the parity cannot be determined because these two parts are quite different. We shall not use such currents, from which the parity cannot be determined, to perform QCD sum rule analyses. We use the current $\psi_{9\mu}$ defined in Eq.~(\ref{def:psi9mu}) as another example. It has the quantum number $J^P = 3/2^-$ and couples to the $[\Sigma_c \bar D^*]$ channel. Its spectral density $\rho^{[\Sigma_c \bar D^*]}_{\mu\nu}(s)$ is listed in Eq.~(\ref{ope:psi9}). We find that the terms proportional to $\mathbf{1} \times g_{\mu\nu}$ are almost the same as those proportional to $q\!\!\!\slash~\times g_{\mu\nu}$. Hence, the parity of $X$ can be well determined to be negative, the same as that of $\psi_{9\mu}$.
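For readers who wish to reproduce the numerics, the two-fold $\alpha$-$\beta$ integrals appearing in the spectral densities above can be evaluated by direct quadrature. The following Python sketch is our own illustration, not the code used for the paper; in particular, we take the shorthand $\FF(s)$ to stand for the combination $(\alpha+\beta)m_c^2-\alpha\beta s$, an assumption on our part that is at least consistent with its vanishing at $\beta=\beta_{min}$:

```python
import math

def alpha_limits(s, mc):
    # alpha_min/max = (1 -/+ sqrt(1 - 4 mc^2/s)) / 2, valid for s > 4 mc^2
    r = math.sqrt(1.0 - 4.0 * mc**2 / s)
    return 0.5 * (1.0 - r), 0.5 * (1.0 + r)

def beta_limits(s, mc, alpha):
    # beta_min = alpha mc^2 / (alpha s - mc^2), beta_max = 1 - alpha
    return alpha * mc**2 / (alpha * s - mc**2), 1.0 - alpha

def F(s, mc, alpha, beta):
    # assumed meaning of the shorthand \FF(s); it vanishes at beta = beta_min
    return (alpha + beta) * mc**2 - alpha * beta * s

def ope_integral(integrand, s, mc, n=100):
    """Midpoint rule for int dalpha int dbeta integrand(s, alpha, beta)
    over the physical region bounded by the limits above."""
    amin, amax = alpha_limits(s, mc)
    da = (amax - amin) / n
    total = 0.0
    for i in range(n):
        a = amin + (i + 0.5) * da
        bmin, bmax = beta_limits(s, mc, a)
        if bmax <= bmin:
            continue
        db = (bmax - bmin) / n
        for j in range(n):
            b = bmin + (j + 0.5) * db
            total += integrand(s, a, b) * db * da
    return total

# hypothetical example weight, with the typical structure F^5 (1-a-b)^3 / (ab)^4
mc = 1.23
weight = lambda s, a, b: F(s, mc, a, b)**5 * (1 - a - b)**3 / (a * b)**4
value = ope_integral(weight, 21.0, mc)
```

The integrand stays finite because $\alpha$ and $\beta$ are bounded away from zero inside the physical region; a production calculation would of course use an adaptive integrator instead of the fixed midpoint grid.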
In the next section we shall use the terms proportional to $\mathbf{1} \times g_{\mu\nu}$ to evaluate the mass of $X$. \section{Numerical Analysis} \label{sec:numerical} In this section we continue to use the current $\psi_{9\mu}$ as an example to perform the numerical analysis. We use the following values of the quark masses and various condensates in our analysis~\cite{Yang:1993bp,Agashe:2014kda,Eidemuller:2000rc,Narison:2002pw,Gimenez:2005nt,Jamin:2002ev,Ioffe:2002be,Ovchinnikov:1988gk,colangelo}: \begin{eqnarray} \nonumber && \langle \bar qq \rangle = - (0.24 \pm 0.01)^3 \mbox{ GeV}^3 \, , \\ \nonumber &&\langle g_s^2GG\rangle =(0.48 \pm 0.14) \mbox{ GeV}^4\, , \\ \label{paramaters} && \langle g_s \bar q \sigma G q \rangle = M_0^2 \times \langle \bar qq \rangle\, , \\ \nonumber && M_0^2 = - 0.8 \mbox{ GeV}^2\, , \\ \nonumber && m_c = 1.23 \pm 0.09 \mbox{ GeV} \, , \end{eqnarray} where the running mass in the $\overline{MS}$ scheme is used for the charm quark. \begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{CVGpsi9.pdf}} \scalebox{0.575}{\includegraphics{condensate.pdf}} \caption{In the left panel we show CVG, defined in Eq.~(\ref{eq_convergence}), as a function of the Borel mass $M_B$. In the right panel we show the relative contribution of each term in the OPE series, as a function of the Borel mass $M_B$. The current $\psi_{9\mu}$ of $J^P = 3/2^-$ ($J^{\bar D^* \Sigma_c}_\mu$ in Ref.~\cite{Chen:2015moa}) is used here.} \label{fig:cvg} \end{center} \end{figure*} There are two free parameters in Eq.~(\ref{eq:mass}): the Borel mass $M_B$ and the threshold value $s_0$. We use two criteria to constrain the Borel mass $M_B$.
To ensure the convergence of the OPE series, the first criterion requires that the dimension-eight term contribute less than 10\%, which determines the lower limit $M_B^{min}$: \begin{equation} \label{eq_convergence} \mbox{Convergence (CVG)} \equiv \left|\frac{ \Pi_{\langle \bar q q \rangle\langle g_s \bar q \sigma G q \rangle}(\infty, M_B) }{ \Pi(\infty, M_B) }\right| \leq 10\% \, . \end{equation} We show this function obtained using $\psi_{9\mu}$ in the left panel of Fig.~\ref{fig:cvg}. We find that the OPE convergence improves with the increase of $M_B$. This criterion yields the lower limit $M_B^2 \geq 3.94$ GeV$^2$. The convergence is even more convincing if there is a clear trend, with the higher-order terms giving progressively smaller contributions. Accordingly, we also show the relative contribution of each term in the OPE series in the right panel of Fig.~\ref{fig:cvg}. We find that a good convergence can be achieved in the same region $M_B^2 \geq 3.94$ GeV$^2$. Nevertheless, we shall use the previous criterion, which is easier to apply, to determine the lower limit of the Borel mass. \begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{Polepsi9.pdf}} \caption{The variation of PC, defined in Eq.~(\ref{eq_pole}), as a function of the Borel mass $M_B$. The current $\psi_{9\mu}$ of $J^P = 3/2^-$ ($J^{\bar D^* \Sigma_c}_\mu$ in Ref.~\cite{Chen:2015moa}) is used here and the threshold value is chosen to be $s_0$ = 21 GeV$^2$.} \label{fig:pole} \end{center} \end{figure*} To ensure that the one-pole parametrization in Eq.~(\ref{eq:rho}) is valid, the second criterion requires that the pole contribution (PC) be larger than 10\%, which determines the upper limit on $M_B^2$: \begin{equation} \label{eq_pole} \mbox{PC} \equiv \frac{ \Pi(s_0, M_B) }{ \Pi(\infty, M_B) } \geq 10\% \, .
\end{equation} We show the variation of the pole contribution obtained using $\psi_{9\mu}$ in Fig.~\ref{fig:pole}, with respect to the Borel mass $M_B$ and with $s_0$ chosen to be 21 GeV$^2$. We find that the PC decreases with the increase of $M_B$. This criterion yields the upper limit $M_B^2 \leq 4.27$ GeV$^2$. Together we obtain the working region of the Borel mass, $3.94$ GeV$^2< M_B^2 < 4.27$ GeV$^2$, for the current $\psi_{9\mu}$ with the continuum threshold $s_0 = 21$ GeV$^2$. The most important drawback of this current, as well as of the other pentaquark currents used in this paper, is that their pole contributions are small. One reason is that the poles of the two $P_c$ states are both mixed with the $J/\psi p$ continuum, so the continuum contribution may not be well suppressed by the Borel transformation. Another reason is the large powers of $s$ in the spectral function; see other sum rule analyses for the six-quark state $d^*(2380)$~\cite{Chen:2014vha} and the $F$-wave heavy mesons~\cite{Zhou:2015ywa}. In any case, the Borel mass $M_B$ is just one of the two free parameters. We should also pay attention to the other one, the threshold value $s_0$, and try to find a balance between them. We could increase $s_0$ to achieve a large enough pole contribution (although the obtained mass would then also increase and become less reasonable), but there is a more important criterion, namely the $s_0$ stability. We note that in Ref.~\cite{Narison:1996fm} the author used the requirement on $M_B$ (called the $\tau$ stability) to extract the upper bound on the $0^{++}$ glueball mass, and used the requirement on $s_0$ (called the $t_c$ stability) to evaluate its optimal mass. In this paper we shall use a similar requirement on $s_0$, as discussed in the following.
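Before moving on, the interplay of the two Borel-mass criteria above can be made concrete with a small numerical sketch. The toy model below is our own construction (not the actual OPE of $\psi_{9\mu}$): it replaces the spectral density by a perturbative-like $s^4$ term plus a constant standing in for the highest-dimension correction, so that CVG decreases with $M_B$ while PC also decreases, and the two criteria bound a finite Borel window:

```python
import math

def moment(n, s_max, mb2, s_min=6.0):
    """int_{s_min}^{s_max} e^{-s/MB^2} s^n ds, by the midpoint rule."""
    N = 4000
    ds = (s_max - s_min) / N
    acc = 0.0
    for k in range(N):
        s = s_min + (k + 0.5) * ds
        acc += math.exp(-s / mb2) * s ** n
    return acc * ds

# Toy OPE (hypothetical numbers): a perturbative s^4 term plus a constant
# playing the role of the highest-dimension (D = 8) term in the CVG criterion.
PERT, D8 = 1.0, 5.0e3
S_INF = 60.0  # stands in for the s -> infinity upper limit

def cvg(mb2):
    """CVG criterion: fraction of the Borel-transformed correlator
    coming from the highest-dimension term."""
    high = D8 * moment(0, S_INF, mb2)
    total = PERT * moment(4, S_INF, mb2) + high
    return abs(high / total)

def pole_contribution(s0, mb2):
    """PC criterion: Pi(s0, MB) / Pi(infinity, MB)."""
    num = PERT * moment(4, s0, mb2) + D8 * moment(0, s0, mb2)
    den = PERT * moment(4, S_INF, mb2) + D8 * moment(0, S_INF, mb2)
    return num / den
```

With these toy inputs, `cvg` falls below 10\% only above some $M_B^2$ while `pole_contribution` keeps shrinking as $M_B^2$ grows, which is exactly the mechanism that closes the Borel window from both sides.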
\begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{masss0psi9.pdf}} \scalebox{0.6}{\includegraphics{massmbpsi9.pdf}} \caption{The variation of $M_{[\Sigma_c \bar D^*],3/2^-}$ with respect to the threshold value $s_0$ (left) and the Borel mass $M_B$ (right), calculated using the current $\psi_{9\mu}$ ($J^{\bar D^* \Sigma_c}_\mu$ in Ref.~\cite{Chen:2015moa}) of $J^P = 3/2^-$. In the left figure, the long-dashed, solid and short-dashed curves are obtained by fixing $M_B^2 = 3.9$, $4.1$ and $4.3$ GeV$^2$, respectively. In the right figure, the long-dashed, solid and short-dashed curves are obtained for $s_0 = 19$, $21$ and $23$ GeV$^2$, respectively.} \label{fig:psi9} \end{center} \end{figure*} To determine $s_0$, we require that both the $s_0$ dependence and the $M_B$ dependence of the mass prediction be as weak as possible, in order to obtain a reliable result. We show the variation of $M_X$ with respect to the threshold value $s_0$ in the left panel of Fig.~\ref{fig:psi9}, in a large region 15 GeV$^2 < s_0 < 25$ GeV$^2$. We find that the mass curves have a minimum against $s_0$ when $s_0$ is around 18 GeV$^2$. Hence, the $s_0$ dependence of the mass prediction is the weakest at this point. However, this value of $s_0$ is too small to give a reasonable working region of $M_B$; a working region can be obtained as long as $s_0>19$ GeV$^2$. Consequently, we choose the region $19$ GeV$^2\leq s_0\leq 23$ GeV$^2$ as our working region, where the $s_0$ dependence is still weak and the mass curves are flat enough. Hence, our working regions for the current $\psi_{9\mu}$ are $19$ GeV$^2\leq s_0\leq 23$ GeV$^2$ and $3.94$ GeV$^2\leq M_B^2 \leq 4.27$ GeV$^2$, where the following numerical results can be obtained~\cite{Chen:2015moa}: \begin{eqnarray} M_{[\Sigma_c \bar D^*]} = 4.37^{+0.18}_{-0.13} \mbox{ GeV} \, .
\end{eqnarray} Here the central value corresponds to $M_B^2=4.10$ GeV$^2$ and $s_0 = 21$ GeV$^2$, and the uncertainty comes from the Borel mass $M_B$, the threshold value $s_0$, the charm quark mass and the various condensates~\cite{Chen:2015ata}. We also show the variation of $M_X$ with respect to the Borel mass $M_B$ in the right panel of Fig.~\ref{fig:psi9}, in a broader region $2.5$ GeV$^2\leq M_B^2 \leq 5.0$ GeV$^2$. We find that these curves are more stable inside the Borel window $3.94$ GeV$^2\leq M_B^2 \leq 4.27$ GeV$^2$. We note that the threshold value used here, $\sqrt{s_0} \approx 4.58$ GeV, is not far from the obtained mass of 4.37 GeV (but still acceptable), indicating that it is not easy to separate the pole from the continuum. As found in Sec.~\ref{sec:sumrule}, for the current $\psi_{9\mu}$ the terms proportional to $q\!\!\!\slash~\times g_{\mu\nu}$ are quite similar to those proportional to $\mathbf{1} \times g_{\mu\nu}$, suggesting that $X$ has the same parity as $\psi_{9\mu}$, that is, negative~\cite{Chen:2015moa}: \begin{eqnarray} M_{[\Sigma_c \bar D^*],3/2^-} = 4.37^{+0.18}_{-0.13} \mbox{ GeV} \, . \label{Pc4380} \end{eqnarray} This value is consistent with the experimental mass of the $P_c(4380)$~\cite{lhcb}, and supports it as a $[\Sigma_c \bar D^*]$ hidden-charm pentaquark with the quantum number $J^P=3/2^-$.
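The master formula, Eq.~(\ref{eq:mass}), is simple enough to check numerically. The Python sketch below is a toy cross-check rather than the full OPE evaluation: it replaces the pole term $f_X^2\,\delta(s-m_X^2)$ of Eq.~(\ref{eq:rho}) by a narrow Gaussian and verifies that the ratio of Borel-weighted moments reproduces the input mass:

```python
import math

def extracted_mass2(rho, s_lo, s_hi, mb2, n=4000):
    """Ratio of Borel-weighted moments, as in the master mass formula:
    int e^{-s/MB^2} rho(s) s ds / int e^{-s/MB^2} rho(s) ds (midpoint rule)."""
    ds = (s_hi - s_lo) / n
    num = den = 0.0
    for k in range(n):
        s = s_lo + (k + 0.5) * ds
        w = math.exp(-s / mb2) * rho(s)
        num += w * s
        den += w
    return num / den

# a narrow Gaussian standing in for the pole term f_X^2 delta(s - m_X^2)
m_x2 = 4.37 ** 2
rho_pole = lambda s: math.exp(-(s - m_x2) ** 2 / (2.0 * 0.01 ** 2))

# Borel mass MB^2 = 4.1 GeV^2, as in the central value above
mass = math.sqrt(extracted_mass2(rho_pole, m_x2 - 1.0, m_x2 + 1.0, 4.1))
```

For a narrow enough pole the Borel weight barely shifts the extracted mass away from the input 4.37 GeV, which is the consistency one expects from the one-pole parametrization.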
\section{Results and discussions} \label{sec:summary} \renewcommand{\arraystretch}{1.5} \begin{table*}[hbtp] \begin{center} \caption{Numerical results for the spin $J=1/2$ hidden-charm pentaquark states.} \begin{tabular}{ccc|cc|cc} \toprule[1pt]\toprule[1pt] ~~\mbox{Current}~~ & ~~\mbox{Defined in}~~ & ~~\mbox{Structure}~~ & \mbox{$s_0$ [GeV$^2$]} & \mbox{Borel Mass [GeV$^2$]} & ~~\mbox{Mass [GeV]}~~ & ~~\mbox{($J$, $P$)}~~ \\ \midrule[1pt] $\eta_2 - \eta_4$ & Eq.~(\ref{def:eta24}) & $[p \eta_c]$ & -- & -- & -- & -- \\ $\eta_5 - \eta_7$ & Eq.~(\ref{def:eta57}) & $[p J/\psi]$ & -- & -- & -- & -- \\ $\eta_{13}$ & Eq.~(\ref{def:eta13}) & $[N^* J/\psi]$ & -- & -- & -- & -- \\ \midrule[1pt] $\xi_2 - \xi_4$ & Eq.~(\ref{def:xi24}) & $[\Lambda_c \bar D]$ & -- & -- & -- & -- \\ $\xi_5 - \xi_7$ & Eq.~(\ref{def:xi57}) & $[\Lambda_c \bar D^*]$ & -- & -- & -- & -- \\ $\xi_{14}$ & Eq.~(\ref{def:xi14}) & $[\Sigma_c \bar D]$ & $20 - 24$ & $4.12 - 4.52$ & $4.45^{+0.17}_{-0.13}$ & ($1/2,-$) \\ $\xi_{16}$ & Eq.~(\ref{def:xi16}) & $[\Lambda_c^* \bar D]$ & $25 - 29$ & $4.40 - 4.76$ & $4.86^{+0.16}_{-0.19}$ & ($1/2,+$) \\ $\xi_{17}$ & Eq.~(\ref{def:xi17}) & $[\Sigma_c^* \bar D^*]$ & $22 - 26$ & $3.64 - 4.25$ & $4.73^{+0.19}_{-0.12}$ & ($1/2,-$) \\ $\xi_{19}$ & Eq.~(\ref{def:xi19}) & $[\Lambda_c^* \bar D^*]$ & $23 - 27$ & $3.70 - 4.22$ & $4.67^{+0.16}_{-0.20}$ & ($1/2,+$) \\ \midrule[1pt] $\psi_2$ & Eq.~(\ref{def:psi2}) & $[\Sigma_c^* \bar D]$ & $19 - 23$ & $3.95 - 4.47$ & $4.33^{+0.17}_{-0.13}$ & ($1/2,-$) \\ $\psi_3$ & Eq.~(\ref{def:psi3}) & $[\Sigma_c^* \bar D^*]$ & $21 - 25$ & $3.50 - 4.11$ & $4.59^{+0.17}_{-0.12}$ & ($1/2,-$) \\ \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \label{tab:spin12} \end{center} \end{table*} \renewcommand{\arraystretch}{1.5} \begin{table*}[hbtp] \begin{center} \caption{Numerical results for the spin $J=3/2$ hidden-charm pentaquark states. 
$\psi_{9\mu}$ is denoted as $J^{\bar D^* \Sigma_c}_\mu$ in Ref.~\cite{Chen:2015moa}.} \begin{tabular}{ccc|cc|cc} \toprule[1pt]\toprule[1pt] ~~\mbox{Current}~~ & ~~\mbox{Defined in}~~ & ~~\mbox{Structure}~~ & \mbox{$s_0$ [GeV$^2$]} & \mbox{Borel Mass [GeV$^2$]} & ~~\mbox{Mass [GeV]}~~ & ~~\mbox{($J$, $P$)}~~ \\ \midrule[1pt] $\eta_{5\mu} - \eta_{7\mu}$ & Eq.~(\ref{def:eta57mu}) & $[p J/\psi]$ & -- & -- & -- & -- \\ $\eta_{18\mu}$ & Eq.~(\ref{def:eta18mu}) & $[N^* \eta_c]$ & -- & -- & -- & -- \\ $\eta_{19\mu}$ & Eq.~(\ref{def:eta19mu}) & $[N^* J/\psi]$ & -- & -- & -- & -- \\ \midrule[1pt] $\xi_{5\mu} - \xi_{7\mu}$ & Eq.~(\ref{def:xi57mu}) & $[\Lambda_c \bar D^*]$ & -- & -- & -- & -- \\ $\xi_{18\mu}$ & Eq.~(\ref{def:xi18mu}) & $[\Sigma_c^* \bar D]$ & $21 - 25$ & $3.93 - 4.51$ & $4.56^{+0.16}_{-0.13}$ & ($3/2,-$) \\ $\xi_{20\mu}$ & Eq.~(\ref{def:xi20mu}) & $[\Lambda_c^* \bar D]$ & $23 - 27$ & $4.12 - 4.63$ & $4.56^{+0.18}_{-0.22}$ & ($3/2,+$) \\ $\xi_{25\mu}$ & Eq.~(\ref{def:xi25mu}) & $[\Sigma_c^* \bar D^*]$ & $21 - 25$ & $3.85 - 4.30$ & $4.67^{+0.21}_{-0.12}$ & ($3/2,-$) \\ $\xi_{27\mu}$ & Eq.~(\ref{def:xi27mu}) & $[\Lambda_c^* \bar D^*]$ & $23 - 27$ & $4.07 - 4.50$ & $4.68^{+0.15}_{-0.18}$ & ($3/2,+$) \\ $\xi_{33\mu}$ & Eq.~(\ref{def:xi33mu}) & $[\Sigma_c \bar D^*]$ & $20 - 24$ & $3.97 - 4.41$ & $4.46^{+0.18}_{-0.13}$ & ($3/2,-$) \\ $\xi_{35\mu}$ & Eq.~(\ref{def:xi35mu}) & $[\Lambda_c \bar D^*]$ & $27 - 31$ & $4.32 - 5.11$ & $5.18^{+0.16}_{-0.12}$ & ($3/2,+$) \\ \midrule[1pt] $\psi_{2\mu}$ & Eq.~(\ref{def:psi2mu}) & $[\Sigma_c^* \bar D]$ & $20 - 24$ & $3.88 - 4.41$ & $4.45^{+0.16}_{-0.13}$ & ($3/2,-$) \\ $\psi_{5\mu}$ & Eq.~(\ref{def:psi5mu}) & $[\Sigma_c^* \bar D^*]$ & $21 - 25$ & $3.86 - 4.46$ & $4.61^{+0.18}_{-0.12}$ & ($3/2,-$) \\ $\psi_{9\mu}$ & Eq.~(\ref{def:psi9mu}) & $[\Sigma_c \bar D^*]$ & $19 - 23$ & $3.94 - 4.27$ & $4.37^{+0.18}_{-0.13}$ & ($3/2,-$) \\ \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \label{tab:spin32} \end{center} \end{table*} 
\renewcommand{\arraystretch}{1.5} \begin{table*}[hbtp] \begin{center} \caption{Numerical results for the spin $J=5/2$ hidden-charm pentaquark states. $\xi_{15\mu\nu}$, $\psi_{4\mu\nu}$ and $J^{\rm mix}_{\mu\nu}$ are denoted as $J^{\bar D^* \Lambda_c}_{\{\mu\nu\}}$, $J^{\bar D \Sigma_c^*}_{\{\mu\nu\}}$ and $J^{\bar D \Sigma_c^*\&\bar D^* \Lambda_c}_{\{\mu\nu\}}$ in Ref.~\cite{Chen:2015moa}, respectively.} \begin{tabular}{ccc|cc|cc} \toprule[1pt]\toprule[1pt] ~~\mbox{Current}~~ & ~~\mbox{Defined in}~~ & ~~\mbox{Structure}~~ & \mbox{$s_0$ [GeV$^2$]} & \mbox{Borel Mass [GeV$^2$]} & ~~\mbox{Mass [GeV]}~~ & ~~\mbox{($J$, $P$)}~~ \\ \midrule[1pt] $\eta_{11\mu\nu}$ & Eq.~(\ref{def:eta11munu}) & $[N^* J/\psi]$ & -- & -- & -- & -- \\ \midrule[1pt] $\xi_{13\mu\nu}$ & Eq.~(\ref{def:xi13munu}) & $[\Sigma_c^* \bar D^*]$ & $20 - 24$ & $3.51 - 4.00$ & $4.50^{+0.18}_{-0.12}$ & ($5/2,-$) \\ $\xi_{15\mu\nu}$ & Eq.~(\ref{def:xi15munu}) & $[\Lambda_c^* \bar D^*]$ & $24 - 28$ & $4.09 - 4.59$ & $4.76^{+0.15}_{-0.19}$ & ($5/2,+$) \\ \midrule[1pt] $\psi_{3\mu\nu}$ & Eq.~(\ref{def:psi3munu}) & $[\Sigma_c^* \bar D^*]$ & $21 - 25$ & $3.88 - 4.40$ & $4.59^{+0.17}_{-0.12}$ & ($5/2,-$) \\ \midrule[1pt] $\psi_{4\mu\nu}$ & Eq.~(\ref{def:psi4munu}) & $P$-wave $[\Sigma_c^* \bar D]$ & $25 - 29$ & $4.30 - 4.73$ & $4.82^{+0.15}_{-0.14}$ & ($5/2,+$) \\ \midrule[1pt] $J^{\rm mix}_{\mu\nu}$ & Eq.~(\ref{def:mix}) & $P$-wave $[\Lambda_c \bar D^* \& \Sigma_c^* \bar D]$ & $20 - 24$ & $3.22 - 3.50$ & $4.47^{+0.18}_{-0.13}$ & ($5/2,+$) \\ \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \label{tab:spin52} \end{center} \end{table*} We use the currents selected in Sec.~\ref{sec:current} and in Appendixes~\ref{app:spin32} and \ref{app:spin52} to perform QCD sum rule analyses. Some of them lead to the OPE series from which the parity can be well determined. We further use these currents to perform numerical analyses. 
The masses obtained using the $J^P=1/2^+$ currents $\xi_{14}$, $\xi_{16}$, $\xi_{17}$, $\xi_{19}$, $\psi_2$, and $\psi_3$ are listed in Table~\ref{tab:spin12}, those obtained using the $J^P=3/2^-$ currents $\xi_{18\mu}$, $\xi_{20\mu}$, $\xi_{25\mu}$, $\xi_{27\mu}$, $\xi_{33\mu}$, $\xi_{35\mu}$, $\psi_{2\mu}$, $\psi_{5\mu}$, and $\psi_{9\mu}$ are listed in Table~\ref{tab:spin32}, and those obtained using the $J^P=5/2^+$ currents $\xi_{13\mu\nu}$, $\xi_{15\mu\nu}$, and $\psi_{3\mu\nu}$ are listed in Table~\ref{tab:spin52}. \begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{masss0xi15.pdf}} \scalebox{0.6}{\includegraphics{massmbxi15.pdf}} \caption{The variation of $M_{[\Lambda_c^* \bar D^*],5/2^+}$ with respect to the threshold value $s_0$ (left) and the Borel mass $M_B$ (right), calculated using the current $\xi_{15\mu\nu}$ ($J^{\bar D^* \Lambda_c}_{\{\mu\nu\}}$ in Ref.~\cite{Chen:2015moa}) of $J^P = 5/2^+$. In the left figure, the long-dashed, solid and short-dashed curves are obtained by fixing $M_B^2 = 4.0$, $4.3$ and $4.6$ GeV$^2$, respectively. In the right figure, the long-dashed, solid and short-dashed curves are obtained for $s_0 = 24$, $26$ and $28$ GeV$^2$, respectively.} \label{fig:xi15} \end{center} \end{figure*} \begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{masss0psi4.pdf}} \scalebox{0.6}{\includegraphics{massmbpsi4.pdf}} \caption{The variations of $M_{[\Sigma_c^* \bar D],5/2^+}$ with respect to the threshold value $s_0$ (left) and the Borel mass $M_B$ (right), calculated using the current $\psi_{4\mu\nu}$ ($J^{\bar D \Sigma_c^*}_{\{\mu\nu\}}$ in Ref.~\cite{Chen:2015moa}) of $J^P = 5/2^+$. In the left figure, the long-dashed, solid and short-dashed curves are obtained by fixing $M_B^2 = 4.3$, $4.5$ and $4.7$ GeV$^2$, respectively. 
In the right figure, the long-dashed, solid and short-dashed curves are obtained for $s_0 = 25$, $27$ and $29$ GeV$^2$, respectively.} \label{fig:psi4} \end{center} \end{figure*} \begin{figure*}[hbt] \begin{center} \scalebox{0.6}{\includegraphics{massmixs0.pdf}} \scalebox{0.6}{\includegraphics{massmixmb.pdf}} \caption{The variations of $M_{{\rm mix},5/2^+}$ with respect to the threshold value $s_0$ (left) and the Borel mass $M_B$ (right), calculated using the mixed current $J^{\rm mix}_{\mu\nu}$ ($J^{\bar D\Sigma_c^*\&\bar D^*\Lambda_c}_{\{\mu\nu\}}$ in Ref.~\cite{Chen:2015moa}) of $J^P = 5/2^+$. In the left figure, the long-dashed, solid and short-dashed curves are obtained by fixing $M_B^2 = 3.2$, $3.35$ and $3.5$ GeV$^2$, respectively. In the right figure, the long-dashed, solid and short-dashed curves are obtained for $s_0 = 20$, $22$ and $24$ GeV$^2$, respectively.} \label{fig:mix} \end{center} \end{figure*} In particular, in the previous sections we used the current $\psi_{9\mu}$ and obtained the mass $M_{[\Sigma_c \bar D^*],3/2^-} = 4.37^{+0.18}_{-0.13}$ GeV~\cite{Chen:2015moa}. This value is consistent with the experimental mass of the $P_c(4380)$~\cite{lhcb}, and supports it as a $[\Sigma_c \bar D^*]$ hidden-charm pentaquark with the quantum number $J^P=3/2^-$. We also use the current $\xi_{15\mu\nu}$ and obtain the mass $M_{[\Lambda_c^* \bar D^*],5/2^+} = 4.76^{+0.15}_{-0.19}$ GeV. This value is significantly larger than the experimental mass of the $P_c(4450)$~\cite{lhcb}. Moreover, we show the mass as a function of the threshold value $s_0$ in the left panel of Fig.~\ref{fig:xi15}, and find that the mass curves do not have a minimum against $s_0$, quite different from the case of the current $\psi_{9\mu}$. We also show the mass as a function of the Borel mass $M_B$ in the right panel of Fig.~\ref{fig:xi15}.
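The qualitative difference between these two $s_0$ behaviors can be captured by a simple scan. The helper below is our own illustration; the `mass_curve` inputs are toy stand-ins for the full sum rule curves of Figs.~\ref{fig:psi9} and \ref{fig:xi15}. It returns the location of an interior minimum of the mass curve against $s_0$, or `None` when the curve is monotonic on the scanned grid:

```python
def s0_minimum(mass_curve, s0_grid):
    """Locate an interior minimum of mass(s0); return None if the minimum
    sits at an endpoint (i.e., the curve is monotonic on the grid)."""
    values = [mass_curve(s0) for s0 in s0_grid]
    k = min(range(len(values)), key=values.__getitem__)
    if k == 0 or k == len(values) - 1:
        return None
    return s0_grid[k]

grid = [15 + 0.5 * i for i in range(21)]                 # 15 ... 25 GeV^2
psi9_like = lambda s0: 4.37 + 0.002 * (s0 - 18.0) ** 2   # toy: minimum near 18
xi15_like = lambda s0: 4.40 + 0.02 * s0                  # toy: monotonic rise
```

A `psi9_like` curve yields an $s_0$-stability point near 18 GeV$^2$, while a `xi15_like` curve yields none, mirroring the contrast described above.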
To find a solution consistent with the experiment, we constructed the mixed current consisting of $\xi_{15\mu\nu}$ and $\psi_{4\mu\nu}$~\cite{Chen:2015moa}: \begin{eqnarray} J^{\rm mix}_{\mu\nu} = \cos\theta \times \xi_{15\mu\nu} + \sin\theta \times \psi_{4\mu\nu} \, . \label{def:mix} \end{eqnarray} We note that $\psi_{4\mu\nu}$ is defined in Eq.~(\ref{def:psi4munu}) and couples well to the $P$-wave $[\Sigma_c^* \bar D]$ channel. However, it contains the axial-vector ($\bar c_d \gamma_\mu \gamma_5 c_d$) component, so it is not our first choice in this paper. We show the mass obtained using this current $\psi_{4\mu\nu}$, as a function of the threshold value $s_0$ and the Borel mass $M_B$, in Fig.~\ref{fig:psi4}. We find that the mass curves have a minimum against $s_0$ when $s_0$ is around 20 GeV$^2$. Moreover, this mass minimum is just around 4.45 GeV, similar to the mass of the $P_c(4450)$~\cite{lhcb}. However, a working region can only be obtained as long as $s_0>25$ GeV$^2$, in which region the mass prediction is $4.82^{+0.15}_{-0.14}$ GeV, significantly larger than the mass of the $P_c(4450)$~\cite{lhcb}. To solve this problem, we further use the mixed current $J^{\rm mix}_{\mu\nu}$ to perform the QCD sum rule analysis. We find that it gives a reliable mass sum rule when the mixing angle $\theta$ is fine-tuned to be $-51\pm5^\circ$, and the hadron mass is extracted as~\cite{Chen:2015moa} \begin{eqnarray} M_{{\rm mix},{5/2^+}} = 4.47^{+0.18}_{-0.13} \mbox{ GeV} \, , \label{Pc4450} \end{eqnarray} with $20$ GeV$^2$ $\leq s_0 \leq 24$ GeV$^2$ and $3.22$ GeV$^2$ $\leq M_B^2 \leq 3.50$ GeV$^2$. This value is consistent with the experimental mass of the $P_c(4450)$~\cite{lhcb}, and supports it as an admixture of $P$-wave $[\Lambda_c\bar D^*]$ and $[\Sigma_c^* \bar D]$ with the quantum number $J^P=5/2^+$. We show the mass as a function of the threshold value $s_0$ and the Borel mass $M_B$ in Fig.~\ref{fig:mix}.
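For reference, the mass sum rule of the mixed current is built from the correlators of its components. Schematically (this standard decomposition is not spelled out above, and the notation $\Pi_{i,j}$ is introduced here only for illustration):
\begin{eqnarray}
\Pi^{\rm mix}(q^2) &=& \cos^2\theta \, \Pi_{15,15}(q^2) + \sin^2\theta \, \Pi_{4,4}(q^2) \nonumber \\
&& + \, 2 \sin\theta \cos\theta \, \Pi_{15,4}(q^2) \, ,
\end{eqnarray}
where $\Pi_{15,15}$ and $\Pi_{4,4}$ denote the diagonal two-point functions of $\xi_{15\mu\nu}$ and $\psi_{4\mu\nu}$, and $\Pi_{15,4}$ is their off-diagonal correlator. Fine-tuning the mixing angle $\theta$ then amounts to selecting the combination for which the resulting sum rule is stable against variations of $s_0$ and $M_B$.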
In summary, in this paper we adopt the QCD sum rule approach to study the mass spectrum of hidden-charm pentaquarks. We systematically construct the local pentaquark interpolating currents having spin $J = {1\over2}/{3\over2}/{5\over2}$ and quark contents $uud c \bar c$, and select those currents containing pseudoscalar ($\bar c_d \gamma_5 c_d$) and vector ($\bar c_d \gamma_\mu c_d$) components to perform QCD sum rule analyses. We find that some of them lead to OPE series from which the parity can be well determined, and further use these currents to perform numerical analyses. The results are listed in Tables~\ref{tab:spin12}, \ref{tab:spin32}, and \ref{tab:spin52}. We find that the $P_c(4380)$ and $P_c(4450)$ can be identified as hidden-charm pentaquark states composed of an anti-charmed meson and a charmed baryon. We use $\psi_{9\mu}$ to perform the QCD sum rule analysis, and the result supports the $P_c(4380)$ as an $S$-wave $[\Sigma_c\bar D^*]$ hidden-charm pentaquark with the quantum number $J^P=3/2^-$. We use the mixed current $J^{\rm mix}_{\mu\nu}$ to perform the QCD sum rule analysis, and the result supports the $P_c(4450)$ as an admixture of $P$-wave $[\Lambda_c \bar D^*]$ and $[\Sigma_c^* \bar D]$ with the quantum number $J^P=5/2^+$. In addition, our results suggest that \begin{enumerate} \item The lowest-lying hidden-charm pentaquark state of $J^P = 1/2^-$ has the mass $4.33^{+0.17}_{-0.13}$ GeV. This result is obtained by using the current $\psi_2$, which is defined in Eq.~(\ref{def:psi2}) and couples well to the $S$-wave $[\Sigma_c^* \bar D]$ channel. In contrast, the lowest-lying state of $J^P = 1/2^+$ is significantly higher, at around $4.7-4.9$ GeV; \item The lowest-lying hidden-charm pentaquark state of $J^P = 3/2^-$ has the mass $4.37^{+0.18}_{-0.13}$ GeV, consistent with the experimental mass of the $P_c(4380)$ of $J^P = 3/2^-$~\cite{lhcb}.
This result is obtained by using the current $\psi_{9\mu}$, which is defined in Eq.~(\ref{def:psi9mu}) and couples well to the $S$-wave $[\Sigma_c \bar D^*]$ channel. In contrast, the lowest-lying state of $J^P = 3/2^+$ is again significantly higher, lying above 4.6 GeV; \item However, the hidden-charm pentaquark state of $J^P = 5/2^-$ has a mass around $4.5-4.6$ GeV, which is just slightly larger than the experimental mass of the $P_c(4450)$ of $J^P = 5/2^+$~\cite{lhcb}. \end{enumerate} The discovery of the $P_c(4380)$ and $P_c(4450)$ opens a new page in the study of exotic hadron states. In the near future, further theoretical and experimental efforts are required to study these hidden-charm pentaquark states. \section*{Acknowledgments} \begin{acknowledgement} This project is supported by the National Natural Science Foundation of China under Grants No. 11205011, No. 11475015, No. 11375024, No. 11222547, No. 11175073, and No. 11261130311, the Ministry of Education of China (SRFDP under Grant No. 20120211110002 and the Fundamental Research Funds for the Central Universities), and the Natural Sciences and Engineering Research Council of Canada (NSERC). \end{acknowledgement}
Lab Story is a sitcom available on the Rai Educational portal of ilD, dealing with interculturality and showing how openness and dialogue make it possible to live together while respecting diversity. The protagonists of the series are a small group of pupils at a multicultural primary school in Rome. The protagonists' different backgrounds give rise to amusing situations which always end up demonstrating that differences are an enrichment for everyone. Related entries: ilD. External links: Rai Educational
2021 Veto Session The 2021 Reconvened Session, commonly known as the veto session, occurred on April 7, 2021. Typically a one day affair, the session has been known to go long into the evening or the early hours of the following day. With the reduced bill limits this year, the legislature only sent about 550 bills to the Governor for consideration. In turn, he did not veto any bills and only amended 37, including the budget. For comparison purposes, the Governor amended over 100 bills and vetoed four during the 2020 Session. Given the small number of bills we had to consider, I thought we would adjourn fairly early. Instead we saw a lot of speeches and politicking that extended our work past 8 o'clock in the evening. A couple of issues dominated the reconvened session. The first was marijuana. Last year the General Assembly decriminalized simple possession of marijuana, essentially replacing the criminal punishment with a civil penalty. Last fall, the Governor put forward a plan to legalize recreational use of marijuana and the development of a marketplace to allow for the legal sale of small amounts. His proposal was modeled to a large extent after the laws of 16 other states. The plan was one that could produce a significant amount of revenue to state and local governments. The House and Senate agreed to much of what the Governor proposed but took different approaches. The Governor proposed having legal sales January 1, 2023 and sought to regulate the new marketplace through the Virginia Alcoholic Beverage Control Authority. The House and the Senate opted to create a new entity to regulate marijuana, thus delaying the opening of the marketplace to 2024. The Senate had a couple of other ideas. First, the Senate provided for an advisory referendum so that the people could have a voice in this process. Frankly, there is no question in my mind that such a referendum would pass. Second, we included a reenactment clause on many significant provisions of the bill. 
In my view, a reenactment clause made sense because this is such a complicated, substantive proposal. Finally, the Senate proposed legalizing simple possession this July in large part because of the disproportionate number of tickets being issued to African-Americans. While the legislature adjourned on March 1, our work on this bill continued. At the end of the day, the Governor agreed with most of what the Senate had passed. The final version did not include the advisory referendum, and I am disappointed that Virginians will not have a say in this significant policy change. The Governor's changes also included a provision allowing people to own four plants, which would provide a legal means for Virginians to acquire marijuana. The bill does not legalize the sale of marijuana this year, which must wait until we establish the new agency and regulations. A second area of focus was the work of the Parole Board. Recently the Board has come under significant criticism after several people were paroled. As is widely known, the Parole Board rarely used its parole powers in the last 26 years. In fact, people who were convicted after January 1, 1995, have not been eligible for parole. So the only people who actually are eligible, except for geriatric release, are people who were convicted over 26 years ago. Recently, a number of disturbing emails involving a former chair of the Board were released. Those emails raise a number of issues that require an independent investigation. The Governor proposed a budget amendment to require an investigation by a law firm, independent of state government or either political party. My colleagues on the other side of the aisle pushed to create a special legislative committee, armed with subpoena power, to conduct investigations. I am afraid such an investigation would allow legislators to showboat and create headlines rather than getting to the bottom of what really happened in a nonpartisan way.
As such, I prefer an independent investigation, which is what we agreed to on Wednesday. In addition to these high-profile topics, the legislature also approved amendments to: allow the Department of Taxation to waive the accrual of interest given the extension to file taxes this year, extend the new VirginiaSaves program to part-time workers, and change the start date for workers compensation presumption relating to COVID from September 1, 2020 to July 1, 2020. The COVID pandemic has driven a lot of our legislative work, and there is still work to be done this year on this front. The General Assembly will convene this year for at least one special session. The legislation passed at the federal level to provide for a third round of stimulus payments, the American Rescue Plan Act of 2021, also provided approximately $3.8 billion in federal spending to Virginia. The General Assembly will have a hand in the budget process, and is expected to return in June or July, once we get direction from the federal government on how the money can be spent. My view is that we should invest that money in one-time needs rather than building it into our base budget. It's a one-time appropriation, and we would be foolish to allocate the funds for programs or services that will require additional expenditures in the years to follow. In terms of one-time spending, I have a couple of ideas to jumpstart some projects I've been working on in recent years. In last year's budget, we included language to review how the state can ensure the financial stability of the Virginia Horse Center in Rockbridge County. The Center has had an enormous economic impact locally and in Virginia over the years, and we have to protect our investment. The pandemic delayed that study. A second project that did not move forward last year that could improve the economic future for our citizens is the development of a state park in Highland County. A feasibility study was held up because of COVID as well. 
I am hopeful to see some progress on that front. Perhaps the new infusion of federal money can help. The federal dollars could also supplement some of the work we accomplished in this year's budget. We set aside about $10 million to look at the development of statewide trails. One proposal, which makes more sense now with the state's purchase of the Buckingham Branch line, is the development of a scenic rail line starting in Doswell in Hanover County. The line would go through Gordonsville in Orange County, Charlottesville, Staunton, and end in the Town of Clifton Forge. Such a train route would be one of the most scenic in the country and offers incredible economic benefits for citizens and communities along that route. Judicial vacancies are a second issue that could be taken up during a special session. As you may recall, the General Assembly passed legislation remaking the Court of Appeals. The Court already had a vacancy, and we added six more judges to the Court. The discussion to fill those seven seats will take some time. While we have accomplished much this year, more work always remains. Thank you for allowing me to be a part of this process. It continues to be an honor to represent you in the Senate of Virginia. If I may ever be of assistance, please call my office in Hot Springs at (540) 839-2473, in Charlottesville at (434) 296-5491, or email me at district25@senate.virginia.gov. Creigh Want to Serve? The Governor makes appointments to various Boards and Commissions ranging from the Broadband Advisory Council to the Corn Board. If you are interested in serving, you should explore the list of vacancies and upcoming appointments and consider applying. For more information visit the website of the Secretary of the Commonwealth. If you have lost your insurance, are otherwise uninsured, or want to change plans you can enroll in the Marketplace now through May 15.
If you have questions or need assistance, Enroll Virginia navigators and Certified Application Counselors can help. Visit Cover Virginia for more information. About the Author: Creigh Deeds
Q: problem editing utf8 text file with vim I have a problem editing an HTML file on a server via vim. The file is utf-8 encoded. While editing with vim (v7.3, no plugins active) I can see umlauts, and editing and saving a line before the umlaut is fine. But if I edit after the umlaut, it seems that the umlaut consumes two chars while only one char is visible, and all edits are shifted. I can see this only after saving and reopening the file. And I can insert an umlaut, but for removing it I have to press x twice (the char changes meanwhile). I have no idea where to look for the issue: vim, the terminal, or the ssh connection? remote: > file index.html index.html: HTML document, UTF-8 Unicode text > echo $TERM xterm-256color > locale charmap ANSI_X3.4-1968 > grep CHARMAP /etc/default/console-setup CHARMAP="UTF-8" local: > locale charmap UTF-8 A: It turns out that the terminal locale was set up incorrectly. My .bashrc had an export LC_ALL=C. > locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="C" LC_NUMERIC="C" LC_TIME="C" ... LC_IDENTIFICATION="C" LC_ALL=C After removing LC_ALL=C I get this: > locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC=en_US.UTF-8 LC_TIME=en_GB.UTF-8 ... LC_IDENTIFICATION=en_US.UTF-8 LC_ALL= Vim now opens the same file with encoding=utf-8 and fileencoding=utf-8, and editing is normal. Thanks Murphy and Radovan for some pointers. Perhaps someone has an explanation for this issue.
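In other words, the fix is to stop forcing the C locale. A minimal sketch for ~/.bashrc (en_US.UTF-8 is an assumed locale name here; check what is actually available with locale -a):

```shell
# Remove the forced C locale and select a UTF-8 one instead.
unset LC_ALL
export LANG=en_US.UTF-8

# Verify: vim derives its 'encoding' option from the locale at startup.
locale charmap   # should now print: UTF-8
```

Open a new shell (or source ~/.bashrc) afterwards so the changed environment reaches vim over the ssh session.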
Off-roading with the new Ford Endeavour in Philippines Bob Rupani | Updated: January 04, 2017, 04:57 PM IST Recently Ford had an SUV Experience event in the Philippines that consisted of taking an EcoSport through narrow backstreets to local markets, going fishing in the Ford Escape Kuga, driving on country roads in an Explorer and getting a massage at an eco-art resort! But the most interesting, and the reason I was there, was going off-roading in volcanic ash in the Ford Endeavour, which is known as the Everest in the Philippines. Yes, the highlight of the Ford SUV Experience has to be the off-road driving we did in the "Pampanga Lahar Beds". But what's so special about this? One needs a little history lesson to appreciate this. In June 1991, the second largest volcanic eruption of the 20th century took place in the Philippines. It was Mount Pinatubo that erupted, killed hundreds and blew 10 cubic kilometres of ash, stone, etc. into the air. The massive ash falls destroyed everything around. Now this region has become a popular off-road driving destination, simply because when sand, volcanic ash and rainwater mix, the result is a thick and sticky moving substance known locally as 'lahar'. These lahars are really gluey, and it's a real challenge to drive on this wet quicksand-like stuff. In these Lahar beds, you have to maintain a good momentum and keep the engine in its peak power band to avoid getting bogged down. At the same time you have to keep a sharp lookout for stones and boulders. Fortunately, the Ford Endeavour was up to the test as it's a solid SUV with body-on-frame construction, ground clearance of 225mm, water wading depth of 800mm, and an engine that delivers 200PS and 470Nm of torque. What's equally important is how all this power is transferred to the four wheels.
Like all good old-school SUVs, the Ford Endeavour still retains its 4X4 transfer case that allows low-range to be selected and the gear ratios to be lowered for delivery of torque at much lower speeds. Sadly, the transfer case has now become an 'endangered species' as more and more manufacturers are doing away with this most essential component for serious off-road driving. Given this, Ford needs to be complimented for retaining the transfer case. What also enhances the Endeavour's off-road performance are the 'Terrain Management System' and electronic locking rear differential. Thanks to these, the Endeavour can move forward even with the two front wheels and one rear wheel not having any traction, as long as the fourth wheel has grip. The 'Terrain Management System' also lets the driver choose from four pre-set settings –– snow, mud, grass, and sand and rock. As mentioned, the 4WD system's transfer case can be locked into low range for more aggressive torque transfer in extreme off-road terrain. The Endeavour has a 'Terrain Management System' that is very similar to Land Rover's 'Terrain Response System'. The Pampanga Lahar Beds we drove in are amongst the world's extreme off-road environments, and the Ford Endeavour performed very well in this tough terrain. Of course, the driver had to maintain a good momentum, pick the right path, feed the power smoothly and strongly when required, and at the same time keep a lookout for rogue boulders waiting to make contact with the undercarriage. Interestingly, as the volcanic ash is very fertile, it's resulted in a lot of greenery. In some of the images you can see mountains. Actually this is all volcanic ash, and the valleys and ravines have been formed by the flow of rain and river water. In fact, our refreshment camp was set below towering and obviously unstable cliffs, and the locals told us this is not a place you want to be in a heavy downpour, because you never know when the ash starts sliding and moving!
Unfortunately, this awesome drive ended way too quickly, and we were back to experiencing the lifestyle opportunities offered by the SUVs. That's all very fine for lifestyle or travel journalists. But for us, really, it was a wasted exercise, and we would much rather have spent a lot more time in the lahar beds exploring the impressive limits of the Endeavour's off-road performance capabilities in such extreme conditions.
<?php

namespace App\Validation\Exceptions;

/**
 * Message templates for values that fail (or, in negative mode,
 * unexpectedly pass) a regular-expression validation rule.
 */
class RegexException extends ValidationException
{
    public static $defaultTemplates = [
        self::MODE_DEFAULT => [
            self::STANDARD => '{{name}} must validate against {{regex}}',
        ],
        self::MODE_NEGATIVE => [
            self::STANDARD => '{{name}} must not validate against {{regex}}',
        ],
    ];
}
Q: WPF access to control template I need to access my Expander object programmatically: <DataGrid x:Name="GridData" Grid.Column="2" HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalAlignment="Top" VerticalContentAlignment="Stretch" IsSynchronizedWithCurrentItem="True" SelectionMode="Extended"> <DataGrid.GroupStyle> <GroupStyle> <GroupStyle.ContainerStyle> <Style TargetType="{x:Type GroupItem}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GroupItem}"> <Expander x:Name="exp" Background="#dedede" HorizontalAlignment="Left" HorizontalContentAlignment="Left" PreviewMouseLeftButtonUp="Expander_MouseDown" BorderThickness="0 0 0 1" BorderBrush="#d0d0d0" Padding="2,0,1,0" Style="{StaticResource StatusGroupExpander}" > Any hints? A: You can get the template and find the Expander object by name.

var gridTemplate = GridData.Template;
var expander = (Expander)gridTemplate.FindName("exp", GridData);

[The above code is not tested] A: The Expander in the template is eventually applied to a GroupItem. You could handle the Loaded event to access each created Expander in the code-behind:

private void OnLoaded(object sender, RoutedEventArgs e)
{
    Expander expander = (Expander)sender;
    //...
}

XAML: <ControlTemplate TargetType="{x:Type GroupItem}"> <Expander x:Name="exp" Loaded="OnLoaded" .... /> ...
# Table of Contents

  * Title Page
  * Foreword: The Miracle of a Solar Eclipse
  * 1. What is a Solar Eclipse?
    * _Total and annular solar eclipses_
    * _The rhythm of solar eclipses_
  * 2. Observing an Eclipse
    * _Finding the best location_
    * _Finding peace_
    * _What you'll see: following the drama_
    * _Protecting your eyes_
  * 3. The Solar Eclipse of August 21, 2017
    * _Table of exact times and locations_
  * Afterword: Special Accounts of Solar Eclipses
    * _The eclipse of July 8, 1842 by Adalbert Stifter_
    * _The Egyptian eclipse of August 30, 1905 by M. Wilhelm Meyer_
  * Copyright

Foreword

# The Miracle of a Solar Eclipse

As when the Sun, new risen,
Looks through the horizontal misty air,
Shorn of his beams, or from behind the Moon,
In dim eclipse, disastrous twilight sheds
On half the nations, and with fear of change
Perplexes monarchs.

Milton, _Paradise Lost_, 1667

Storms at sea, as well as earthquakes and volcanic eruptions, are possibly the only other natural phenomena which affect people like a total solar eclipse, never to be forgotten. In the case of a solar eclipse, it affects them even though no threat whatsoever emanates from it. Calculated a long time in advance with plenty of notice, the celestial spectacle takes place and yet people are still shocked and deeply touched by the silent immensity of the Sun being covered by darkness. We ourselves become as quiet and speechless as nature surrounding us: no dog barks, no birds sing when the greenish-grey darkness of the core shadow of the Moon falls across the landscape and the sky becomes so dark that planets and bright stars become visible during the day.

One would have to be a poet in order to be able to express the strong and changing feelings provoked by a solar eclipse, which is why the words of Wordsworth and the report of the last visible solar eclipse in Central Europe by Adalbert Stifter are included at the end of this little book.
Anyone who has ever experienced a solar eclipse will see no pathos in the words of the Austrian writer. It is easy to understand that, in earlier times, people were filled with fear and terror by a solar eclipse, because in ancient Greece, for example, only a few educated people understood the astronomical circumstances leading to an eclipse. The 'theft' of the sunlight by the Moon suddenly called into question the goodness and divinity accorded to the Sun in religious experience. The Moon turned into a dragon which devoured the Sun – that is how many ancient cultures thought about the eclipse. Yet why are we still affected just as strongly today, in a time when everyone understands the astronomical details to a greater or lesser extent and everything can be explained? It is because, in the same way as an earthquake, the secure foundation of our existence is suddenly called into question for a short period of time: in this instance, the certainty that the Sun shines during the day. We may object that it simply becomes night for a brief period of time, but that is to miss the point. The kind of darkness which arises during a solar eclipse is not the kind of darkness we know from night. It has nothing of the romantic mood of a beautiful dusk. It is a greenish light, sallow and strangely oppressive, in which the visible veil-like ring of rays of the Sun's corona sits enthroned majestically above the lowering atmosphere.

# 1. What Is a Solar Eclipse?

To understand a solar eclipse, we need to understand the movement of the Moon. The different phases of the Moon come about as it moves around the Earth. If the Moon is next to the Sun, it's a New Moon. If it's positioned at right angles to the Sun, it's a waxing and waning half Moon. If it's opposite the Sun, it's a Full Moon. The name 'New Moon' is, however, misleading because the Moon is not yet 'new'.
In fact, in antiquity the New Moon designated the small lunar sickle when the Moon was visible again for the first time in the evening sky as a New Moon. The Roman priests proclaimed the new month when they saw the New Moon. The word 'calendar' is reminiscent of this, since _calare_ means 'to call'. In this respect the New Moon should actually be called the non-moon.

## **Total and annular solar eclipses**

A solar eclipse occurs when the shadow of the Moon falls on the Earth. Two different shadows are actually created – the core shadow or _umbra_, and the half shadow or _penumbra_. If we, standing on the Earth, are positioned within the core shadow, the Sun appears completely covered. This is a total solar eclipse, and its range can be up to 165 miles (270 km) wide. The movement of the Moon means that, in the course of an eclipse, the umbra sweeps a path of darkness across the Earth. Outside this core zone, in the range of penumbra, the Sun appears partially covered by the Moon. The diameter of the penumbra is approximately 4,500 miles (7000 km). It is one of the mysteries of the planetary system that the Sun and the Moon display correlations which can hardly be coincidence: for example, the Moon takes 29.5 days to move from Full Moon to Full Moon, which is almost the same amount of time as the Sun requires on average for one rotation around its axis. The Moon thus reflects not only the light of the Sun, but also its type of movement. A second correlation between Sun and Moon lies in their size in the sky: although the Sun with a diameter of 865,000 miles (1,390,000 km) is approximately 400 times as big as the Moon at 2159 miles (3476 km), both appear to be the same size when looked at from Earth. The Moon, with its average distance from the Earth of 238,600 miles (384,000 km) is thus 400 times closer than the Sun. However, the distance between the Moon and the Earth fluctuates due to the elliptical path of the Moon around the Earth.
At _perigee_ (closest to Earth), the distance is about 221,462 miles (356,410 km) and at _apogee_ (furthest from Earth) about 252,711 miles (406,700 km). This means that the distance of the Moon varies by one seventh. It is worth trying to visualize this difference in size by comparing the Full Moon when close to and distant from the Earth. Since the height of the Moon above the horizon influences the impression of size, try this when the Moon is as high as possible. The problem is that the Full Moons at perigee and apogee lie approximately half a year apart. Nevertheless, we may become more aware that the Full Moons from December 2016 to March 2017 are particularly large, and in summer (May, June, July) 2017 are particularly small. But it is not only the Moon which fluctuates in its distance and thus in its size when seen from the Earth. Since the Earth also moves in an ellipse around the Sun, even if it is less elongated, the distance between the two and thus the apparent size of the Sun also change. The distance between Sun and Earth ranges between 91 and 94 million miles (147 and 151 million km) in the course of the year, so correspondingly the apparent diameter of the Sun changes by 3%. This fluctuation is four times smaller than the fluctuation in diameter of the Moon, but it contributes to the complexity of time and character of solar eclipses. In optimum conditions, that is, when the Moon is at perigee and the Sun at apogee, the tip of the umbra cone projects an umbra which is 165 miles (270 km) wide. In the opposite case, when the Sun is large (perigee) and the Moon is relatively small (apogee), the shadow cone does not reach the surface of the Earth. Instead of a total eclipse, an _annular_ one occurs in which the Moon is surrounded by a ring of light. In such a case the remaining brightness is so great that the corona described above cannot be seen, but the ring of light around the Moon, which shimmers in an anthracite colour, is still a breathtaking sight.
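These size relations can be checked with a few lines of arithmetic. The sketch below uses the rounded figures from the text and the standard apparent-angular-diameter relation to compare the discs of Moon and Sun at their extreme distances:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular diameter of a sphere seen from a given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

MOON_KM, SUN_KM = 3476, 1_390_000

moon_big = angular_diameter_deg(MOON_KM, 356_410)      # Moon at perigee
moon_small = angular_diameter_deg(MOON_KM, 406_700)    # Moon at apogee
sun_big = angular_diameter_deg(SUN_KM, 147_000_000)    # Earth at perihelion
sun_small = angular_diameter_deg(SUN_KM, 151_000_000)  # Earth at aphelion

print(f"Moon: {moon_small:.3f} to {moon_big:.3f} degrees")
print(f"Sun:  {sun_small:.3f} to {sun_big:.3f} degrees")

# A total eclipse is possible: the large Moon disc exceeds the small Sun disc.
assert moon_big > sun_small
# An annular eclipse is possible: the small Moon disc is smaller than any Sun disc.
assert moon_small < sun_small
```

Both discs come out at roughly half a degree, with the Moon ranging from noticeably smaller to slightly larger than the Sun, which is exactly why both total and annular eclipses occur.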
## **The rhythm of solar eclipses**

A solar eclipse occurs when the Moon casts its shadow on to the Earth, but why does this not happen at every New Moon when the Moon is between the Earth and the Sun? The path of the Moon is inclined by 5° to the Earth's. The angle of its path means that the Moon, as a rule, travels above or below the Sun as it passes. As a consequence its shadow mostly does not fall on the Earth but passes above or below it. Two things must therefore come together in order for a solar eclipse to occur. There must be a New Moon and at the same time this New Moon must be at the same 'level' as the Earth and the Sun: that is, in its course it must break through the plane of the Earth's trajectory, the _ecliptic_. These breakthrough points of the Moon's trajectory, lying opposite one another, are called the _lunar nodes_. Only when the line connecting these two points is on a level with the Sun can an eclipse occur. By its nature, that happens twice every year. On one occasion the Sun is aligned with the ascending node and, correspondingly, half a year later is aligned with the descending node. However, the New Moon need not necessarily reach this position in relation to the Sun to the exact day in order for a solar eclipse to take place. Since the Earth is clearly larger than the Moon, the latter's umbra falls on the Earth even if the Moon is positioned as a New Moon between the Sun and the Earth up to 18.5 days before or after the ideal Sun position. Now this timeframe of 2 x 18.5 = 37 days is greater than the rhythm of the Moon's orbit. A New Moon occurs every 29.5 days. Hence a solar eclipse always takes place in the periods in which the Sun stands near the nodal line. Therefore, a minimum of two and, in rare cases, as many as five eclipses take place per year. As already described, they can be either central (total or annular) or partial.
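The counting argument above can be made concrete with a small simulation. The 37-day window and the 29.5-day lunation are taken from the text; the 173.31-day interval between successive Sun-node alignments (half an 'eclipse year') is a standard value assumed here:

```python
SYNODIC = 29.53   # days from one New Moon to the next
SEASON = 173.31   # days between successive alignments of the Sun with a node
WINDOW = 18.5     # days either side of an alignment in which an eclipse is possible

def eclipses_in(days, phase=0.0):
    """Count New Moons falling inside an eclipse window; the first
    New Moon occurs `phase` days after a nodal alignment."""
    count, t = 0, phase
    while t < days:
        offset = t % SEASON
        if offset <= WINDOW or SEASON - offset <= WINDOW:
            count += 1
        t += SYNODIC
    return count

# The 37-day window is longer than one lunation, so every eclipse season
# must contain at least one solar eclipse, hence at least two per year:
assert 2 * WINDOW > SYNODIC
print(min(eclipses_in(365.25, p) for p in range(30)))   # -> 2
```

Depending on how the New Moons fall relative to the eclipse seasons, the count in this simple model rises to three or more within a calendar year; the rare real-world maximum of five occurs when a third eclipse season begins before year's end.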
For the solar eclipse of August 21, 2017, the Sun reaches the ascending lunar node on August 16, five days before the New Moon. On the day of the New Moon, August 21, the Moon is thus positioned slightly above the position of the Sun so that the umbra falls north of the equator – passing through North America.

# 2. Observing an Eclipse

## **Finding the best location**

The best place to see an eclipse is in the line of the core shadow, the umbra. A total eclipse of the Sun is far more spectacular than a partial eclipse south or north of this corridor of total shadow. Plan your observation location well in advance. If you're in a populated area, anticipate possible traffic jams and tailbacks; where possible, travel a day or two ahead and stay in the area overnight. Although a total eclipse usually takes around two minutes, it often feels as if it's over very quickly. This is another reason to stay as close as possible to the central line: the duration of darkness rapidly reduces to the north and south. A place should be chosen which has an unrestricted view of the horizon. Most important of all is a view of the _western_ horizon, so try to choose a location on raised ground with a good aspect over the western landscape. In that location, you will be able to see the eclipse approaching in impressive style, even though the Moon has not yet covered the Sun where you are standing. A peculiar darkness, like in a distant thunder storm, will approach with ever greater speed from the west.

## **Finding peace**

In order to be able to observe the many different events in the sky during the eclipse, and also to be able to see how nature responds to the celestial spectacle, it is helpful if your chosen location is sufficiently peaceful. Often cities falling within the eclipse offer many different events. Large projection screens are set up on which the approach of the eclipse is shown with a loudspeaker commentary.
But to really experience the eclipse, a quiet place without diversions, out in the countryside if possible, is recommended. If you're with a group of people, you will find that your conversation dies away as the total eclipse begins and the majesty of what you're seeing takes hold.

## **What you'll see: following the drama**

**Always ensure your eyes are protected by a suitable filter (such as specially designed glasses) when viewing an eclipse.**

With your eyes protected by a filter, your first impression will be the moment you suddenly see the slight dent at the edge of the Sun. It can feel like a meeting with fate: the Moon starts to move in front of the Sun at the very second which was calculated so many years in advance. The surroundings will not yet appear to grow dark, because even when 50% of the Sun has been covered the eye compensates for the fading light – but a dramatic change has nevertheless taken place. Butterflies disappear, swallows fly close to the ground in agitation, the mood in nature becomes more and more subdued. A dull, greenish, mustard-coloured yellow settles on the fields, and you may feel strangely disorientated the nearer you get to totality.

About five minutes before the end of the partial phase, events take a dramatic turn: the sky acquires a greenish, sallow tint, which differs slightly from eclipse to eclipse. It is quite distinct from cloud cover or normal dusk. The spreading shadow can be seen on the western horizon. Like a wave of darkness, it takes hold of the clouds and appears to rise from the ground. It is the shadow of the Moon racing towards the observer at a speed of about 1,000 yards (1 km) per second. At the same time, it begins to get cooler and any wind often dies down.
The situation has a curiously unreal effect because opposing phenomena suddenly come together: on the one hand, the level of light is similar to an advanced stage of dusk, but at the same time one can see that shadows are etched very sharply, particularly on people's faces.

A few seconds before the Sun is fully covered, its last rays of light reach us through the valleys at the Moon's edge. The extremely thin solar sickle can then look like a necklace of pearls, producing a so-called diamond ring of light, before the corona of the Sun becomes visible. A few minutes before totality, peculiar fleeting shadows, the so-called 'flying shadows', can be seen on light surfaces. The speed of these mysterious dark bands – which are about the width of a hand – increases as totality approaches.

After the last sunlight has disappeared, the pink chromosphere, a thin sheath of light surrounding the Sun, is visible for a few seconds until the Moon covers it too. Now darkness closes in, with a greenish and sallow light comparable to a Full Moon night. Often groups of people fall into awe-struck silence. The birds grow silent, some blossoms close. The horizon takes on a pronounced, eerie orange-red hue of dusk.

Dominating everything, the Sun's corona is now visible: a white ring of rays with a bright inner part and a weaker outer part which disperses the further it extends from the Sun. The naked eye sees much more of the curved structures of this 'Sun atmosphere', 'the Sun outside the Sun', than can be captured in a photograph: the difference in brightness between the inner and outer corona exceeds the capabilities of a camera. The colours in nature cannot be photographed either. They are still dimly there, but it is as if nature has lost its soul. In many familiar photographs, the prominences at the edge of the Sun look like mighty, fiery phenomena.
But the actual experience during a solar eclipse tells a different story: the orange and red fringe of light exudes such mildness that we gain the impression that it is not the Moon which is victorious over the Sun, but that the Sun tolerates its own eclipse.

Much too soon, after a few minutes, the light of the diamond ring appears again, to the right of the Moon's shadow. (Note that in the southern hemisphere the appearance is mirrored and the light appears on the left.) With the first ray of the Sun, light and life return, and you may experience a wonderful joy, a feeling of spring in nature. After a few moments, the Moon once again releases a delicate solar sickle. Now the phenomena occur in reverse order, but without the ominous drama of events before totality. Once again the flying shadows can be observed. The shadow of the Moon disappears across the landscape towards the east at great speed, and nature returns to life. Rapidly, the surroundings grow light again. The temperature, which can easily fall by as much as 12°F (7°C), also rises again.

Just as we can learn to appreciate friends or our familiar everyday surroundings through their sudden absence, so it is with our relationship to the Sun. The experience of a total eclipse can change our view of the Sun forever.

## **Protecting your eyes**

During the partial phase, you should only observe the Sun with protected eyes, or you could permanently damage your retina. Normally our eyelids protect us from too much sunlight by making us blink, but when we look at darkness in a concentrated way this reflex is weakened. **Even when only a small sickle of the Sun is left**, too much light and heat penetrates the eye and can cause serious damage to the retina. The impression of the weak light of the remaining solar sickle is deceptive. Since the retina cannot feel pain, we get no warning of permanent visual damage, which may in any case only appear a few hours later.
The most suitable filters are protective glasses made of aluminized mylar film or welding filter glass. Glass panes covered in soot, black film, or CDs block the visible sunlight but not the UV radiation, and are dangerous. If a telescope or binoculars are used to observe the eclipse, a corresponding mylar film must be attached securely to the front of the instrument, not between the eye and the lens. If children are around, the telescope must be under constant supervision for as long as it is pointed at the Sun. During the period of totality, no filter is needed. However, you must be sure that all the Sun's light has fully disappeared before viewing without a filter.

Projection is another possibility for observing the solar sickle. Without a filter, the sunlight is allowed to fall on a white sheet through binoculars on a stand. A piece of cardboard with a small hole pierced in the middle can even be used instead of binoculars. A simple but slightly more comfortable form of projection is provided by constructing a camera obscura: take two boxes, one slightly larger than the other, so that the smaller one can be pushed into the larger one (once the ends have been removed). A small hole should be pierced at one end, while the cardboard is cut away at the other end and replaced by greaseproof paper as a matt screen. (Since the Sun's disc spans about half a degree, the projected image is roughly one hundredth of the distance between pinhole and screen in diameter.)

This type of projection can be found a million times over in nature: if we look at a rhododendron bush, for example, or at the shadow of a deciduous tree, we can often see hundreds of superimposed sickles of the Sun on the ground. The sunlight breaks through the cracks between the individual leaves just as through the pinhole.

# 3. The Solar Eclipse of August 21, 2017

The solar eclipse of August 21, 2017 will be the first total eclipse visible from the mainland United States since 1979 (the eclipse of 2012 was annular).
Other total eclipses visible in populated areas in recent years have included that of March 29, 2006, which passed across Africa and the Middle East, and that of July 22, 2009, which passed across India and China. Apart from the eclipse of 2015, visible in the Faeroes and Spitsbergen (Arctic Norway), there will not be a total eclipse visible in Europe until 2026. The August 2017 eclipse belongs to the same Saros cycle (a period of a little over 18 years which connects a series of eclipses) as the last central eclipse of the twentieth century, on August 11, 1999, which crossed Europe and the Middle East. The eclipse will be total in the USA and across the Atlantic, and partial in North, Central and northern South America, the west coast of Europe and North Africa.

Almost 2,500 miles (4,000 km) off the American west coast, the eclipse of August 21 begins in the North Pacific. The band of shadow reaches Oregon at 9.05 local (Pacific Daylight) time. The Sun has already reached a height of 27° above the horizon when the Moon starts to edge in front of it. One hour later, at 10.17, with the Sun having risen to 39°, the rapid approach of the eclipse from the Pacific can be observed from the coastal cities of Lincoln City or Newport. One minute later the first larger city, Salem, the capital of Oregon, is immersed in darkness.

The season could hardly be better for weather. On the west coast and in the centre of the continent, late summer is the period of least rain and cloud. This applies particularly east of the Cascade mountains: this range forms a meteorological divide between the humid coastal climate and the dry conditions inland.

Once the eclipse has crossed the forests of Oregon, its shadow reaches the Rocky Mountains in Idaho. Take care with the boundary between Pacific and Mountain Time, which does not follow the state line here. Depending on the weather, cloud can form here through rising air masses.
The shadow zone passes close by Boise, the capital of Idaho, and then over the 13,770 ft (4,200 m) high Grand Teton just south of Yellowstone National Park. This largest and oldest of American nature reserves, with its roughly 3,000 geysers and hot springs and an elemental landscape, offers a force of nature to equal that of the solar eclipse.

The duration of the eclipse steadily increases as it travels across the American continent. Dubois, Wyoming, at the foot of Gannett Peak, is already covered in darkness for 2 minutes 20 seconds. The lunar shadow here races across the eastern Rocky Mountains at a 'mere' 1,800 miles per hour (2,900 km/h) to reach the Great Plains, the dry, fertile prairies at the centre of the continent. The shadow is now 60 miles (100 km) wide. In Nebraska, again watch for the change from Mountain to Central Time, which does not follow the state line.

Passing just north of Kansas City, the shadow for a time follows the Missouri and crosses the Mississippi just south of Saint Louis. The centre of the eclipse is located north of Nashville, Tennessee. The city itself is only subject to 1 minute 30 seconds of darkness; however, totality grows to 2 minutes 40 seconds only 50 miles (80 km) further north, where smaller towns such as Hopkinsville and Russellville, Kentucky, and Cookeville, Tennessee, are situated. The shadow will have slowed down the most here, to 1,400 mph (2,300 km/h). At this location, the Sun will have risen to 65° above the horizon at the time of maximum, giving the observer the impression that the Sun is almost at the zenith.

The eclipse passes through Great Smoky Mountains National Park, between Tennessee and North Carolina, where it crosses the Appalachians. However, the probability of a cloudy sky increases in comparison to the plains. At 14.48 (2.48 pm) the shadow leaves the American continent at Georgetown, South Carolina. It continues to track across the Atlantic Ocean, finally leaving the Earth 600 miles (1,000 km) off the West African coast.
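The shadow speeds quoted above can be estimated with simple kinematics: the umbra sweeps eastward through space at roughly the Moon's orbital speed (about 3,600 km/h), while the ground beneath it also rotates eastward at up to about 1,670 km/h, so the net ground speed is lowest where the shadow falls steeply around midday. A rough sketch (the two speed constants are standard values, not from the text, and the flat subtraction ignores the obliquity that makes the shadow far faster near sunrise and sunset):

```python
import math

MOON_SHADOW_SPEED_KMH = 3600.0   # umbra's eastward speed through space (~1 km/s)
EARTH_EQ_ROTATION_KMH = 1670.0   # Earth's rotation speed at the equator

def umbra_ground_speed(latitude_deg):
    """Approximate net eastward speed of the umbra over the ground,
    assuming the shadow strikes the surface nearly vertically."""
    ground = EARTH_EQ_ROTATION_KMH * math.cos(math.radians(latitude_deg))
    return MOON_SHADOW_SPEED_KMH - ground

# Near the 2017 track (~37° N) this lands close to the ~2,300 km/h
# quoted for Kentucky and Tennessee:
print(round(umbra_ground_speed(37)))   # ≈ 2266 km/h
```

The same subtraction explains why the shadow is slowest near the point of greatest eclipse and speeds up towards both coasts, where it strikes the surface at an ever shallower angle.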
Another eclipse will not travel across the USA for another seven years, on April 8, 2024.

## **Table of exact times and locations**

**2017 Aug 21, Monday**

_Location_ | _Time zone (GMT±)_ | _Beginning_ | _Maximum_ | _End_ | _Duration_ | _Height of Sun_
---|---|---|---|---|---|---
Lincoln City, Oregon, USA | –7 | 09.04 | 10.17 | 11.36 | 1m 58s | 39°
Salem, Oregon, USA | –7 | 09.05 | 10.18 | 11.38 | 1m 59s | 40°
Corvallis, Oregon, USA | –7 | 09.05 | 10.18 | 11.38 | 1m 30s | 40°
Madras, Oregon, USA | –7 | 09.07 | 10.21 | 11.41 | 2m 00s | 42°
John Day, Oregon, USA | –7 | 09.09 | 10.24 | 11.45 | 1m 54s | 44°
Weiser, Idaho, USA | –6 | 10.10 | 11.27 | 12.49 | 2m 05s | 45°
Stanley, Idaho, USA | –6 | 10.12 | 11.30 | 12.53 | 2m 15s | 47°
Howe, Idaho, USA | –6 | 10.14 | 11.33 | 12.56 | 2m 03s | 49°
Idaho Falls, Idaho, USA | –6 | 10.15 | 11.34 | 12.58 | 1m 32s | 50°
Jackson, Wyoming, USA | –6 | 10.17 | 11.36 | 13.01 | 2m 10s | 51°
Dubois, Wyoming, USA | –6 | 10.18 | 11.38 | 13.03 | 2m 21s | 51°
Bonneville, Wyoming, USA | –6 | 10.20 | 11.41 | 13.06 | 2m 24s | 53°
Casper, Wyoming, USA | –6 | 10.22 | 11.44 | 13.10 | 2m 27s | 54°
Douglas, Wyoming, USA | –6 | 10.24 | 11.46 | 13.12 | 2m 27s | 55°
Scottsbluff, Nebraska, USA | –6 | 10.26 | 11.49 | 13.16 | 1m 15s | 56°
Arthur, Nebraska, USA | –6 | 10.29 | 11.53 | 13.20 | 2m 14s | 57°
Mullen, Nebraska, USA | –6 | 10.30 | 11.54 | 13.21 | 1m 48s | 58°
North Platte, Nebraska, USA | –5 | 11.30 | 12.55 | 14.22 | 1m 26s | 58°
Stapleton, Nebraska, USA | –5 | 11.31 | 12.56 | 14.22 | 2m 34s | 59°
Kearney, Nebraska, USA | –5 | 11.33 | 12.59 | 14.26 | 1m 39s | 60°
Grand Island, Nebraska, USA | –5 | 11.34 | 13.00 | 14.27 | 2m 36s | 61°
Hastings, Nebraska, USA | –5 | 11.34 | 13.00 | 14.27 | 2m 01s | 61°
Lincoln, Nebraska, USA | –5 | 11.37 | 13.03 | 14.30 | 1m 40s | 61°
Beatrice, Nebraska, USA | –5 | 11.37 | 13.04 | 14.31 | 2m 31s | 61°
Maryville, Missouri, USA | –5 | 11.41 | 13.04 | 14.34 | 0m 47s | 61°
Atchison, Kansas, USA | –5 | 11.40 | 13.08 | 14.35 | 2m 08s | 61°
St Joseph, Missouri, USA | –5 | 11.41 | 13.08 | 14.35 | 2m 39s | 61°
Marshall, Missouri, USA | –5 | 11.44 | 13.12 | 14.39 | 2m 38s | 63°
Columbia, Missouri, USA | –5 | 11.46 | 13.14 | 14.41 | 2m 40s | 63°
Jefferson City, Missouri, USA | –5 | 11.46 | 13.15 | 14.41 | 2m 22s | 63°
Sullivan, Missouri, USA | –5 | 11.48 | 13.17 | 14.44 | 2m 26s | 64°
St Louis, Missouri, USA | –5 | 11.51 | 13.18 | 14.45 | 0m 57s | 64°
Farmington, Missouri, USA | –5 | 11.50 | 13.19 | 14.46 | 1m 38s | 64°
Cape Girardeau, Missouri, USA | –5 | 11.52 | 13.21 | 14.48 | 1m 28s | 64°
Carbondale, Illinois, USA | –5 | 11.53 | 13.22 | 14.49 | 2m 40s | 64°
Paducah, Kentucky, USA | –5 | 11.54 | 13.24 | 14.50 | 2m 08s | 64°
Hopkinsville, Kentucky, USA | –5 | 11.57 | 13.26 | 14.52 | 2m 40s | 64°
Clarksville, Tennessee, USA | –5 | 11.57 | 13.27 | 14.53 | 2m 08s | 64°
Russellville, Kentucky, USA | –5 | 11.58 | 13.28 | 14.53 | 2m 35s | 63°
Bowling Green, Kentucky, USA | –5 | 11.59 | 13.28 | 14.54 | 1m 22s | 63°
Nashville, Tennessee, USA | –5 | 11.59 | 13.29 | 14.54 | 1m 43s | 63°
Cookeville, Tennessee, USA | –5 | 12.01 | 13.31 | 14.56 | 2m 39s | 64°
Crossville, Tennessee, USA | –5 | 12.03 | 13.32 | 14.57 | 2m 37s | 64°
Athens, Tennessee, USA | –4 | 13.04 | 14.34 | 15.59 | 2m 32s | 63°
Bryson City, North Carolina, USA | –4 | 13.07 | 14.36 | 16.01 | 2m 09s | 63°
Clayton, Georgia, USA | –4 | 13.07 | 14.37 | 16.02 | 2m 31s | 63°
Greenville, South Carolina, USA | –4 | 13.10 | 14.39 | 16.04 | 2m 20s | 63°
Spartanburg, South Carolina, USA | –4 | 13.10 | 14.40 | 16.04 | 0m 28s | 63°
Clinton, South Carolina, USA | –4 | 13.11 | 14.41 | 16.05 | 2m 32s | 62°
Columbia, South Carolina, USA | –4 | 13.13 | 14.43 | 16.07 | 2m 32s | 62°
Charleston, South Carolina, USA | –4 | 13.17 | 14.47 | 16.10 | 1m 21s | 62°
Georgetown, South Carolina, USA | –4 | 13.17 | 14.48 | 16.10 | 1m 59s | 62°

Afterword

# Special Accounts of Solar Eclipses

## **The eclipse of July 8, 1842**

### **Adalbert Stifter**

There are things we have known for fifty years, and suddenly, when we reach fifty-one, we are amazed by the weight and fruitfulness of their content. That is how I felt about the total solar eclipse that I experienced in the early hours of a clear Vienna morning on July 8, 1842.
I am quite able to set out such an event on paper by a sketch and calculations; and I knew that at a certain hour the Moon would coincide with the path of the Sun and the Earth would cut off a section of its conical shadow, which would then draw a black line across the globe due to the continued progress of the Moon in its trajectory and the Earth spinning around its axis – something which is seen at various locations as a disk appearing to cover the Sun, taking more and more of it away until nothing is left but a small sickle, which also disappears in the end. On Earth it grows darker and darker, until the Sun's sickle appears again on the other side, and grows until gradually full daylight is restored. All of this I knew in advance, and knew it so well that I believed myself able to describe a full solar eclipse in advance as though I had already seen it. But in the course of the event, as I was standing at a spot high above the whole city and saw the phenomenon with my own eyes, completely different things happened, of course, of which I had no inkling even in my wildest dreams, and which no one thinks of if they have not seen such a miracle. Never in my whole life was I so shaken as in these two minutes; it was as though God had suddenly spoken clearly and I had understood. I came down from my observation spot like Moses might have done thousands of years ago from the fiery mountain: confused and with a frozen heart. It was such a simple thing. One body shines on another one and the latter casts its shadow on a third body: but these bodies are so far apart that we have no way of imagining it, they are so gigantic that they extend far beyond anything we call big – such a complex of phenomena is associated with this simple event, this physical process is endowed with such a moral power, that it mounts up in our heart into an incomprehensible miracle. 
A thousand times a thousand years ago, God created the conditions for an eclipse to occur today at this second; and He laid the seeds in our heart so that we could experience and feel it. Into the script of His stars he laid the promise that it would happen after thousands and thousands of years, and our fathers learnt to decipher the script and predict the second in which it would occur; we, their grandchildren, direct our eyes and telescopes at the preordained moment towards the Sun and behold: there it is. Reason has triumphed in that it has managed to learn from Him and calculate the magnificence and composition of His heavens – and, indeed, this is a righteous triumph for human beings to claim. It arrives, and quietly it continues to grow, and we become aware that God also gave human beings something for their heart which we did not know in advance, and which is a millionfold greater in value than what we have learned and can calculate in advance by reason: He gave them the Word: 'I am – not because these bodies and these phenomena exist, no, I am because that is what your hearts tell you in awe at this moment, and because those hearts experience their own greatness because of their awe' – animals fear, humans worship...

At 5 o'clock I climbed up to the observation spot of house number 495 in the city, from where there is a view not just of the whole city, but also of the surrounding countryside to the farthest horizon, where the Hungarian mountains shimmer in the dawn like delicate mirages. The Sun had already risen and its friendly light shone on the steaming Danube meadows, the reflecting waters and the angular forms of the city – particularly St Stephen's Cathedral which rose out of the city like a dark, quiet mountain range so close that one could almost touch it. I looked at the Sun, which was to be the subject of such strange events in a few minutes, with a peculiar feeling.
In the distance, where the great river lies, there was a thick, extended line of mist; clusters of fog and cloud also crept around on the south-eastern horizon, which we feared greatly, and whole districts of the city were suspended in the haze. There were only very thin veils where the Sun stood and they too revealed large blue islands. The instruments were set up, the glass for viewing the Sun prepared, but the time had not yet come. Below, the rattling of the carriages, the hustle and bustle began – above, people wanting to watch the eclipse gathered; our observation spot began to fill, heads were looking out of the dormer windows of surrounding houses, figures were standing on roof ridges all looking at the same spot in the sky; there was even a group at the highest tip of the tower of St Stephen's Cathedral, on the very highest platform of the scaffolding, just as trees often find a niche on a rock in which they manage to grow. Thousands of eyes were probably looking at the Sun from the surrounding mountains, the same Sun that for millennia had cast its blessings on the Earth without a word of thanks from anyone – today it was the target of millions of eyes, but observed through smoky glass it continued to hover as a red or green sphere in space, pure and beautifully rounded. Finally, at the predicted minute, as if an invisible angel had given it a gentle kiss of death, a fine band of the sun's light began to retreat from the breath cast by this kiss while the other side continued to well gently and golden in the lens of the telescope. 'It's coming' the call went up also among those who had observed the Sun only through blackened glass, but otherwise with bare eyes – 'it's coming,' and everyone watched in excitement wondering what would happen next. 
The first strange, alien feeling now began to seep into our hearts, namely that out there thousands and millions of miles distant, where human beings have never penetrated, something was happening on Earth, at the long predetermined moment, to bodies with whose nature no human being was familiar. People might suggest that the whole event is quite natural and easy to calculate from the laws governing the motion of these bodies; the wonderful magic of the beauty which God gave these things does not bother with such calculations. It exists because it exists, indeed, it exists in spite of such calculations and blessed is the heart which can experience it. Because that alone is wealth, there is no other – the sublime which overwhelms our soul lives in the immense space of the heavens and yet mathematics considers this space to have no quality other than its size. While everyone was looking, moving a telescope here and a telescope there, drawing each other's attention to various things that were happening, the invisible darkness increasingly encroached on the beautiful light of the Sun – everyone was full of anticipation, the excitement rose; but so mighty and full was the ocean of light showering down from the Sun that there was no feeling of privation; the clouds continued to shine, the strip of water shimmered, the birds darted across the roofs, the towers of St Stephen's Cathedral cast their peaceful shadows on the sparkling roof, people were driving and riding across the bridge as usual, little knowing that the balsam of life, the light, was secretly ebbing away. Outside, however, at the Kahlengebirge mountain and beyond the Belvedere Palace, the darkness – or rather a leaden light – was stealing closer like a wild animal. And yet it might have been an illusion, our observation spot remained lively and bright and the cheeks and faces of those nearby were as clear and friendly as always. 
The strange thing was that this eerie mass, this profoundly black advancing entity which was slowly eating away the Sun, should be our Moon, the beautiful gentle Moon which on other occasions cast its silvery light in the night; and yet that is what it was, and in the telescope its edges, set with spikes and bulges, also appeared – the terrible mountains piling up on that sphere which otherwise smiles so sweetly at us.

Finally, the effect also became visible on Earth – more and more so as the glowing sickle in the sky became smaller and smaller; the river, no longer shimmering, became a ribbon of grey taffeta, matt shadows lay everywhere, the swallows became restless, the beautifully gentle radiance of the sky was extinguished as if frosted by a breath; we felt a cool breeze arise, an indescribably strange but leaden light cast its spectre across the meadows; across the forests their gentle movement disappeared with the play of the light, and peace rested on them, not the peace of sleep, however, but of impotence. The light over the landscape turned more and more sallow, and the landscape itself became more and more rigid – our shadows were cast empty and tenuous upon the walls, our faces became ashen. This gradual decay in the midst of what a few minutes ago had been a fresh morning was dreadful. We had imagined the gradual disappearance of the light rather like the failing of the evening light only without the redness of the evening sky; we had never imagined how eerie an onset of evening without the redness of the evening sky could be. But in other respects, too, this was a quite different twilight, it was a burdensome, uncanny alienation of our nature. Towards the south-east there was an alien, yellowish reddish shadow and the mountains and even the Belvedere were submerged in it; at our feet the city sank deeper and deeper, a shadow play without being, the driving, walking and riding across the bridge occurred as though we saw it in a black mirror.
As the tension rose to its highest degree, I took one last look through the telescope, the very last one: thin, like an incision with a knife in the darkness, the glowing sickle stood there, about to be extinguished at any moment, and as I lifted my unprotected eyes I saw that everyone else had also put away their telescopes and was looking up with bare eyes. They no longer needed them because, in the same way as the last spark of an extinguished wick disappears, the last spark of the Sun melted away, probably through the gap between two lunar mountains. It was an exceedingly sad moment as the one disc covered the other, and in actual fact, it was this moment that produced the truly heart-rending effect. No one had expected anything like this. With one voice we exclaimed 'Ah' and then there was a deadly silence. It was the moment in which God spoke and human beings listened attentively. Previously, if the gradual fading and disappearance of nature had depressed us and made us desolate, and if we had imagined that process as a waning into a kind of death, we were now suddenly startled and jolted upright through the terrible power and violence of the movement which suddenly burst through the whole sky: the clouds on the horizon, which had earlier made us fearful, helped more than ever to produce the phenomenon. They now stood upright like giants, a terrible red hue ran from their vertex, and below they arched in a deep, cold, heavy blueness depressing the horizon. Banks of fog which had long welled up at the outermost edge of the Earth, and had merely been discoloured, now asserted themselves and shivered in a delicate, yet terrible glow which flooded them. Colours which no eye had ever seen coursed through the heavens. The Moon was positioned in the centre of the Sun, no longer as a black disc, but in a semi-transparent state as if covered in a delicate steely gleam. 
The Sun had no edge but a wonderful, beautiful, shimmering ambience, bluish and reddish in colour, with broken beams, as if the Sun in its elevated position was pouring its wealth of light on the lunar sphere so that it sprayed in every direction – the most graceful effect of the light I have ever seen! Extending far over the Marchfeld area, there lay a slanted, long pyramid of light with a horrible yellow hue, flaming in sulphurous colours and with an unnatural blue hem. It was the illuminated atmosphere beyond the shadow, and I have never seen a light that was so unearthly and so terrible, yet it provided the means by which we could see. If we had been put in a desolate mood by the previous monotony, we were now crushed by power and radiance and mass – our shapes were caught in the light like black, hollow spectres without any depth; the phantom of St Stephen's Cathedral was suspended in the air, the rest of the city was like a shadow, all the rattling had stopped, there was no longer any movement on the bridge; because each carriage and each rider had come to a halt and every eye was looking heavenward. I will never, never forget those two minutes – they represented the powerlessness of a giant body, our Earth. How holy, how incomprehensible and how terrible is that entity which floods around us, which we soullessly enjoy and which causes our globe to tremble with such frisson when it withdraws its light for a short period. The air became cold, noticeably cold, dew fell so that clothes and instruments became damp, animals were terrified. The most awful thunderstorm was but superficial noise against such deathly silent majesty – the words of Lord Byron's poem came into my mind: the darkness where people set houses on fire, set forests on fire only to see light – but there was also such grandeur, I might almost say closeness to God, in these two minutes that the heart felt that he must be close by. 
Byron was not enough – suddenly the words from the Holy Bible came into my mind, the words at the death of Christ: 'The Sun was darkened; the Earth did quake, and the rocks rent; the graves were opened; and the veil of the Temple was rent in twain from the top to the bottom.' The effect on people's hearts was also evident. After the first fear had subsided, there were unarticulated sounds of wonder and amazement: some people lifted their hands, some wrung them quietly in front of them, others took and squeezed each other by the hands – one woman began to cry violently, another in the house next to us fainted, and one man, a serious and robust person, told me later that tears had been running down his face. I always considered the old descriptions of solar eclipses to be an exaggeration just as this one might be considered as exaggerated in the future; but all of them, just like this one, do not do reality justice. They can only paint a picture of what is on show, and do so badly, and they are even worse at describing the associated feelings. What they are totally incapable of describing is the namelessly tragic music of colour and light that spread across the sky – a Requiem, a Dies irae which breaks our heart so that God sees it and his precious dead, so that he hears the call: 'Lord, how great and magnificent are your works, we are but dust before you because you can destroy us merely by puffing away a light particle and can transform our world, the comely and familiar place where we live, into an alien space in which larvae stare!' But just as everything in creation has its allotted time, so too this phenomenon. Fortunately it only lasts for a very short time, a lifting of the cloak about his figure allowing us a fleeting access, only to cover himself up again so that everything returns to its previous state. 
Just as people were beginning to express their feelings in words, that is, when they began to abate, when people began to exclaim: 'How magnificent, how terrible' – just at this moment it stopped: at once the alien world disappeared and the familiar one returned. A single droplet of light welled up at the upper edge like white hot metal and we had our world back. It thrust itself out as if the Sun itself were glad that it had overcome, a beam shot through space, a second appeared – but before people had time to call out 'Oh!' the larval world had disappeared at the flash of the first atom and our world had returned: and the lead-coloured horrific light, so frightening before its disappearance, now returned to refresh us, as friend and acquaintance, objects cast shadows again, the water sparkled, the trees were green once more, we looked each other in the eyes – ray followed victorious ray, and however small and tiny the first bright circle was, it appeared to us that we had been given an ocean of light. It is beyond expression, and anyone who has not experienced it will not believe the joyous, the victorious relief in our hearts: we shook hands, we said we would remember what we had seen for the rest of our days. We heard individual sounds as people called to one another from roofs and across the alleys, the driving and noise began again, even the animals felt it; horses neighed, the sparrows on the roofs began a clamour of joy as lurid and clownish as they usually are when very excited, and the swallows shot past and flashed up and down through the air. The growing light no longer had any effect. Almost no one waited for the end, instruments were dismantled, we climbed down and in all the streets and pathways, groups and processions on their way home were engaged in the most animated and exalted discussions and exclamations. 
And even before the waves of admiration and veneration had subsided, before people could discuss with friends and acquaintances how the phenomenon had affected this person or that, here or there, the beautiful, graceful, warming, sparkling atmosphere returned, and the day's labour continued. But for how long people's hearts continued to be agitated until they too could return to the day's labour – who can say? May God permit the impression to last; it was a magnificent one which little can challenge even though a person might live to a hundred years of age. I know that I have never been as affected, neither by music nor poetry, nor any other phenomenon of nature or art. Of course I have been familiar with nature since childhood and my heart is used to its language, and I love its language, perhaps more one-sidedly than is good; but I believe that there cannot be any heart in which this phenomenon has not left an indelible impression.

## **The Egyptian eclipse of August 30, 1905**

**M. Wilhelm Meyer**

Slowly the dark disc thrust its way further into the Sun. It was like inexorably approaching destiny. When the Sun's sickle looked no bigger than the three-day-old Moon and there was still about fifteen minutes before the big moment, all those who had no reason to be there were asked to leave the area. There was just ourselves, our instruments and the waning Sun. A solemn silence descended; the Nile, too, flowed past us in solemn silence. A strange phenomenon appeared. The spots of light which shone through the palm leaves now took on a sickle shape and all the ground near our instruments displayed this strange pattern. About ten minutes before totality, a clear reduction in light intensity was noticeable on the landscape. I had set up my normal photographic equipment next to the telescope and now took a photograph of the landscape, leaving aperture and speed the same as for full sunlight. Professor West, our commander general, called: 'Five minutes to go, gentlemen.' 
We went to our posts. The light began to fade at a faster and faster rate. Only a few reflections of the small solar sickle still bubbled on the Nile almost like moonshine, and yet unlike it. It was a type of illumination that I had not seen before, and I sought in vain for comparisons to describe what I had had to describe hundreds of times previously, without really having seen it. One could say it was like an approaching thunderstorm though it was not the same yellowish light, but rather greyish blue. Also, the remaining sunshine was very weak, whereas an approaching thunderstorm tends to produce very sharp contrasts. It really was as if the whole of nature had been overcome by some kind of impotence, or rather, as if our sight was beginning to fade in these minutes, because nothing changed in the sky and on the Earth other than the light. 'One minute to go; are you ready, gentlemen?' Only the countdown rhythmically interrupted the eerie silence. During the last ten seconds, the darkness increased with frightening speed. When the last rays of the Sun, spilling over the mountainous edge of the Moon, had died away, such a complete change in the scenery happened in the final seconds, that it came as a complete surprise, and affected me deeply within my being. It was as if nature had been fractured. If it had been dark previously, it now became wholly black in these first moments, like blackest night until our eyes became adjusted. Just as suddenly, as if the mysterious gleam from another world had begun to shine through during the previous second, the silver crown of the corona rays appeared; it was as if this light was emerging from the spot where the Sun had now completely disappeared and was being hurled at speed into dark outer space. 
As the place where the Sun had previously been now possessed the same darkness and colouring as the rest of the sky, and even appeared a little darker through the contrast with the corona, a confusing impression arose as if the Sun really had come from another world, dissolved into nothing, and had left behind this spectral glow around the emptiness it created in its wake, and as if the whole of nature possessed no more than just such a shadow existence. There was a cheerless orange-yellow glow on the horizon, caused by sections of the atmosphere no longer affected by the umbra of the Moon. This yellow light was also reflected in our faces so people looked like sallow shadows. If I recall, an artist created the picture of the landscape at the great moment described here. The pulses of physical nature faltered; it seemed to be coming to a halt. We can be sure that everyone, even the most mindless person, stopped in their tracks as the lunar shadow raced over them. A characteristic incident in this respect occurred when the driver of a train travelling a few kilometres from Aswan station, confused by the impression that darkness was falling, stopped the train as if he was about to hit an obstacle. Our servant, who had remained at the inner sanctum of our station and who until then had been standing with arms crossed, suddenly ducked as if he thought something was about to fall on him out of the sky; he was about to run away, but in the face of the surrounding scientific calm he regained his composure and suffered the event to the end with arms crossed. I only had a few minutes to let the impression of this great spectacle sink in – it aroused a number of the strangest feelings.... My eyes had adjusted and my senses, deeply affected at the beginning, had calmed down. 
I was clearly able to see two red flames, protuberances gleaming over the edge of the Moon, with my bare eyes, and follow some bundles of rays from the corona in their peculiar form, until they were about one-and-a-half times the diameter of the Sun in the sky. Some stars shone in the sky, Venus in particular. Before we knew it, however, much faster than two minutes normally last, the first ray of Sun flashed up from the edge of the Moon and the corona withdrew back into itself; normal daylight seemed to return much faster than it had disappeared. All of us took a deep breath.... What secrets had the heavens revealed to us? # Copyright Translated by Christian von Arnim Edited by Christian Maclean This is a revised extract from _Eclipses 2005–2017: A Handbook of Solar and Lunar Eclipses_ by published in English by Floris Books, Edinburgh in 2005 First published in German as _Astronomische Sternstunden_ by Verlag Freies Geistesleben in 2005 © Verlag Freies Geistesleben & Urachhaus GmbH, Stuttgart 2005 English version © 2005, 2015 Floris Books, Edinburgh No part of this book may be reproduced in any form without the prior permission of Floris Books, 15 Harrison Gardens, Edinburgh British Library CIP Data available ISBN 978–178250–168–8
NXT Arrival (stylized as NXT ArRIVAL) – a professional wrestling show produced by WWE for wrestlers of its developmental brand NXT. It took place on 27 February 2014 at Full Sail University in Winter Park, Florida, and was broadcast live on the WWE Network. It was the first show exclusive to the NXT brand, as well as the first event aired on the WWE Network. Seven matches were held during the event, one of which was not broadcast (a so-called dark match). In the main event, NXT Champion Bo Dallas defended the title against Adrian Neville.

The show received good reviews from critics. Kevin Pantoja, a writer for the sports website 411mania.com, rated it 8.5 points on a ten-point scale. The average rating of all of the event's matches from Dave Meltzer's Wrestling Observer was 2½ stars. The highest rating went to the Cesaro vs. Sami Zayn match, which the magazine awarded 4¼ stars out of a possible 5.

Production

Background

WWE's developmental brand NXT was created in June 2012, replacing the company's previous developmental territory, Florida Championship Wrestling (FCW). Wrestlers working for FCW were automatically transferred to the new brand, where they continued preparing for their debuts on the main WWE roster. With the brand's launch came the weekly NXT show, the spiritual successor to FCW TV. Unlike FCW TV, however, which aired only on local television, taped NXT episodes were broadcast on national television, in the manner of Raw and SmackDown. This brought rapid growth in the popularity of WWE's new program. On 1 February 2014 it was announced that NXT's first special event, titled "NXT Arrival", would take place on 27 February and would be streamed live on the WWE Network, a service that had debuted only four days earlier.

NXT Arrival featured wrestling matches involving various wrestlers from feuds and storylines planned by the bookers and developed on the weekly NXT show. Wrestlers were portrayed as heels (negative, villainous characters, usually the crowd's enemies) or faces (positive, heroic characters, usually crowd favourites), competing against one another in series of matches designed to build tension. Feuds culminate in matches at PPV events or in series of bouts.

Feuds

Paige vs. Emma

In July 2013 a women's tournament was organized to crown the first holder of the newly created NXT Women's Championship. In the tournament final, on 24 July 2013, Paige faced Emma; Paige emerged victorious. A week later NXT Commissioner Dusty Rhodes announced that Emma and her semifinal opponent, Summer Rae, would face each other in a dance contest, with a match against Paige at stake. By decision of the fans gathered at Full Sail University, Emma won the dance-off. The rivalry between Emma, Summer Rae and Paige continued until the end of 2013. In the autumn, Sasha Banks also joined the dispute, allying with Rae to form the "BFFs"; on 9 October, Banks and Rae attacked Emma but retreated when Paige ran into the ring. The following week a tag team match took place, which the BFFs won, creating further tension between Paige and Emma. As a result of an attack by the BFFs, Emma suffered a head injury that kept her out of action in the following weeks and prevented her from using her right to a match against the women's champion. The feud resumed in January 2014, when Emma defended her contendership in a match against Natalya. On 1 February it was announced that the rivals would face each other at NXT Arrival.

Cesaro vs. Sami Zayn

The rivalry between Sami Zayn and Cesaro began on 22 May 2013, when Zayn, debuting in NXT, challenged Cesaro to a match and defeated him; after the bout the newcomer fell victim to an attack by his frustrated opponent. During the 5 June episode the rivals brawled backstage. A week later they met in a rematch, which Cesaro won. The match did not resolve the conflict, and in the following weeks the two kept attacking each other. On 17 July, Zayn, Cesaro and Leo Kruger, who had become entangled in the feud, met in a triple threat match for a shot at the NXT Championship; Kruger took advantage of the conflict between his two opponents and won. In the following weeks Zayn teamed with Bo Dallas, but soon abandoned his partner to continue his quarrel with Cesaro. On 21 August the two met in a 2-out-of-3 falls match, which Cesaro ultimately won 2–1. The rivalry then quieted down; Cesaro returned to the main WWE roster, while Zayn continued to appear in NXT. The conflict was reignited when, on 22 January, Zayn admitted that he still had not come to terms with the loss. He challenged his opponent to another match, but Cesaro would not agree to a rematch. On 12 February, over Cesaro's objections, Triple H booked a final encounter between the rivals for NXT Arrival.

Bo Dallas vs. Adrian Neville

The seeds of the feud between Adrian Neville and Bo Dallas can be traced back to May 2013, when Neville – then a holder of the NXT Tag Team Championship – chose Dallas to stand in for his injured partner, Oliver Grey, thereby making Dallas a tag team champion. Neville and Dallas quickly lost the titles; in later months Neville regained the tag team championship, while Dallas became an antagonist and won the NXT Championship. On 27 November 2013 Neville earned a shot at NXT's top title, and a week later he faced Dallas. The match ended with the champion counted out, so the title did not change hands. On 18 December the two met in a rematch, a lumberjack match. Once again the bout ended inconclusively, with interference from one of the "lumberjacks", Tyler Breeze. Neville became number-one contender again after defeating Dallas in a Beat the Clock Challenge in January 2014. On 5 February, during an episode of NXT, Triple H announced that the two would fight for the NXT Championship in a ladder match at the upcoming NXT Arrival.

Event

NXT Arrival was called by commentators Tom Phillips, William Regal and Byron Saxton, with Eden Stiles serving as ring announcer. The pre-show panel featured Bret Hart, Paul Heyman, Kevin Nash and Renee Young. In attendance were Full Sail University chancellor Garry Jones, Orange County mayor Teresa Jacobs, WWE Hall of Famers Ric Flair, Pat Patterson, Dusty Rhodes, Larry Zbyszko and Steve Keirn, and WWE wrestler John Cena. Before the live broadcast began, fans gathered at Full Sail University saw a match between Mason Ryan and Sylvester Lefort; Ryan was the winner.

The main show of NXT Arrival opened with a short promo by Triple H. The NXT producer declared that "NXT is the next generation, and it has just arrived." The opener was the advertised match between Sami Zayn and Cesaro. The first minutes of the hard-fought bout were a display of technical wrestling, but over time risky high-flying moves and both wrestlers' finishers appeared more and more often, and the two did not shy away from fighting outside the ring. In the second half of the contest Zayn hit Cesaro with a Helluva Kick, but it was not enough to put his opponent away. By continually attacking his rival's previously injured knee, Cesaro, playing the heel, secured the advantage; he ordered Zayn to give up voluntarily and lie on the mat, but Zayn kept getting up, for which he was punished with more of Cesaro's trademark uppercuts. Zayn fought off the assault only after the fourth blow; thanks to a counter he managed to hit his opponent with a sunset flip powerbomb, but it did not bring him victory. In response, Cesaro dealt his opponent two European uppercuts and a Neutralizer, securing the win. After the match ended, the wrestlers received a standing ovation from the crowd and embraced as a sign of respect.

Mojo Rawley defeated CJ Parker in a match lasting just three minutes, finishing his opponent with a hip attack and the Hyperdrive. In the third match of the event, NXT Tag Team Champions The Ascension (Konnor and Viktor) were to face unannounced opponents, who turned out to be Too Cool (Scotty 2 Hotty and Grand Master Sexay), a tag team popular in the Attitude Era. At the end of the six-minute match Viktor avoided Hotty's Worm, after which The Ascension hit Hotty with their tag team finisher, the Fall of Man.

The NXT Women's Championship match was preceded by a speech from Stephanie McMahon, who also served as the bout's ring announcer. The match between Paige and Emma began with grappling typical of women's wrestling at the time. Emma worked out an advantage over the champion with a technical submission hold and a powerbomb. Paige managed to come back, hitting the challenger with a Paige Turner and her own submission hold (later named the Paige Tapout), with which she forced Emma to submit. After the match the opponents embraced, ending their rivalry.

Tyler Breeze faced Xavier Woods. The match was interrupted by the interference of Rusev and Lana; Rusev attacked both participants, so the bout went to a no contest.

Main event

Just before the start of the event's final match, WWE Hall of Famer Shawn Michaels gave a short speech. In the main event of NXT Arrival, NXT Champion Bo Dallas defended his title against Adrian Neville. It was the first ladder match in NXT history. Bo Dallas quickly dominated the bout; after several successful attacks he tangled the challenger in the ropes and went for a ladder. Neville quickly freed himself from the ropes and attacked the NXT Champion by surprise, evening the odds. He began climbing the ladder but was soon knocked off by his opponent and trapped again, this time under the ladder. He managed to stop the champion from pulling down the belt, but soon squandered his own chance at the championship by falling from the ladder. Dallas regained the advantage by throwing Neville from the top rope to the floor outside the ring. He began climbing towards the belt but was quickly caught by his opponent; he then hit a powerbomb, throwing him into a second ladder positioned in the corner. The challenger countered the champion's next attack, then hurled him onto the fallen ladder, climbed to the top rope and hit his finisher, the Red Arrow. This allowed him to climb to the top of the ladder and pull down the belt. By winning the match, Neville became the fourth holder of the NXT Championship.

The live broadcast ended with the new NXT Champion's celebration in the ring. After NXT Arrival went off the air, John Cena and Triple H joined the celebrating Neville; both congratulated him on his success, and the NXT producer raised his hand in a gesture of victory. Neville thanked the audience for their support and headed backstage.

Reception

About 200 people who had bought tickets for NXT Arrival were not admitted to Full Sail University, because the arena could not have held that many fans. Many viewers watching the event live on the WWE Network ran into technical problems. NXT Arrival received good reviews from critics and fans. Aaron Oster of the Baltimore Sun described the spectacle as "fantastic, but marred by technical problems". Setting the WWE Network issues aside, Oster found that Arrival was "a phenomenal show that allowed the NXT roster to shine", and that all three of the most heavily promoted matches met or exceeded his expectations. He praised the commentators, the event's opener and Mojo Rawley's in-ring progress, and called the NXT Women's Championship match the best women's match in WWE in the last several years. He felt, however, that the match between The Ascension and Too Cool was a bad idea, and that the main event was good but did not stand out among other ladder matches.

Kevin Pantoja, a writer for 411mania.com, rated the event 8.5 points on a ten-point scale; the site's user rating was 7.7 points. Pantoja especially praised the Cesaro–Sami Zayn match, rating it 4¾ stars out of a possible 5 and naming it a candidate for match of the year. Like Aaron Oster, Pantoja judged the Paige–Emma bout better than the main roster's women's matches. He described the main event as "a solid but unspectacular ladder match". He did not spare criticism of the tag team encounter, arguing that the tag team champions could have come across better had the match been shorter. Justin James of Pro Wrestling Torch said that he "would gladly have paid for NXT Arrival what one has to pay for any pay-per-view event". He praised the commentary team, the performances of Sami Zayn, Cesaro, Adrian Neville and Bo Dallas, and the women's match. The average rating of all of the event's matches from Dave Meltzer's Wrestling Observer Newsletter was 2½ stars. The highest rating went to the event's opener, which the magazine awarded 4¼ stars out of a possible 5. The second-highest-rated contest was the main event, which received 3¼ stars. According to the magazine, the weakest showing was the bout between CJ Parker and Mojo Rawley, which was given one star.

Aftermath

WWE apologized for the technical problems encountered by viewers watching the event live on the WWE Network. The match between Adrian Neville and Bo Dallas was included in a WWE-produced compilation of the best NXT matches. Adrian Neville defeated Bo Dallas in a rematch, while Sami Zayn began a feud with Corey Graves. Cesaro won the André the Giant Memorial Battle Royal at WrestleMania XXX. On 7 April, the day after WrestleMania, Paige debuted on the main roster; she defeated the Divas Champion, AJ Lee, becoming the new holder of the WWE Divas Championship. After this win Paige was obliged to vacate the NXT Women's Championship. Also on 7 April, Alexander Rusev recorded his first main-roster win, and vignettes promoting Bo Dallas's debut began to air. NXT's second special event, named NXT TakeOver, took place on 29 May 2014. It began the TakeOver series of events.

Match results

See also

NXT TakeOver (event series)
List of WWE Network events

References

External links

Official WWE NXT website
Event results at WWE.com
Event photos at WWE.com

Arrival
Professional wrestling events in Florida
Professional wrestling events in the United States in 2014
WWE Network events 2014
import json
import functools

from c7n_terraform.parser import (
    HclLocator, TerraformVisitor, Parser, VariableResolver)

from .tf_common import data_dir, build_visitor


def test_parser_eof():
    data = Parser().parse_module(data_dir / "aws-s3-bucket")
    path = data_dir / "aws-s3-bucket" / "s3.tf"
    assert path in data
    assert len(data) == 3
    tf_assets = data[path]
    assert list(tf_assets) == ["resource"]
    assert list(tf_assets["resource"][0]) == ["aws_s3_bucket"]


def test_locator():
    locator = HclLocator()
    result = locator.resolve_source(
        data_dir / "aws-s3-bucket" / "s3.tf",
        ["resource", "aws_s3_bucket", "b"]
    )
    assert result["start"] == 1
    assert result["end"] == 24


def test_visitor():
    path = data_dir / "aws-s3-bucket"
    data = Parser().parse_module(path)
    visitor = TerraformVisitor(data, path)
    visitor.visit()
    blocks = list(visitor.iter_blocks(tf_kind="variable"))
    assert len(blocks) == 1
    myvar = blocks[0]
    assert myvar.name == "mybucket"
    assert myvar.default == "mybucket2"


def test_variable_resolver(aws_s3_bucket):
    resource_blocks = list(aws_s3_bucket.iter_blocks(tf_kind="resource"))
    assert len(resource_blocks) == 1
    resource = resource_blocks[0]
    assert 'bindings' in resource
    bindings = resource['bindings']
    assert len(bindings) == 1
    binding = bindings[0]
    variable_blocks = list(aws_s3_bucket.iter_blocks(tf_kind="variable"))
    assert len(variable_blocks) == 1
    variable = variable_blocks[0]
    assert binding['expr_path'] == ['bucket', 0]
    assert binding['source'] == 'default'
    assert binding['expr'] == '${var.mybucket}'
    assert binding['var']['path'] == variable['path']
    assert binding['var']['default'] == variable['default']
    assert binding['var']['type'] == variable['type']
    assert binding['var']['value_type'] == variable['value_type']


def test_variable_resolver_value_map():
    variable_resolver = functools.partial(
        VariableResolver, value_map={"mybucket": "mybucket3"})
    visitor = build_visitor(data_dir / "aws-s3-bucket", resolver=variable_resolver)
    blocks = list(visitor.iter_blocks(tf_kind="resource"))
    assert len(blocks) == 1
    assert blocks[0]['data']['bucket'] == ['mybucket3']


def test_visitor_dump(aws_s3_bucket, tmpdir):
    visitor_json = tmpdir.join('dump.json')
    aws_s3_bucket.dump(visitor_json)
    with open(visitor_json) as f:
        json.load(f)


def test_visitor_provider(aws_complete):
    providers = list(aws_complete.iter_blocks(tf_kind="provider"))
    assert len(providers) == 1
    assert providers[0].name == "aws"


def test_visitor_module(aws_complete):
    blocks = list(aws_complete.iter_blocks(tf_kind="module"))
    assert len(blocks) == 1
    assert blocks[0].name == "atlantis"


def test_visitor_terraform(aws_complete):
    blocks = list(aws_complete.iter_blocks(tf_kind="terraform"))
    assert len(blocks) == 1
    assert blocks[0].name == "terraform"


def test_visitor_output(aws_complete):
    blocks = list(aws_complete.iter_blocks(tf_kind="output"))
    assert len(blocks) == 1
    assert blocks[0].name == "bucket_arn"


def test_visitor_data(aws_complete):
    blocks = list(aws_complete.iter_blocks(tf_kind="data"))
    assert len(blocks) == 1
    assert blocks[0].name == "current"


def test_tf_json_parsing():
    path = data_dir / "tfjson-tf"
    tf_data = Parser().parse_module(path / "tf")
    tfjson_data = Parser().parse_module(path / "tfjson")
    assert tf_data[path / "tf" / "main.tf"] == tfjson_data[path / "tfjson" / "main.tf.json"]
Gmina Busko-Zdrój () is an urban-rural gmina in eastern Poland. It belongs to Busko County in Świętokrzyskie Voivodeship. As of 31 December 2011, the gmina had a population of 32,951.

Territory

According to data from 2007, the area of the gmina was 235.88 km², including:
arable land: 80.00%
forests: 11.00%
The gmina thus makes up 24.38% of the county's area.

Population

As of 31 December 2011:

Neighbouring gminas

Gmina Busko-Zdrój borders the following gminas: Wiślica, Gnojno, Nowy Korczyn, Pińczów, Solec-Zdrój, Stopnica, Chmielnik.

Notes

Busko-Zdrój
Busko-Zdrój
1 Hamilton Place
Boston, MA 02108
United States

Price: $$ (Moderate)

About Orpheum Theatre

The Orpheum Theater can easily be missed, as it's hidden at the bank entrance of the Corner Mall in Downtown Crossing. Built in 1852, it was once the home of the Boston Symphony Orchestra, and though several upgrades have been taken on over time, it is still one of the most historic venues in the city and maintains much of its old world charm. While it's been utilized in many ways over the years, including as the Opera Company of Boston, a movie theater, and a house for vaudeville shows, it's currently a live concert hall. Performers from days gone by include the Grateful Dead, the Police, and Bob Dylan. Currently you'll see primarily mid-level alternative bands like Grizzly Bear, Panic! at the Disco, and Metric, though there are frequently club nights, and comedians like Margaret Cho have been known to sell out the 2,700-seat theater. Parking is relatively easy, thanks to several lots in the area, and the venue is very close to the Downtown Crossing T stop where you can pick up the Green, Orange, or Red line. Beer can be on the expensive side here, averaging $10 a bottle, and though wine is also available there are no mixed drinks or hard alcohol.

©2014MWFC
Dress Code: Club Attire
Website: http://crossroadspresents.com/orpheum-theatre/
Editor's note: The Outlook series spotlights a country to give a deeper understanding of the business, industry and consumer trends that fuel its economy. While exploring the current challenges and opportunities facing a country's economic progress, Outlook also seeks to provide an insight into its future development. (CNN) Valued at $83 billion, Taiwan Semiconductor (TSMC) is the world's largest contract chip manufacturer. TSMC makes chips for customers like Qualcomm and Fujitsu and captures nearly 50% of the market. This year the company says it will invest $8.5 billion, much of it in R&D, to try and stay ahead of the pack. CNN's Pauline Chiou sat down with the founder of the company, 81-year-old Chairman and CEO Morris Chang. CNN: What's the biggest challenge for you in the global slowdown? Is it the slowdown in China, the Eurozone debt crisis or the U.S. economy? Chang: If you compare all those, I would still say the U.S. economy is the most important to us because our market is mainly in the U.S. Our Chinese market has been growing very fast even though it started from a very low level. If I had to rank the U.S., China, Europe and Japan in terms of market opportunities, I'd rank them in that order. CNN: How does the slowdown in global growth compare with other downturns like 2008/2009 or the Asian financial crisis in the late 1990s? Chang: As far as Taiwan is concerned, this one is worse than the financial crisis. This time, as the world economy continues to be slow, Taiwan's exports have been affected. CNN: Do you see any other opportunities in other parts of the world, for example, emerging markets? Chang: Yes, but they would be relatively small at least in the next three to five years but of course, we are not ignoring them. India, for instance, we're trying to cultivate that market and South Korea. CNN: Cross strait relations between Taiwan and China have improved. 
Are there certain incentives that China has offered to make business easier for your company? Chang: We have a factory in China. Shanghai did offer us incentives in setting up a factory (there). But I think the main benefit we have got from improvement in cross strait relations is the sales that we make in China. CNN: Back to financials, in your last call to analysts, you said you foresee a dip in the fourth quarter and first quarter of next year. What are the reasons for that? Chang: We are in a cyclical business. We are in the middle of the total information technology supply chain. Being in the middle, you're affected by the inventory situation down the supply chain. If the end market changes and goes down a little bit, it backs up the inventory. CNN: What kinds of measures have you taken to guard against cyclical activity? Chang: It's like being careful in capacity planning. We don't try to take every advantage of an upswing. In any upswing, we often don't have enough capacity. Since we're the leaders, our customers wait for us. Of course, frugality is practiced all the time. We don't excessively spend money even when there's an upswing.
# Is there a package or font which has a letter like Ł but with a double crossing?

Like above, I want to know if there is a package or font which would allow me to type a letter \L but with two crossings instead of one, i.e. a double-crossed L (L with two strokes). It would be a great letter to use for me in the paper I'm writing.

• Not a package, but you could manually input Ⱡ or make your own macro defining something like \LL. Aug 4 '19 at 15:53
• @Miztli If I try to manually input it, it gives me an error because of the unicode character. Anyway, I thought of something more like Ł, with the stroke being at an angle. Aug 4 '19 at 15:57
• You'll have to add \usepackage[utf8]{inputenc} to your preamble or compile using XeTeX or LuaTeX. As for L with a double stroke at an angle, I'm not aware of such a character, so you'd probably have to come up with a personalised way of composing/printing such a character if Ⱡ will not do. Aug 4 '19 at 15:59
• Related: (same method should work here) inserting a single unicode character with pdflatex Aug 4 '19 at 16:00
• @Miztli inputenc won't help if the font you're using doesn't have the relevant glyph. (Also, it's not needed in current distributions.) Since I suspect this isn't common, using an OpenType font that has it and XeTeX or LuaTeX would be the way to do this. Aug 4 '19 at 16:02

If it is a math symbol,

    \documentclass{article}
    \usepackage{amsmath}

    \newcommand{\LL}{\text{\normalfont\fontencoding{OT1}\itshape\makeLL}}
    \newcommand{\makeLL}{%
      \settowidth{\dimen0}{$L$}%
      \makebox[\dimen0][r]{%
        \makebox[0pt][l]{\raisebox{0.2ex}{\hspace{0.15ex}\symbol{32}}}%
        \makebox[0pt][l]{\raisebox{-0.2ex}[0pt][0pt]{\hspace{0.1ex}\symbol{32}}}%
        $L$%
      }%
    }

    \begin{document}

    $|\LL|$

    $|L|$

    \end{document}
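For completeness, and only as a rough sketch not taken from the thread: one can also fake a double-crossed L in text mode by overlaying two short dashes on an italic L with the plain-TeX \ooalign macro. The macro name \LLbar and the raise amounts are our own and will likely need tuning to the font in use:

```latex
\documentclass{article}

% \LLbar: an italic L overlaid with two short horizontal strokes.
% The raise amounts (0.55ex, 0.1ex) are guesses; adjust to taste.
\newcommand{\LLbar}{%
  \ooalign{%
    \hidewidth\raisebox{0.55ex}{\scriptsize--}\hidewidth\cr
    \hidewidth\raisebox{0.1ex}{\scriptsize--}\hidewidth\cr
    \textit{L}\cr
  }%
}

\begin{document}
An \LLbar{} in running text.
\end{document}
```

\ooalign stacks its rows on top of each other, and \hidewidth on both sides centers each stroke over the letter, so no extra packages are needed; whether the result looks acceptable depends on the text font.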
Controversial Townsend Harris principal outed by DOE By Mark Hallum Posted on April 27, 2017 Townsend Harris High School students received news of a new principal taking the lead starting May 1. After months of turmoil at one of the top high schools in the city, Townsend Harris has found a permanent replacement for the interim principal whose ouster had been sought by students, parents and elected officials. Townsend Harris High School will be leaving what many are calling a negative chapter behind after it was announced Interim Principal Rosemarie Jahoda would be leaving May 1, and a new head would be taking a permanent role in the elite school's leadership. The announcement that the C-30 process was complete and Brian Condon, 43, from the School for Tourism and Hospitality in the Bronx, would take over was made at the April 20 School Leadership Team meeting, according to the school's newspaper, The Classic. Condon was formerly an English teacher at Van Buren High School in Fresh Meadows, beginning in 2002, before becoming a principal for Tourism and Hospitality in the Bronx in 2013. "I am excited to join the Townsend Harris community and meet with students, staff and families," Condon said in a Friday release. "While it is bittersweet to be leaving Tourism and Hospitality, this is an exciting new chapter and I'm looking forward to the shared work ahead of us at Townsend Harris." Condon spoke with The Classic about his intentions in running the school on the basis of trust and respect. He emphasized listening to the needs of students and getting to know the students instead of the other way around. He proposed starting a podcast in order to not only increase communication and familiarity between him and the students, but the whole Townsend Harris community.
Condon praised The Classic for the hard work the students put in to generate reliable updates on Townsend Harris in and around the school, a major break from a March statement from a District 26 official who accused the newspaper of publishing "fake news." Jahoda, appointed in September, has been accused of refusing to protect Muslim students from harassment and jeopardizing seniors' college prospects through mismanagement of transcripts. "Brian Condon is an experienced, talented educator. I look forward to the work he'll do at Townsend Harris and thank Rosemarie Jahoda for her leadership as interim acting principal," said Superintendent Elaine Lindsey, who delivered the news to Townsend faculty, students and teachers in the school's library on Thursday. Student Union President Alex Chen explained at a February rally on the steps of City Hall how Jahoda broke from standard procedures for processing college transcripts, which resulted in prolonging the process and raising fears they might get delivered to universities too late. Two members of the school's Muslim Student Association, Tahiya Choudhury and Sangida Akter, both 17, complained about Jahoda's response when they went to her after a student was heard yelling "F— Muslims" while they were hosting a bake sale shortly after the election of Donald Trump. They said Jahoda seemed reluctant to take action. Max Kurant, sophomore class president, described Jahoda as disorganized and uncooperative with student organizations that approve clubs and plan events, as well as philosophically out of step with the school's culture. "Even though she can be a very nice person on the outside, it's very hard to get things done," Kurant said in February, "I really don't know why they want her as principal so much. This doesn't go with our culture. Even if you consider what the DOE (Dept. of Education) is looking for in a principal — for them to handle finances, to make sure the school environment is safe — she doesn't do this."
The issue at Townsend Harris over the past several months has circulated through the offices of elected officials in Queens, including Assemblywoman Nily Rozic (D-Fresh Meadows) and U.S. Rep. Grace Meng (D-Flushing). "Replacing the interim acting principal at Townsend Harris High School is a welcome move and, quite frankly, it's about time. Clearly, Rosemarie Jahoda did not serve the school well," Meng said. "Her lack of leadership, ineffectiveness and complaints from parents and teachers caused unnecessary stress and havoc, and it distracted hard working students from their important studies. "It is my hope that the situation at Townsend Harris will soon improve, and I welcome and look forward to working with the school's new principal Brian Condon," Meng said. "The education and future of our students must always be the top priority of our schools." The high school has a 100 percent graduation rate, according to the city Dept. of Education, and supports a student body of about 1,100 students. Townsend Harris is one of the top schools in the nation, ranked No. 7 in the state by a recent U.S. News & World Report and No. 366 by Newsweek for student participation and performance in the College Board's Advanced Placement program. Reach reporter Mark Hallum by e-mail at mhall[email protected]glocal.com or by phone at (718) 260–4564.
Longtable - Caption too short

I'm using longtable, but the caption width is so short that it splits up my 12-word caption into two centered lines, only 1/3 of the table width.

I would like to show the caption in one line. Any advice? Below is my code:

    \documentclass[a4paper,12pt]{article}
    \usepackage{longtable}
    \usepackage{pdflscape}
    \usepackage{graphicx}

    \begin{document}
    \begin{landscape}
    \setlength\LTleft{-40pt}
    \setlength\LTright{-40pt}
    \begin{longtable}{@{\extracolsep{\fill}}llllllllll@{}}
    \caption{Why My Caption is Squeezed Here Can I Make The Length Longer?}\\
    \hline\hline
    Short Name & Full Name & Source \\
    \hline
    \\
    \hline
    \end{longtable}
    \end{landscape}
    \end{document}

• Thank you @hooy. I just tried but it gave me error "LaTeX Error: \caption outside float." Any ideas? – sileli Apr 24 '16 at 20:38

Add

    \setlength\LTcapwidth{\linewidth}

to your code, right after (or before) setting \LTleft and \LTright.

Background: As opposed to the regular \caption, the one from longtable is limited to the width \LTcapwidth. When used in portrait mode, the default value is quite reasonable, but it is too short in landscape mode.

Complete example document:

    \documentclass[a4paper,12pt]{article}
    \usepackage{longtable}
    \usepackage{pdflscape}
    \usepackage{graphicx}

    \begin{document}
    \begin{landscape}
    \setlength\LTleft{-40pt}
    \setlength\LTright{-40pt}
    \setlength\LTcapwidth{\linewidth}
    \begin{longtable}{@{\extracolsep{\fill}}llllllllll@{}}
    \caption{Why My Caption is Squeezed Here Can I Make The Length Longer?}\\
    \hline\hline
    Short Name & Full Name & Source \\
    \hline
    \\
    \hline
    \end{longtable}
    \end{landscape}
    \end{document}

Shameless self-advertisement: Adding \usepackage{caption} would have solved the issue, too, since it ignores \LTcapwidth (unless set to a specific value).

Use the caption package and add the following line to your preamble:

    \usepackage{caption}
    \captionsetup[table]{labelfont=bf,textfont=normalfont,singlelinecheck=on,justification=raggedright}
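A related knob, not mentioned in the thread, so treat it as a sketch to check against the caption package documentation: the caption package also provides a width= key, which sets the caption width explicitly once the package is loaded, independently of \LTcapwidth:

```latex
\usepackage{caption}
% Let table captions run the full line width (caption package's width= key).
\captionsetup[table]{width=\linewidth}
```

In landscape mode \linewidth already reflects the rotated page, so this gives a full-width caption without touching longtable's own length register.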
Near beer (occasionally hyphenated; roughly "almost like beer") is the historical term for a malt-based beverage with an alcohol content below 0.5% ABV that emerged in the United States in connection with Prohibition (1919–1933). It could not be labelled "malt beer", because the use of the term "beer" as a generic designation was forbidden at the time, and it was therefore classified by the authorities as a "cereal beverage". The common name near beer, which was at the same time the brand name of one brewery's product, quickly caught on. Today the term is used in the USA as a technical umbrella term for low-alcohol and alcohol-free beverages.

History and historical brands

Even before Prohibition came into force in the United States, breweries looked for a legal substitute beverage and experimented from 1918 onward with low-alcohol or alcohol-free drinks, in order to preserve their profits and jobs under the soon-to-change conditions and to continue to meet the unchanged demand for beer or a beer substitute. In the course of this development, the Anheuser-Busch brewery of St. Louis produced "Bevo", Budweiser "Near Beer", the Pabst Brewing Company of Los Angeles "Pablo", Stroh of Detroit "Lux-o", and the two Milwaukee-based breweries Miller and Joseph Schlitz "Vivo" and "Famo" respectively. Less well-known brands of regional importance were "Chrismo", "Graino", "Barlo", "Hoppy", "Gozo", "Singo", "Golden Glow", "Quizz", "Yip", "Mannah" and "Mother's Malt". In addition, hundreds of microbreweries existed in the USA whose near beer attained only local significance. None of these brands outlasted the end of Prohibition. The sale of near beer was even permitted to children, and it was also allowed in places where beer could not have been sold. Colour and taste varied according to the proportion of malt used. Some varieties were pale and rather dry in taste, similar to today's alcohol-free beers; other varieties were darker and sweetish in taste, comparable to today's malt drinks.

Illegal fortification

A widespread illegal practice was to fortify near beer with vodka, mostly smuggled in from Canada, shortly before sale. The resulting drink was colloquially called needle beer or spiked beer. Both names derive from the fact that the vodka was injected into the bottles through the crown caps. At the White River Valley Museum in Auburn, in the US state of Washington, part of the permanent exhibition is devoted to the Prohibition era in the USA and shows a typical general store in Auburn in the 1920s. In that context, bottles of almost all near-beer brands then available are on display, and on guided tours visitors are told how the fortification was carried out. Since, under the 18th Amendment to the United States Constitution, it was not the possession or consumption of alcohol that was illegal but its production, transport and sale, both the seller (through the fortification) and the customers (through the transport of needle beer or spiked beer) broke the law in this way.

Present day in the USA

After the repeal of Prohibition, near beer immediately lost its importance; no continued interest in low-alcohol or alcohol-free beer can be documented for the USA in the period after 1933. From the 2000s on, the term near beer established itself in American trade circles as an umbrella term for all kinds of low-alcohol and alcohol-free beverages. Since alcohol consumption in the USA has been declining since the 1990s, the Anheuser-Busch brewery began in 2010 to revive its historical Near Beer product, and since 2016 has marketed it as Anheuser-Busch's Budweiser Prohibition Brew, now with an explicit reference to the Prohibition era that was once so damaging for the company.

Spread in Europe

In Iceland, beer was not legalized until 1989, and until then a near beer called Bjórlíki existed there as well. Since Iceland has no armed forces of its own, traditionally maintains close ties with the USA, and stood under the protection of the US armed forces from 1951 to 2006, near beer imported from the USA had been known on the island since the 1920s. Fortifying this drink with vodka was likewise common practice there until beer was legalized. The largest French brewery, Kronenbourg in Strasbourg, has since 2006 produced "Tourtel" (0.4% ABV), a near beer available in Europe that is modelled on the American recipes of the 1930s.

Similar beverages: alcohol-free beer, Dünnbier (small beer), kvass, light beer, malt beer, root beer.
\section{Introduction}\label{section1}\label{int}
When there is no duplication, linear unichromosomal genomes are represented by permutations, where each number represents a gene or a marker. To compare two linear unichromosomal genomes with an identical set of genes, one can count the number of their dissimilarities, or \textit{breakpoints}. More precisely, for two linear unichromosomal genomes $G$ and $G'$ with the same set of genes, a pair of adjacent genes in $G$ is called a breakpoint of $G$ with respect to $G'$ if these genes are not adjacent in $G'$. It is clear that the number of breakpoints of $G$ with respect to $G'$ is equal to the number of breakpoints of $G'$ with respect to $G$. Introduced by Sankoff and Blanchette in~\cite{sankoff97} in 1997, the breakpoint distance is the number of breakpoints in the set of gene adjacencies of two unichromosomal genomes with an identical set of genes. On the other hand, we can use the definition of \emph{median} to compare more than two genomes: given a set of genomes $A=\{G_1,...,G_k\}$ (all genomes are in the symmetric group $S_n$) and a genomic distance $d$, a median of $A$ is a genome that minimizes the total distance function $d_T(\cdot,A):=\sum_{i=1}^k d(\cdot,G_i)$. The minimum value of $d_T(\cdot,A)$ is called the median value of the set $A$. Motivated by the notion of Steiner points, the median problem is the problem of finding a median for a given set $A$ of genomes. The median problem was first used by Sankoff \emph{et al.}~\cite{sankoff96} for evolutionary models of gene orders. The goal was to obtain more information about the ancestors of a given set of genomes and also to apply it to \emph{small phylogeny problems}.
In the small phylogeny problem the topology of the ancestral tree is given, and the ancestral nodes (vertices of degree greater than 1) should be estimated such that the total sum of distances over all pairs of neighbours in the tree attains its minimum. The tree obtained in this way comes closest to preserving the parsimony principle along its paths. The median problem has been extensively studied for different genome distances, and for many of them, including the breakpoint distance on linear unichromosomal genomes, it has been shown that the median problem is NP-hard~\cite{bryant98,caprara03,tannier09,fertincombinatorics}. This paper concerns the breakpoint median problem for linear unichromosomal genomes represented by unsigned permutations. Despite its importance in parsimony-based phylogenetics, the median suffers from several disadvantages. The first one is that it is very hard to find a median for most genomic distances. In fact, as we mentioned, the median problem is NP-hard in many cases. Another problem is that although a median genome may carry valuable information from all given genomes (inputs), it is not necessarily close to the ancestral genome. In other words, it is not a good estimator for the true ancestor. Zheng and Sankoff~\cite{zheng11} provided simulation studies, for a random model of evolution, showing that their heuristic median does not approximate the ancestor after long-time evolution of genomes, while for genomes involved in evolution for a shorter period of time, medians may approximate the true ancestor.
Later, Jamshidpey and Sankoff~\cite{jam13} proved that when the evolution is modelled by some continuous time random walks on $S_n$ (the group of permutations of length $n$), including reversal, DCJ, and transposition random walks (here by transposition we mean the mathematical transposition), until time $cn$ of the evolution, for $c<1/4$, the true ancestor can be approximated asymptotically almost surely by a median, while for $c>0.61$, the medians are not close to the true ancestor. They conjectured that the median solutions lose their credibility to approximate the ancestor right after $n/4$. It is worth mentioning that, although the medians will not be useful to approximate the true ancestor for some random evolutionary models, they may still carry some important information about ancestors. More recently, Jamshidpey and Sankoff found all possible positions of asymptotic medians of $k$ random permutations sampled from high-speed random walks~\cite{jam16, jam17}. Determining all possible locations of medians with respect to a random sample of genomes, their results significantly reduce the median search space for a number of edit distances on groups of permutations or signed permutations. Another obstacle is that medians are not unique, and different medians may be a considerable distance from each other~\cite{haghighi12}. Thus, for a set of genomes having many medians, it is not clear which of them is closest to the ancestor. Still another concern is that not all medians carry useful information about the ancestor or the input genomes.
However, in their simulations they observed that even it is a minority, there still exist medians that are far from any of these $k$ random permutations, and from the biological point of view, studying these medians is more interesting since they have information from all of the given permutations. They observed that as the size of permutations increases, the proportion of these medians far from the corners decreases. Jamshidpey et al.~\cite{jam14} investigated this conjecture further and found a family of breakpoint median points using the new concept of \emph{accessible points.} This concept may also help us to find a median far from corners. They partially proved the conjecture stated in~\cite{haghighi12}, that the median value of $k$ permutations chosen uniformly at random from $S_n$ is almost $(k-1)(n-1)$ ($2n$ for three random permutations), with high probability, after a convenient rescaling of the breakpoint distance. They showed that any accessible point from a set of $k$ random permutations is an asymptotic median of those $k$ random permutations, with high probability. They proved that any median of $k$ random permutations must take almost all of its adjacencies from at least one of the $k$ random permutations. Making use of this mathematical property in~\cite{jam14}, Larlee \emph{et al.}~\cite{larlee14} proposed a construction for a genome which includes gene order information from all three given genomes such that the total distance is approximately $2.25 n$, where $n$ is the size of the permutations, that is $0.25n$ bigger than the median value. Motivated by the conjecture of Haghighi and Sankoff in \cite{haghighi12}, one of the objectives of this paper is to study this conjecture starting with two random permutations, as a first step, and in doing so, construct tools and results that can be used later to help in the general problem (for more than two permutations). 
In particular, in this paper, we study the accessible points of two random permutations. We introduce different notions to study the breakpoint median of two or more permutations. We provide an equivalent definition for the concept of accessibility of two permutations. Given a subset $I$ of adjacencies of the identity permutation $id=id^{(n)}$ (later we call subsets of this kind segment sets), we classify the set of all adjacencies of the symmetric group $S_n$, with respect to $I$, into four types. Then for a permutation $\xi^{(n)}$ chosen uniformly at random from $S_n$ we compute the expectation and variance of the number of adjacencies of each type in $\xi^{(n)}$. We derive a convergence theorem for the normalized number (after dividing by $n$) of the different types of adjacencies of $\xi^{(n)}$ with respect to $I$ (for both random and deterministic choices of $I$). This leads us to discuss further the possible segment sets $I$ chosen from the identity for which one can construct a permutation $\pi$ in the set of all permutations lying on partial geodesics connecting $id$ and $\xi^{(n)}$, denoted by $\overline{[id, \xi^{(n)}]}$, such that the set of adjacencies of $\pi$ contains $I$ and the remaining adjacencies of $\pi$ are contained in the set of adjacencies of $\xi^{(n)}$. Taking convenient segment sets $I$ (whose size is neither very small nor very big), we can say that $\pi$ is located far from $id$ and $\xi^{(n)}$. In this way, we can estimate an upper bound for the probability of existence of a permutation $\pi$ in $\overline{[id,\xi^{(n)}]}$ far from the corners. We see that this probability converges to $0$ as $n$ tends to $\infty$.
\section{Preliminaries}\label{prem}
A permutation of length $n$ is a bijection on $[n]:=\{1,...,n\}$. A permutation $\pi$ is denoted by
\[
{1 \ \ 2 \ \ ... \ \ \ n \choose \pi_1 \ \pi_2 \ \ ... \ \ \pi_n},
\]
or simply by $\pi_{1} \ \pi_2 \ \ ... \ \ \pi_{n}$.
We represent a linear unichromosomal genome with $n$ genes or markers by a permutation of length $n$. Each number represents a gene or a marker in the genome. The set of all permutations of length $n$ with the function composition operator is a group called the \textit{symmetric group} of order $n$ denoted by $S_n$. We denote by $id:=id^{(n)}$ the identity permutation $1 \ 2 \ 3 \ ... \ n$. For a permutation $\pi := \pi_1 \ ... \ \pi_n$, any unordered pair $\{\pi_i , \pi_{i+1}\}=\{\pi_{i+1} , \pi_i\}$, for $i=1, ..., n-1$, is called an adjacency of $\pi$. We denote by $\mathcal{A}_{\pi}$ the set of all adjacencies of $\pi$ and by $\mathcal A_{x_1,...,x_k}$ the set of all common adjacencies of $x_1,...,x_k\in S_n$. For any $x,y \in S_n$, the \textit{breakpoint distance} (bp distance) between $x$ and $y$ is defined by $d^{(n)}(x,y):=n-1-|\mathcal A_{x,y}|$ which is a pseudometric. We say a pseudometric (or a metric) $\rho$ is left-invariant on a group $G$ if for any $x,y,z \in G$, $\rho(x,y)=\rho(zx, zy)$. The bp distance is a left-invariant pseudometric on $S_n$. We say two permutations $\pi$ and $\pi'$ in $S_n$ are equivalent, denoted by $\pi\sim \pi'$, if $d^{(n)}(\pi,\pi')=0$. In other words they are equivalent if $\pi_i=\pi'_{n+1-i}$, for $i=1,...,n$. The equivalence class containing permutation $\pi$ is denoted by $[\pi]$. The set of all equivalence classes of $S_n$ under $\sim$, denoted by $\hat{S}_n:=S_n/\sim$, endowed with $d^{(n)}$ is a metric space. A discrete metric space $(S, \rho)$ (i.e. a metric space $S$ with metric $\rho: S\times S \rightarrow \mathbbm{N}_0:=\mathbbm{N}\cup \{0\}$) is said to be a \emph{discrete geodesic space}, if for any two points $x,y\in S$, there exists a finite subset of $S$ containing $x$ and $y$ that is isometric with the discrete line segment $[0, 1, ..., \rho(x, y)]$ ($\mathbbm{N}_0$ is endowed with the standard metric $dist(m, n) :=|m-n|$). 
In other words, it is a geodesic space if for any two points $x,y \in S$ with $\rho(x,y)=k \in \mathbbm{N}_0$, there exists a finite chain of length $k$ in $S$, namely $z_0=x,z_1,...,z_k=y$, such that $\rho(z_i,z_{i+1})=1$, for $i=0,...,k-1$. Any chain in $S$ with this property is called a geodesic between $x$ and $y$. Indeed, a countable metric space is a discrete geodesic space if and only if it is isometric with a connected graph. Of course, one side of this is more obvious. For the other side (sufficiency), construct a graph $G$ from a countable discrete geodesic metric space $(S,\rho)$ whose vertices are the points of $S$ and in which a pair of points $x,y\in S$ are connected by an edge if $\rho(x,y)=1$. The graph $G$ endowed with the graph distance is isometric with $(S,\rho)$, as the shortest paths between two vertices $x,y$ coincide with the geodesics between $x,y\in S$. When a discrete metric space $(S, \rho)$ is not geodesic, as in the case of $\hat S_n$ endowed with the bp distance~\cite{jam14}, the concept of a geodesic between two points $x$ and $y$ can be extended to the concept of a \emph{partial geodesic} or \emph{geodesic patch} (p-geodesic)~\cite{jam14}, that is, a maximal subset of $S$ containing $x$ and $y$ which is isometric to a subsegment (not necessarily contiguous) of the line segment $[0,...,\rho(x,y)]$. In other words, a p-geodesic between $x$ and $y$ is a maximal chain $z_0=x,z_1,...,z_k=y$ in $S$ such that
\[
\sum\limits_i \rho(z_i,z_{i+1})=\rho(x,y).
\]
Note that the former form of the definition is very general and can be extended to general metric spaces, i.e. for a general metric space a p-geodesic between two points $x$ and $y$ is a maximal subset of the metric space which can be isometrically embedded into the real interval $[0,\rho(x,y)]$ (where $\mathbbm{R}_+$ is endowed with the Euclidean topology).
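For small $n$ these definitions can be checked mechanically. The following Python sketch (the helper names are ours, not from the paper) computes the bp distance $d^{(n)}$ of the previous paragraph and enumerates, by brute force over $S_n$, all permutations $z$ with $d^{(n)}(x,y)=d^{(n)}(x,z)+d^{(n)}(z,y)$; these are exactly the geodesic points discussed next, and, for two permutations, their breakpoint medians.

```python
from itertools import permutations

def adjacencies(p):
    """Unordered adjacent pairs {p_i, p_{i+1}} of a permutation p."""
    return {frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)}

def bp_distance(x, y):
    """Breakpoint distance d(x, y) = n - 1 - |common adjacencies|."""
    return len(x) - 1 - len(adjacencies(x) & adjacencies(y))

def geodesic_points(x, y):
    """Brute-force enumeration of all z in S_n with
    d(x, y) = d(x, z) + d(z, y); for two permutations these
    are exactly the breakpoint medians of {x, y}."""
    n = len(x)
    d = bp_distance(x, y)
    return [z for z in permutations(range(1, n + 1))
            if bp_distance(x, z) + bp_distance(z, y) == d]

x, y = (1, 2, 3, 4), (3, 4, 2, 1)
# Adjacencies of x: {1,2}, {2,3}, {3,4}; of y: {3,4}, {4,2}, {2,1}.
# Two are shared ({1,2} and {3,4}), so d(x, y) = 3 - 2 = 1.
print(bp_distance(x, y))           # -> 1
print(x in geodesic_points(x, y))  # -> True
```

Note that a reversed permutation has the same adjacency set, so the sketch returns distance $0$ for equivalent permutations, reflecting that $d^{(n)}$ is a pseudometric on $S_n$.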
Since, our spaces of interest are the finite symmetric groups, we only work on discrete metric spaces in this paper, and so the second form of the definition for p-geodesics is suitable for us. For any two points $x,y$ in an arbitrary metric space $(S, \rho)$ there exists at least one p-geodesic between them, since the trivial chain of length one, $z_0=x,z_1=y$, always exists. If this chain is maximal then the p-geodesic $z_0=x,z_1=y$ is called trivial. Only non-trivial p-geodesics, those containing at least three points of the space, are interesting for us. Any point on a p-geodesic between $x$ and $y$ is called a geodesic point of $x$ and $y$. In the case of permutations (or permutation classes), we also call a geodesic point, a geodesic permutation (or a geodesic permutation class). Note that any geodesic is a p-geodesic, and for any geodesic point of $x$ and $y$, say $z$, we have $\rho(x,y)=\rho(x,z)+\rho(z,y)$. We denote by $\overline {[x,y]}_S$ the set of all geodesic points of $x$ and $y$ in a metric space $(S,\rho)$, and in particular for $x,y\in S_n$, we denote by $\overline {[x,y]}^*$ or $\overline {[[x],[y]]}^*=\overline {[u,v]}^*$ the set of all geodesic points of $u=[x],v=[y]\in \hat S_n$, that is the set of all permutation classes lying on partial geodesics connecting $[x]$ and $[y]$ in $\hat S_n$. In addition, for $x,y\in S_n$, we denote \[ \overline {[x,y]}:=\{z\in S_n: d^{(n)}(x,y)=d^{(n)}(x,z)+d^{(n)}(z,y)\}. \] In other words, $z\in \overline {[x,y]}$ if and only if $[z]\in \overline {[x,y]}^*$. For a metric (or pseudometric) space $(S,\rho)$, let us define the total distance of a point $x\in S$ to a finite subset $A\subset S$ by \[ \rho_T(x,A):=\sum\limits_{y\in A} \rho(x,y). \] A \emph{median} of a finite subset $A\subseteq S$ is a point of $S$ (not necessarily unique) whose total distance to $A$ takes the infimum (respectively, minimum for a finite space $S$), i.e. a point $x\in S$ such that \[ \rho_T(x,A)=\inf_{y\in S} \rho_T(y,A). 
\] For the finite space $S$, ``\textit{inf}'' is replaced by ``\textit{min}'' in the above definition, that is $x\in S$ is a median of $A$ if it minimizes the total distance function $\rho_T(.,A)$. Furthermore, the median value of $A$, denoted by $\mu(A)$, is the infimum (respectively, minimum) value of the total distance function to $A$. We denote by $\mathcal{M}_{S,\rho}(A)$ the set of all medians of $A\subset S$. In particular, we denote by $d_T^{(n)}(x,A)$ the total breakpoint distance of permutation $x\in S_n$ to $A\subset S_n$, and by $\mathcal{M}_n(A)$ the set of all breakpoint medians of $A$, that is $\mathcal{M}_n(A):=\mathcal{M}_{S_n,d^{(n)}}(A) $. There always exists a median (not necessarily unique) for any subset of a finite metric space, while this is not true for general infinite metric spaces. In the simple case of two points $x$ and $y$ in a general metric space, it is clear from the definition that every median of $x$ and $y$ is a geodesic point of them and vice versa. That is, $\overline {[x,y]}_S$ is the set of medians of $x$ and $y$. Medians play an important role in small and large phylogeny problems. In some evolutionary models, at least one of the medians of some species carries valuable information about their first common ancestor or even about the phylogenetic tree. According to some simulation studies, when the symmetric group is endowed with the bp-distance, Haghighi et al.~\cite{haghighi12} conjectured that a major proportion of bp medians of $k$ random permutations lie around these $k$ random permutations (corners). Therefore, it seems hard to find a median far from any of these $k$ random permutations. Jamshidpey et al.~\cite{jam14} investigated this further and found a family of bp median points using the new concept of accessible points. This concept may also help to find a median far from all random permutations. More precisely, let $X$ be a subset of $\hat S_n$. 
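For small $n$ these notions can be explored by exhaustive search. The sketch below (helper names are ours; it assumes the unsigned breakpoint distance $d^{(n)}(x,y)=(n-1)-|\mathcal{A}_{x,y}|$, with $\mathcal{A}_{x,y}$ the set of common adjacencies) computes the median value $\mu(A)$ and all bp medians of a subset $A\subset S_n$:

```python
from itertools import permutations

def adjacencies(p):
    # Unordered adjacent pairs {p_i, p_{i+1}}; invariant under reversal,
    # so the set represents the permutation class [p].
    return {frozenset(a) for a in zip(p, p[1:])}

def bp_dist(p, q):
    # Breakpoint distance: (n - 1) minus the number of common adjacencies.
    return len(p) - 1 - len(adjacencies(p) & adjacencies(q))

def medians(A, n):
    """Median value mu(A) and all bp medians of A in S_n, by brute force."""
    total = {x: sum(bp_dist(x, a) for a in A)
             for x in permutations(range(1, n + 1))}
    mu = min(total.values())
    return mu, [x for x, t in total.items() if t == mu]
```

For a two-point set $A=\{x,y\}$ every median returned is a geodesic point of $x$ and $y$, in line with the remark above.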
Following \cite{jam14}, we say $z \in \hat S_n$ is $1$-accessible from $X$ if there exists a natural number $m$, a finite sequence $y_1,..., y_m \in X$, and a finite sequence $z_1=y_1, ..., z_m=z \in \hat S_n$ such that $z_{i+1} \in \overline{[z_i,y_{i+1}]}^*$, for $i=1 ... m-1$ (See Fig.~\ref{fig2}). The set of all $1$-accessible points of $X$ is denoted by $Z(X)$. Let $Z_0(X):=X$, and, by induction, for $r \in \mathbbm{N}_0$, let $Z_{r+1}(X):=Z(Z_r(X))$. By definition, we have \[ \bigcup\limits_{x,y \in Z_r(X)}\overline{[x,y]}^* \subset Z_{r+1}(X). \] A permutation class $z$ is accessible from $X$ if there exists a natural number $r$ such that $z \in Z_r(X)$. We denote the set of all accessible points by $\bar Z(X) = \cup_{r \in \mathbbm{N}_0} Z_r(X)$. \begin{figure}[ht!] \begin{scriptsize} \begin{center} \begin{tabular}{ll} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm] \clip(-0.54,-1.8) rectangle (9.08,2.76); \draw [shift={(1.67,0.78)}] plot[domain=0:2.85,variable=\t]({1*0.63*cos(\t r)+0*0.63*sin(\t r)},{0*0.63*cos(\t r)+1*0.63*sin(\t r)}); \draw [shift={(2.39,-0.33)}] plot[domain=0.47:2.03,variable=\t]({1*1.93*cos(\t r)+0*1.93*sin(\t r)},{0*1.93*cos(\t r)+1*1.93*sin(\t r)}); \draw [shift={(3.96,-0.17)}] plot[domain=0.32:1.98,variable=\t]({1*1.71*cos(\t r)+0*1.71*sin(\t r)},{0*1.71*cos(\t r)+1*1.71*sin(\t r)}); \draw [shift={(5.68,-0.03)}] plot[domain=0:2.06,variable=\t]({1*1.32*cos(\t r)+0*1.32*sin(\t r)},{0*1.32*cos(\t r)+1*1.32*sin(\t r)}); \draw [rotate around={-8.06:(4.06,0.31)},fill=black,fill opacity=0.05] (4.06,0.31) ellipse (2.9cm and 0.5cm); \draw [rotate around={-4.38:(4.39,0.47)}] (4.39,0.47) ellipse (3.3cm and 1.4cm); \draw (0.28,0.7) node[anchor=north west] {$X$}; \draw (1.14,0.86) node[anchor=north west] {$=z_1$}; \draw (6.88,1.42) node[anchor=north west] {$=z$}; \draw (0.06,2.56) node[anchor=north west] {$\mathbf{\hat S_n}$}; \fill [color=black] (1.06,0.96) circle (2.5pt); \draw[color=black] (1.1,0.6) node 
{$y_1$}; \fill [color=black] (2.3,0.78) circle (2.5pt); \draw[color=black] (2.64,0.78) node {$y_2$}; \fill [color=black] (4.12,0.54) circle (2.5pt); \draw[color=black] (4.5,0.56) node {$y_3$}; \fill [color=black] (5.58,0.36) circle (2.5pt); \draw[color=black] (5.96,0.32) node {$y_5$}; \fill [color=black] (7,-0.02) circle (2.5pt); \draw[color=black] (7.34,-0.2) node {$y_6$}; \fill [color=black] (1.53,1.4) circle (2.5pt); \draw[color=black] (1.68,1.76) node {$z_2$}; \fill [color=black] (3.27,1.39) circle (2.5pt); \draw[color=black] (3.38,1.76) node {$z_3$}; \fill [color=black] (5.06,1.13) circle (2.5pt); \draw[color=black] (5.32,1.5) node {$z_4$}; \fill [color=black] (3.02,0.16) circle (2.5pt); \draw[color=black] (3.32,0.22) node {$x_i$}; \fill [color=black] (3.88,-0.12) circle (2.5pt); \draw[color=black] (4.28,-0.06) node {$x_j$}; \fill [color=black] (5.16,-0.22) circle (2.5pt); \draw[color=black] (5.64,-0.22) node {$x_k$}; \fill [color=black] (6.51,0.99) circle (2.5pt); \draw[color=black] (6.78,1.34) node {$z_5$}; \end{tikzpicture} & \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm] \clip(2.32,-2.88) rectangle (10.2,3.92); \draw (4.94,3.38)-- (3.02,0.48); \draw (4.18,2.23)-- (6.74,-2.26); \draw (5.69,-0.42)-- (8.16,2.88); \draw (6.99,1.31)-- (9.52,-0.28); \draw (8.55,0.33)-- (3.02,0.48); \draw (4.36,0.44)-- (4.26,-1.82); \fill [color=black] (4.94,3.38) circle (2.5pt); \draw[color=black] (5.2,3.74) node {$x_1$}; \fill [color=black] (3,2) circle (2.5pt); \draw[color=black] (2.68,2.3) node {$x_2$}; \fill [color=black] (3.02,0.48) circle (2.5pt); \draw[color=black] (2.72,0.3) node {$x_3$}; \fill [color=black] (4.26,-1.82) circle (2.5pt); \draw[color=black] (4.32,-2.2) node {$x_4$}; \fill [color=black] (6.74,-2.26) circle (2.5pt); \draw[color=black] (6.9,-2.7) node {$x_5$}; \fill [color=black] (8.26,-1.72) circle (2.5pt); \draw[color=black] (8.36,-2.2) node {$x_6$}; \fill [color=black] (9.52,-0.28) circle (2.5pt); \draw[color=black] 
(10,-0.28) node {$x_7$}; \fill [color=black] (9.24,1.58) circle (2.5pt); \draw[color=black] (9.48,1.9) node {$x_8$}; \fill [color=black] (8.16,2.88) circle (2.5pt); \draw[color=black] (8.26,3.24) node {$x_9$}; \fill [color=black] (6.68,3.32) circle (2.5pt); \draw[color=black] (6.82,3.68) node {$x_{10}$}; \fill [color=gray] (4.18,2.23) circle (2.5pt); \draw[color=gray] (4.02,2.64) node {$z_2$}; \fill [color=gray] (5.69,-0.42) circle (2.5pt); \draw[color=gray] (6.2,-0.36) node {$z_3$}; \fill [color=gray] (6.99,1.31) circle (2.5pt); \draw[color=gray] (6.82,1.66) node {$z_4$}; \fill [color=gray] (8.55,0.33) circle (2.5pt); \draw[color=gray] (8.72,0.68) node {$z_5$}; \fill [color=gray] (4.36,0.44) circle (2.5pt); \draw[color=gray] (4.34,0.86) node {$z_6$}; \fill [color=gray] (4.31,-0.76) circle (2.5pt); \draw[color=gray] (3.3,-0.7) node {$z_7=z$}; \end{tikzpicture} \end{tabular} \end{center} \end{scriptsize} \caption{Accessibility for 10 points: original definition (Figure from~\cite{jam16}).\label{fig2}} \end{figure} Here we represent accessible points in a slightly more illustrative way. That is, for a set of given permutation classes $X$, let \begin{equation} \mathcal{Z}(X)=\bigcup\limits_{x,y\in X} \overline{[x,y]}^*. \end{equation} Then setting $\mathcal{Z}_0(X):=X$, by induction, we define $\mathcal{Z}_{k+1}(X)=\mathcal{Z}(\mathcal{Z}_k(X))$ for $k\in \mathbbm{N}_0$, that is \[ \mathcal{Z}_{k+1}(X):= \bigcup\limits_{x,y\in \mathcal{Z}_k(X)} \overline{[x,y]}^*. \] Finally, the set of all accessible points is defined by \[ \bar{\mathcal{Z}}(X):=\bigcup\limits_{n\in \mathbbm{N}_0} \mathcal{Z}_n(X). \] Obviously, these two definitions are not restricted to the case of the symmetric groups and can be considered for a general metric (pseudometric) space $(S,\rho)$. We only need to replace $\hat{S}_n$ and $\overline{[.,.]}^*$ by $S$ and $\overline{[.,.]}_S$, respectively. 
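The iteration $\mathcal{Z}_0(X)=X$, $\mathcal{Z}_{k+1}(X)=\mathcal{Z}(\mathcal{Z}_k(X))$ can be run verbatim on any finite metric space. A minimal sketch (our own illustration, on a toy space rather than on $\hat S_n$: the $3\times 3$ integer grid with the $\ell_1$ distance):

```python
def geodesic_points(x, y, space, dist):
    # \overline{[x,y]}: points z with dist(x,z) + dist(z,y) == dist(x,y).
    return {z for z in space if dist(x, z) + dist(z, y) == dist(x, y)}

def accessible(X, space, dist):
    """Iterate Z_{k+1}(X) = union of geodesic points over pairs of Z_k(X)
    until the set stabilises; the fixed point is the set of accessible points."""
    Z = set(X)
    while True:
        nxt = set().union(*(geodesic_points(x, y, space, dist)
                            for x in Z for y in Z))
        if nxt == Z:
            return Z
        Z = nxt

# Toy space: the 3x3 grid under the L1 distance.  Already the first step
# Z_1 of the two opposite corners recovers the whole grid.
grid = [(a, b) for a in range(3) for b in range(3)]
l1 = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
```

Here the order of accessibility of the two corners is $1$, mirroring the $\partial A$ example discussed below for convex subsets of $\mathbbm{R}^n$.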
The latter definition gives a new representation of the former notion of accessibility, and, in fact, these two definitions are equivalent in general. We have the following proposition. \begin{proposition}\label{accessibility equivalence} Let $S$ be a metric space. Then $\bar{\mathcal{Z}}(X)= \bar Z(X)$, for any $X\subset S$. \end{proposition} To prove the above proposition, we need the following lemmas. \begin{lemma}\label{lemma accessible 1} For $X\subset Y\subset S$, we have $\mathcal{Z}(X)\subset Z(Y)$. \end{lemma} \begin{proof} By definition of $\mathcal{Z}(X)$, it is clear that $\mathcal{Z}(X)\subset \mathcal{Z}(Y)$. Also, if $z\in \mathcal{Z}(Y)$, then there exist $x,y\in Y$ such that $z\in \overline{[x,y]}_S$. Therefore, $z$ is a 1-accessible point of $Y$, i.e. $z\in Z(Y)$. \end{proof} \begin{lemma}\label{lemma accessible 2} Let $X\subset S$ be such that for any $x,y\in X$, $\overline{[x,y]}_S\subset X$. Then $\mathcal{Z}(X)=Z(X)=X$. In particular, $\mathcal{Z}(\bar{\mathcal{Z}}(X))=Z(\bar{\mathcal{Z}}(X))=\bar{\mathcal{Z}}(X)$. \end{lemma} \begin{proof} By the last lemma, it is sufficient to prove $Z(X)=X$, and this is clear by definition: if $z\in Z(X)$, then there exist $m\in \mathbbm{N}$, a finite sequence $y_1,..., y_m \in X$, and a finite sequence $z_1=y_1, ..., z_m=z \in S$ such that $z_{i+1} \in \overline{[z_i,y_{i+1}]}_S$. Hence, by assumption, $z_1,...,z_m=z$ are all in $X$. Now, for any two points $x,y\in \bar{\mathcal{Z}}(X)$, there exists $k\in \mathbbm{N}$ such that $x,y\in \mathcal{Z}_k(X)$, and thus $\overline{[x,y]}_S \subset \mathcal{Z}_{k+1}(X)\subset \bar{\mathcal{Z}}(X)$. This completes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{accessibility equivalence}] By Lemma \ref{lemma accessible 1}, $\mathcal{Z}(X)\subset Z(X)$, and hence, by induction and applying the same lemma repeatedly, we have $\mathcal{Z}_k(X)\subset Z_k(X)$, for any $k\in \mathbbm{N}$.
Therefore, $\bar{\mathcal{Z}}(X)\subset \bar{Z}(X)$. To prove the other side, let $z\in Z(X)=Z_1(X)$. There exist $m\in \mathbbm{N}$, $y_1,..., y_{m+1} \in X$, and $z_1=y_1, ..., z_{m+1}=z \in S$ such that $z_{i+1} \in \overline{[z_i,y_{i+1}]}_S$. Thus, for $i=2,...,m+1$, $z_i\in \mathcal{Z}_{i-1}(X)$, and in particular, $z\in \mathcal{Z}_{m}(X)$, and hence $Z_1(X)\subset \bar{\mathcal{Z}}(X)$. Now, by Lemma \ref{lemma accessible 1} and Lemma \ref{lemma accessible 2}, \[ Z_2(X)=Z(Z_1(X))\subset Z(\bar{\mathcal{Z}}(X))=\bar{\mathcal{Z}}(X). \] Repeating this argument, for any $r\in \mathbbm{N}_0$, $Z_r(X)\subset \bar{\mathcal{Z}}(X)$ and therefore, $\bar{Z}(X)\subset \bar{\mathcal{Z}}(X)$. \end{proof} We say $m\in \mathbbm{N}_0$ is the order of accessibility of a set $X$ in a metric space $S$, if $m$ is the minimum number such that $\mathcal{Z}_m(X)=\mathcal{Z}_{m+r}(X)$, for any $r\in \mathbbm{N}_0$. In other words, $m$ is the order of accessibility of $X$, if it is the minimum number such that $\bar{\mathcal{Z}}(X)=\mathcal{Z}_m(X)$. If there is no such number, we say the order of accessibility of $X$ is $\infty$. In the case that $S$ is a finite metric space, as in the case of $\hat{S}_n$ endowed with the bp metric, the order of accessibility of any subset of $S$ is finite. As an example, let $A$ be a bounded closed convex subset of $S=\mathbbm{R}^n$ (endowed with the Euclidean topology), and let $\partial A$ be its boundary. Then the order of accessibility of $\partial A$ is $1$, and $\bar{Z}(\partial A)=A$. It is shown in \cite{jam14} that for $k$ permutations $\{x_1,...,x_k\}$ in $S_n$ with maximum distance $n-1$ between any two of them, a permutation $x$ is a median if and only if $\mathcal{A}_x \subset \mathcal{A}_{x_1,...,x_k}$. The situation is similar for $k$ random permutations, since the expected number of common adjacencies for any two random permutations is very small~\cite{jam14}. 
On the other hand, for $x,y\in S_n$, a permutation $\pi$ lies on $\overline{[x,y]}$ if and only if \[ \mathcal{A}_{x,y}\subset \mathcal A_{\pi} \subset \mathcal{A}_x \cup \mathcal{A}_y \] (see Lemma 2~\cite{jam14}). This shows that the idea of accessible points can be useful in order to find a median far from corners. For example, in the case of three permutations with maximum distance $n-1$ from each other, we can start from two of them, say $x$ and $y$, and find a permutation $\pi \in \overline{[x,y]}$ that is not very close to $x$ and $y$. This must be done by choosing carefully adjacencies from both $x$ and $y$ (including all common adjacencies of them in the case of three random permutations) such that these adjacencies together construct permutation $\pi$. Then we should try to pick some of the adjacencies of $\pi$ (with a sufficient number of adjacencies from each of $x$ and $y$) and also pick some adjacencies from the third permutation, say $z$, to construct a permutation far from all $x,y,z$ whose set of adjacencies is contained in $\mathcal{A}_x\cup \mathcal{A}_y\cup \mathcal{A}_z$, and therefore it is a median of these three points. Common adjacencies of permutations can be regarded as a set of segments. A segment (of $S_n$) is a set of consecutive adjacencies of a permutation of length $n$. More explicitly, a segment of length $k\in [n-1]$ is a set of adjacencies \[ \{\{n_0,n_1\},\{n_1,n_2\},...,\{n_{k-2},n_{k-1}\},\{n_{k-1},n_k\}\}, \] where $n_0,n_1,...,n_k\in [n]$ are different natural numbers. It can also be denoted by $[n_0,n_1,...,n_k]$ or equivalently by $[n_k,...,n_1,n_0]$. In particular, any segment of length $n-1$ is the set of adjacencies of a permutation class and vice versa. By convention, we assume that the empty set $\emptyset$ is a segment. We say a segment $s$ is a subsegment of a segment $s'$ if $s\subset s'$. For a given permutation $\pi=\pi_1 \ ... 
\ \pi_n\in S_n$, for $i\leq j$, the segment $[\pi_i,\pi_{i+1},...,\pi_j]=[\pi_j,...,\pi_{i+1},\pi_i]$ is denoted by $s_{ij}=s_{ij}^\pi$ and is called a segment of $\pi$. We denote by $|s|$ the length of a segment $s$. For a segment $s:=[n_0,...,n_k]$, $n_0$ and $n_k$ are called end points, and $n_1,...,n_{k-1}$ are called intrinsic points of the segment. Any point (number) which is neither an end point nor an intrinsic point of $s$ is called an isolated point with respect to $s$. We denote by $End(s)$, $Int(s)$, and $Iso(s)$, the set of end points, intrinsic points, and isolated points of $s$, respectively. Note that a segment is originally defined as a set of adjacencies and therefore all set operations can be applied to it. Two segments $s=[n_0,...,n_k]$ and $s'=[m_0,...,m_{k'}]$ are said to be strongly disjoint if $\{n_0,...,n_k\}\cap \{m_0,...,m_{k'}\}=\emptyset$. They are disjoint if $s\cap s'=\emptyset$, otherwise we say that they intersect. Also, by a set of segments (segment set) of $S_n$, we mean the union of some pairwise strongly disjoint segments of $S_n$. In other words, a set of segments or a segment set $I$ is a subset of $\mathcal{A}_\pi$ for a permutation class $\pi$. In this case, we say $I$ is a segment set of $\pi$ or $\pi$ contains $I$. It is clear that a segment set can be contained in more than one permutation, or in other words, it can be contained in the intersection of adjacencies of several permutations. By a segment (or component) of a segment set $I$ we mean a maximal segment contained in $I$, and to show that a segment $s$ is a segment of $I$, we write $s\hat{\in} I$. Although a segment set $I$ containing segments $s_1,...,s_k$ is in principle the union of adjacencies of the $s_i$'s, that is $I=\cup_{i=1}^k s_i$, to ease the notation, we sometimes denote it by $\{s_1,...,s_k\}$. Also, we denote by $\|I\|:=k$ the number of segments of $I$.
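Representing a segment directly as its set of adjacencies makes these definitions executable. The sketch below (helper names are ours) recovers the components $s\hat{\in} I$ of a segment set together with the quantities $|I|$ and $\|I\|$:

```python
def segment(pts):
    # The segment [n_0, ..., n_k] as its set of adjacencies.
    return {frozenset(a) for a in zip(pts, pts[1:])}

def components(I):
    """Split a segment set I (a set of adjacencies) into its maximal
    segments, by grouping adjacencies that share an element."""
    comps = []
    for adj in I:
        touching = [c for c in comps if any(adj & a for a in c)]
        new = {adj}
        for c in touching:
            new |= c
            comps.remove(c)
        comps.append(new)
    return comps

# I = {[3,7,10,2,5], [8,1]}: |I| = 5 adjacencies, ||I|| = 2 segments.
I = segment([3, 7, 10, 2, 5]) | segment([8, 1])
```

The component of length four is $[3,7,10,2,5]$ and the component of length one is $[8,1]$.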
Note that the notation $|\ .\ |$ is used for both the cardinality of a set and the absolute value of a real number. For example, as we already indicated for a segment $s$, $|s|$ is the number of adjacencies of $s$, and, by the original definition of a segment set as a union of segments, \[ |I|=\sum\limits_{i=1}^{\|I\|} |s_i| \] is the number of adjacencies of $I$. Also, we frequently use $|\ .\ |$ for sets such as $End(s)$, $Int(s)$, and $Iso(s)$, to indicate their cardinality. Denote by $\mathcal{I}_{m,k}^{(n)}$ the set of all segment sets of $S_n$ with $m$ adjacencies and $k$ segments, i.e. $\mathcal{I}_{m,k}^{(n)}$ is the set of all segment sets $I$ with $|I|=m$ and $\|I\|=k$. Similarly, let $\mathcal{I}_m^{(n)}$ be the set of all segment sets of $S_n$ with $m$ adjacencies. Finally, denote by $\mathcal{I}^{(n)}$ the set of all segment sets of $S_n$. Note that $\mathcal{I}_{m,k}^{(n)}$ may be empty for some $m,k,n$. For $\mathcal{I}_{m,k}^{(n)}$ to be non-empty, it is necessary that \[ k\leq m\leq n-k, \] where the last inequality holds since for a segment set $I\in \mathcal{I}_{m,k}^{(n)}$ and any permutation $\pi$ containing $I$, there must be at least $k-1$ adjacencies of $\pi$ that are not used in $I$ in order to separate the $k$ segments of $I$, and therefore $m$ is bounded by $(n-1)-(k-1)=n-k$. It is clear that the intersection of segments (and, in general, the intersection of segment sets) is always a segment set. Two segment sets $I$ and $J$ (in particular, two segments $s$ and $s'$, respectively) are said to be consistent if their union is contained in $\mathcal{A}_\pi$ for a permutation class $\pi$. In particular, any two segment sets of a permutation $\pi$ are consistent. For example, for $n=10$, the two segments $[3,7,10,2,5]$ and $[2,5,8,1]$ are consistent and their union is the segment $[3,7,10,2,5,8,1]$, while the two segments $[2,6,3,8,1]$ and $[8,1,4,7,6,3,5]$ are not consistent.
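Consistency can be decided mechanically: since $\mathcal{A}_\pi$ is a Hamiltonian path on $[n]$ and every disjoint union of paths on $[n]$ extends to such a path, two segment sets are consistent exactly when their union, viewed as a graph, has maximum degree $2$ and no cycle. A sketch of this test (our restatement; helper names are ours), run on the paper's two $n=10$ examples:

```python
from collections import Counter

def segment(pts):
    # The segment [n_0, ..., n_k] as its set of adjacencies.
    return {frozenset(a) for a in zip(pts, pts[1:])}

def consistent(I, J):
    """I and J are consistent iff I | J is a disjoint union of simple paths."""
    union = I | J
    deg = Counter(v for adj in union for v in adj)
    if any(d > 2 for d in deg.values()):
        return False            # some point would need three neighbours
    # Group adjacencies into connected components.
    comps = []
    for adj in union:
        touching = [c for c in comps if any(adj & a for a in c)]
        new = {adj}
        for c in touching:
            new |= c
            comps.remove(c)
        comps.append(new)
    # A path component has exactly one more vertex than edges (no cycle).
    return all(len({v for a in c for v in a}) == len(c) + 1 for c in comps)
```

In the first example the union is the single path $[3,7,10,2,5,8,1]$; in the second, the number $6$ would need three neighbours, so the test fails.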
When we speak of the union of two or more segment sets (respectively, two or more segments) we always assume that they are pairwise consistent. We say segment sets $I_1,...,I_k$ complete each other if there exists a permutation $\pi$ such that $\cup_{i=1}^k I_i=\mathcal{A}_\pi$. The complement of a segment set $I$ contained in a permutation $\pi$ is $\bar{I}_\pi:=\mathcal{A}_\pi\setminus I$. In other words, for a segment set $I=\{s_{i_1j_1}^\pi,s_{i_2j_2}^\pi, ...,s_{i_kj_k}^\pi\}$ contained in $\pi$, $\overline{I}_{\pi}=\{s_{j_0i_1}^\pi,s_{j_1i_2}^\pi, ...,s_{j_ki_{k+1}}^\pi\}$, with $j_0=1,i_{k+1}=n$. For $r=1,...,k+1$, we denote by $\overline{I}_{\pi}^{(r)}$ the $r$-th segment of $\overline{I}_{\pi}$ on $\pi$ from the left, that is $\overline{I}_{\pi}^{(r)}=s_{j_{r-1}i_r}^\pi$. When we write $\bar{I}_\pi$, we assume that $I$ is contained in $\pi$. We can extend the notions of \emph{end point}, \emph{intrinsic point}, and \emph{isolated point} to the case of segment sets as follows. A number $u\in \{1,...,n\}$ is an end point (respectively, an intrinsic point) of a non-empty segment set $I=\{s_1,...,s_l\}$ if it is an end point (respectively, an intrinsic point) of exactly one of the segments of $I$. It is an isolated point of $I$ if it is neither an end point nor an intrinsic point of $I$, or equivalently if it is an isolated point of all segments of $I$. In other words, using the same notation $End(I)$, $Int(I)$ and $Iso(I)$ for these three types of points of the segment set $I$, we have \[ \begin{array}{c} End(I)=\bigcup\limits_{s\hat{\in} I} End(s),\\ Int(I)=\bigcup\limits_{s\hat{\in} I} Int(s),\\ Iso(I)=\bigcap\limits_{s\hat{\in} I} Iso(s). \end{array} \] When $I$ is the empty segment set, we define $End(I)=Int(I)=\emptyset$ and $Iso(I)=[n]$. For example, when $n=10$, $I=\{[2,3,9,4],[5,6]\}$ is a segment set having $3,9$ as its intrinsic points, $2,4,5,6$ as its end points, and $1,7,8,10$ as its isolated points.
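In a segment set every number has degree $0$, $1$ or $2$ in the adjacency graph, which classifies it as isolated, end, or intrinsic, respectively. A Python check of the $n=10$ example above (helper names are ours):

```python
from collections import Counter

def segment(pts):
    # The segment [n_0, ..., n_k] as its set of adjacencies.
    return {frozenset(a) for a in zip(pts, pts[1:])}

def end_int_iso(I, n):
    """End, intrinsic and isolated points of a segment set I on [n]:
    degree 1, degree 2 and degree 0 in I, respectively."""
    deg = Counter(v for adj in I for v in adj)
    end = {v for v, d in deg.items() if d == 1}
    intrinsic = {v for v, d in deg.items() if d == 2}
    isolated = set(range(1, n + 1)) - end - intrinsic
    return end, intrinsic, isolated

# The example I = {[2,3,9,4],[5,6]} with n = 10.
I = segment([2, 3, 9, 4]) | segment([5, 6])
end, intrinsic, isolated = end_int_iso(I, 10)
```

This reproduces the classification stated in the example.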
We say two segments $s,s'\hat{\in} I$ are neighbours with respect to $\pi$ if there exist $i<j$ such that $\pi_i,\pi_j \in End(s)\cup End(s')$ and for any $k$ with $i<k<j$ (if there is any), $\pi_k\in Iso(I)$. We say a segment $s$ connects two disjoint segments $s_1$ and $s_2$ if $s_1\cup s\cup s_2$ is a segment. Given a segment set $I$ of $id$, our goal is to count the number of all permutations $x\in S_n$ for which there exists a permutation $\pi \in \overline{[id,x]}\setminus \{id,x\}$ (in particular, this set is non-empty) containing $I$ such that $\mathcal{A}_\pi\setminus I$ is a segment set in $x$. In order to find all permutations $x$ with this property, it is convenient to classify the adjacencies of any permutation $x\in S_n$ with respect to $I$. This classification should show all possible ways that every adjacency of $x$ may be used to construct such a permutation $\pi$. We say the adjacency $\{x_i,x_{i+1}\}$ is \emph{2-free-end}, with respect to $I$, if $x_i$ and $x_{i+1}$ are both isolated points of $I$. It is called \emph{1-free-end}, w.r.t. $I$, if either $x_i$ or $x_{i+1}$ is an isolated point of $I$, and the other is an end point of $I$. It is a \emph{trivial segment}, w.r.t. $I$, if $x_i$ and $x_{i+1}$ are both end points of $I$. Finally, $\{x_i,x_{i+1}\}$ is \emph{0-free-end}, w.r.t. $I$, if either $x_i$ or $x_{i+1}$ is an intrinsic point of $I$. In order to construct a permutation $\pi$ containing $I$ such that $\pi\in\overline{[id,x]}$, $\bar{I}_\pi$ should be contained in $x$. This observation is essential for counting the number of permutations $x$ having a permutation $\pi\in \overline{[id,x]}$ far from $id$ and $x$. \section{Analysis of the adjacency types} Let $I$ be a segment set of $S_n$. We define $X_n(I)$ to be the set of all permutations $x$ containing a segment set $J$ such that $I\cap J=\emptyset$ and $I\cup J=\mathcal{A}_\pi$ for a permutation $\pi$. Equivalently, letting \[ \mathbf{C}_n(I)=\{J\in \mathcal{I}^{(n)}: \exists \pi\in S_n \ \text{s.t.}
\ I\cap J=\emptyset, \ I\cup J=\mathcal{A}_\pi\}, \] and \[ \mathcal{R}_n(J)=\{\pi\in S_n: \ J\subset \mathcal{A}_\pi\}, \quad \text{for } J\in \mathcal{I}^{(n)}, \] we have \[ X_n(I)=\bigcup\limits_{J\in \mathbf{C}_n(I)} \mathcal{R}_n(J). \] Note that when $I$ is a segment set of $id$, this definition does not guarantee that $\pi \in \overline{[id,x]}$; in order to have this property, $\pi$ must, in addition, include all adjacencies of $\mathcal{A}_{id,x}$. This motivates us to denote by $\bar{X}_n(I)$ the set of all permutations $x$ for which there exists a permutation $\pi\in \overline{[id,x]}$ whose set of adjacencies can be decomposed into disjoint segment sets $I$ and $J$, i.e. $I\cup J=\mathcal{A}_\pi$, such that $J$ is contained in $x$. In fact, $J$ serves as $\bar{I}_\pi$, the complement of $I$ w.r.t. $\pi$, and by definition $\bar{X}_n(I)\subset X_n(I)$. Therefore, counting the number of elements in $X_n(I)$ gives an upper bound for $|\bar{X}_n(I)|$. To have such a permutation $\pi$, $x$ should contain a segment set $J$ as described above. Our strategy to count the number of permutations in $X_n(I)$ is, firstly, to find every possible segment set $J$ that can be the complement of $I$ w.r.t. a permutation $\pi$ such that $\mathcal{A}_\pi=I\cup J$, and secondly, for each such segment set $J$, to count the number of all possible permutations $x$ containing $J$. Note that for two different segment sets $J$ and $J'$ with the above property, the set of permutations containing $J$ and the set of permutations containing $J'$ do not intersect, and thus considering all possibilities for the segment set $J$ gives us a partition of $X_n(I)$. An easy observation is that $|\|I\|-\|J\||\leq 1$, and hence there are three possibilities for the number of segments in $J$. On the other hand, all isolated points of $I$, except at most two of them, are intrinsic points of $J$ and vice versa. This makes it clear how to construct $J$.
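The classification of the adjacencies of $x$ w.r.t. $I$ introduced above is easy to compute: the type of $\{x_i,x_{i+1}\}$ is determined by the degrees of $x_i$ and $x_{i+1}$ in $I$. A sketch (helper names are ours; $I$ is given as a set of adjacencies), illustrated with $x=id$ and the segment set $\{[2,3,9,4],[5,6]\}$ for $n=10$:

```python
from collections import Counter

def segment(pts):
    # The segment [n_0, ..., n_k] as its set of adjacencies.
    return {frozenset(a) for a in zip(pts, pts[1:])}

def classify(x, I):
    """Counts (alpha, beta, gamma, delta) of 2-free-end, 1-free-end,
    trivial-segment and 0-free-end adjacencies of x w.r.t. I."""
    deg = Counter(v for adj in I for v in adj)  # 0: isolated, 1: end, 2: intrinsic
    alpha = beta = gamma = delta = 0
    for u, v in zip(x, x[1:]):
        if deg[u] == 2 or deg[v] == 2:
            delta += 1               # 0-free-end: touches an intrinsic point
        elif deg[u] == 0 and deg[v] == 0:
            alpha += 1               # 2-free-end: both extremities isolated
        elif deg[u] == 1 and deg[v] == 1:
            gamma += 1               # trivial segment: both are end points
        else:
            beta += 1                # 1-free-end: one isolated, one end point
    return alpha, beta, gamma, delta

I = segment([2, 3, 9, 4]) | segment([5, 6])
x = tuple(range(1, 11))              # the identity, for illustration
```

The four counts always sum to $n-1$, the total number of adjacencies of $x$.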
Basically, depending on the value of $\|I\|-\|J\|$, we take at most two isolated points of $I$ and consider them as end points of $J$. The other end points of $J$ are chosen from the set of end points of $I$, in an appropriate way. Also, the rest of the isolated points of $I$ will be used as intrinsic points of $J$. Once $J$ is determined, it is easy to see that the number of ways one can complete $J$ in order to construct $x$ depends on $\|I\|-\|J\|$ and not on $J$ itself. This makes it easy to compute the cardinality of $X_n(I)$, as indicated in Section \ref{section-main-1}. When it is clear from the context, we drop ``$n$'' from the subscript of $X$. Motivated by the above explanations, to construct a permutation $\pi$ in $\overline{[id,x]}$ containing $I$ such that $\bar{I}_\pi$ is contained in $x$, we cannot use any 0-free-end adjacency of $x$ w.r.t. $I$: such an adjacency touches an intrinsic point of $I$, both of whose adjacencies are already used in $I$. Therefore, to construct $\pi$ with this property, we must take 2-free-end adjacencies of $x$ w.r.t. $I$ to build the segment set $J$ contained in $x$ as mentioned above. The other two types of adjacencies, i.e., 1-free-end and trivial segment adjacencies, can be used only as extremities of segments of $J$. More precisely, a 1-free-end adjacency of $x$ w.r.t. $I$ may be used at the extremities of segments of any size in $J$, while a trivial segment adjacency of $x$ w.r.t. $I$ may be used only as a segment of length $1$ in $J$. In this section, we compute the expected number (Theorem~\ref{expectation}) and variance (Theorem~\ref{var}) of all four types of adjacencies of a random permutation w.r.t. a random segment set contained in $id$, and establish a convergence (in probability) theorem for them. Following this, we study the possibility of constructing a permutation $\pi$ in $\overline{[id,x]}$ containing a segment set $I$ of the identity such that $\overline{I}_\pi$ is contained in $x$.
The following proposition will be used to prove some of our main results. \begin{proposition}\label{segmentset} Given a permutation $x\in S_n$, there exist \[\left(\!\!\!\!\begin{array}{c} m-1 \\ k-1 \end{array}\!\!\!\!\right)\cdot \left(\!\!\!\!\begin{array}{c} n-m\\ k \end{array}\!\!\!\!\right)\] segment sets of $x$ with $k>0$ non-empty segments and $m\leq n-1$ adjacencies. \end{proposition} \begin{proof} Consider a segment set $I=\{s_1,...,s_k\}$ with $k$ non-empty segments and $m$ adjacencies that is contained in $x\in S_n$. Then $|\|\overline{I}_{x}\|-k|\leq 1$, and therefore we represent the segments of $\overline{I}_{x}$ by $s'_1,...,s'_{k+1}$, where $s'_j$ is non-empty for $2\leq j\leq k$, and $s'_1$ and $s'_{k+1}$ may be empty. Note that $\sum\limits_{i=1}^{k} |s_i|=m$ and $\sum\limits_{j=1}^{k+1} |s'_j|=n-1-m$ with $|s_i|\geq 1$ for $1\leq i\leq k$ and $|s'_j|\geq 1$ for $2\leq j\leq k$. Hence, the number of solutions of these two equations is equal to \[\left(\!\begin{array}{c} m-k+(k-1) \\ k-1 \end{array}\!\right)\cdot \left(\!\!\!\!\begin{array}{c} n-1-m-(k-1)+(k+1-1)\\ (k+1-1) \end{array}\!\!\!\!\right)=\left(\!\!\!\!\begin{array}{c} m-1 \\ k-1 \end{array}\!\!\!\!\right)\cdot \left(\!\!\!\!\begin{array}{c} n-m\\ k \end{array}\!\!\!\!\right).\] In other words, this is the number of ways we can choose $k$ segments with $m$ adjacencies of $x$. \end{proof} We assume that all random elements and variables are defined on a probability space $(\Omega, \mathcal{F}, \mathbbm{P})$, and denote by $\mathbbm{E}[\ .\ ]$ and $Var(\ .\ )$ the expected value and variance of a random variable, respectively. We denote by $\xi^{(n)}$ a permutation chosen uniformly at random from $S_n$, and by $I_m^{(n)}$ a segment set chosen uniformly at random from $\mathcal{I}_m^{(n)}$.
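Proposition~\ref{segmentset} can be confirmed by brute force for small parameters. In the sketch below (our own illustration), the adjacencies of a fixed $x$ are identified with positions $0,\dots,n-2$, so a segment set of $x$ is simply a subset of positions, and its segments are the maximal runs of consecutive positions:

```python
from itertools import combinations
from math import comb

def count_segment_sets(n, m, k):
    """Brute-force count of segment sets of a fixed x in S_n
    with m adjacencies and k segments."""
    total = 0
    for chosen in combinations(range(n - 1), m):
        # Number of maximal runs of consecutive chosen positions.
        runs = sum(1 for j, e in enumerate(chosen)
                   if j == 0 or e != chosen[j - 1] + 1)
        if runs == k:
            total += 1
    return total
```

For instance, `count_segment_sets(8, 4, 2)` agrees with $\binom{3}{1}\binom{4}{2}=18$, and the brute-force count matches the proposition's formula over a full small range of $m$ and $k$.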
Similarly, let us denote by $I_{m,k}^{(n)}$ a segment set chosen uniformly at random from $\mathcal{I}_{m,k}^{(n)}$, and let $A_{m,k}^{(n)}$ be the event that $I_m^{(n)}$ has $k$ segments, that is $A_{m,k}^{(n)}:=\{I_m^{(n)}\in \mathcal{I}_{m,k}^{(n)}\}$. We also assume that $\xi^{(n)}$, $I_m^{(n)}$, and $I_{m,k}^{(n)}$ are independent. Let $\alpha ,\beta , \gamma , \delta$ be functions \[ \alpha ,\beta , \gamma , \delta: \bigcup\limits_{n\in \mathbbm{N}} (S_n\times \mathcal{I}^{(n)}) \rightarrow \mathbbm{N}_0, \] such that, for $x\in S_n$ and a segment set $I$ of $S_n$, $\alpha(x,I)$, $\beta(x,I)$, $\gamma(x,I)$ and $\delta(x,I)$ are the numbers of 2-free-end adjacencies, 1-free-end adjacencies, trivial segments, and 0-free-end adjacencies of $x$ w.r.t. $I$, respectively. In particular, let $\alpha_m^{(n)}:=\alpha(\xi^{(n)},I_m^{(n)})$, $\beta_m^{(n)}:=\beta(\xi^{(n)},I_m^{(n)})$, $\gamma_m^{(n)}:=\gamma(\xi^{(n)},I_m^{(n)})$ and $\delta_m^{(n)}:=\delta(\xi^{(n)},I_m^{(n)})$. Similarly, let $\alpha_{m,k}^{(n)}:=\alpha(\xi^{(n)},I_{m,k}^{(n)})$, $\beta_{m,k}^{(n)}:=\beta(\xi^{(n)},I_{m,k}^{(n)})$, $\gamma_{m,k}^{(n)}:=\gamma(\xi^{(n)},I_{m,k}^{(n)})$ and $\delta_{m,k}^{(n)}:=\delta(\xi^{(n)},I_{m,k}^{(n)})$. When there is no risk of confusion, we drop ``$n$'' from the superscripts. \begin{theorem}\label{expectation} Let $m=m(n)$ and $k=k(n)$ be such that $0< k\leq m<n$, and let $I$ be an arbitrary segment set in $\mathcal{I}_{m,k}$.
Then \begin{equation}\label{conditional expected number} \begin{array}{l} {\displaystyle \mathbbm{E}[\alpha_m |A_{m,k}]=\mathbbm{E}[\alpha_{m,k}]=\mathbbm{E}[\alpha(\xi,I)]=\frac{(n-m-k)(n-m-k-1)}{n},}\\\\ {\displaystyle \mathbbm{E}[\beta_m |A_{m,k}]=\mathbbm{E}[\beta_{m,k}]=\mathbbm{E}[\beta(\xi,I)]=\frac{4k(n-m-k)}{n},}\\\\ {\displaystyle \mathbbm{E}[\gamma_m |A_{m,k}]=\mathbbm{E}[\gamma_{m,k}]=\mathbbm{E}[\gamma(\xi,I)]=\frac{2k(2k-1)}{n}},\\\\ {\displaystyle \mathbbm{E}[\delta_m |A_{m,k}]=\mathbbm{E}[\delta_{m,k}]=\mathbbm{E}[\delta(\xi,I)]=\frac{(m-k)(2n-m+k-1)}{n}.} \end{array} \end{equation} Furthermore, \begin{equation}\label{expected number} \begin{array}{l} {\displaystyle \mathbbm{E}[\alpha_m]=\frac{(n- m) (n-m-1)^2 (n-m-2)}{n(n-1)(n-2)},}\\\\ {\displaystyle \mathbbm{E}[\beta_m]=\frac{4 m (n-m) (n-m-1)^2}{n(n-1)(n-2)},}\\\\ {\displaystyle \mathbbm{E}[\gamma_m]=\frac{2 m (n-m) (2 m (n-m) + n)}{n(n-1)(n-2)}},\\\\ {\displaystyle \mathbbm{E}[\delta_m]=\frac{m(m-1)(2 n^2- 6 n- m^2+ 3 m +2)}{n (n-1)(n-2)}.} \end{array} \end{equation} \end{theorem} \begin{proof} For $i=1,...,n-1$, let $\hat\alpha_{m,i}$, $\hat\beta_{m,i}$, $\hat\gamma_{m,i}$ and $\hat\delta_{m,i}$ be random variables such that $\hat\alpha_{m,i}=1$ if the $i$-th adjacency of $\xi$, i.e. $\{\xi_i,\xi_{i+1}\}$, is 2-free-end w.r.t. $I_m$ and $\hat\alpha_{m,i}=0$ otherwise; $\hat\beta_{m,i}=1$ if the $i$-th adjacency of $\xi$ is 1-free-end w.r.t. $I_m$ and $\hat\beta_{m,i}=0$ otherwise; $\hat\gamma_{m,i}=1$ if the $i$-th adjacency of $\xi$ is a trivial segment and $\hat\gamma_{m,i}=0$ otherwise; and $\hat\delta_{m,i}=1$ if the $i$-th adjacency of $\xi$ is 0-free-end and $\hat\delta_{m,i}=0$ otherwise. 
Then, for every $i=1,...,n-1$, we have: \begin{equation*} \begin{array}{l} {\displaystyle \mathbbm{P}(\hat\alpha_{m,i}=1|A_{m,k})=\mathbbm{P}(\hat \alpha_{m,i}=1|I_m=I)= \frac{(n-m-k)(n-m-k-1)}{n(n-1)},}\\\\ {\displaystyle \mathbbm{P}(\hat \beta_{m,i}=1|A_{m,k})=\mathbbm{P}(\hat \beta_{m,i}=1|I_m=I)=\frac{4k(n-m-k)}{n(n-1)},}\\\\ {\displaystyle \mathbbm{P}(\hat \gamma_{m,i}=1|A_{m,k})=\mathbbm{P}(\hat \gamma_{m,i}=1|I_m=I)=\frac{2k(2k-1)}{n(n-1)},}\\\\ {\displaystyle \mathbbm{P}(\hat \delta_{m,i}=1|A_{m,k})=\mathbbm{P}(\hat \delta_{m,i}=1|I_m=I)=\frac{(m-k)(2n-m+k-1)}{n(n-1)}.} \end{array} \end{equation*} Therefore, \begin{multline*} \mathbbm{E}[\alpha_m |A_{m,k}]=\mathbbm{E}[\alpha_{m,k}]=\mathbbm{E}[\alpha(\xi,I)]=\\ \sum\limits_{i=1}^{n-1}\mathbbm{P}(\hat\alpha_{m,i}=1 |A_{m,k})=\frac{(n-m-k)(n-m-k-1)}{n}. \end{multline*} The other conditional expected values of (\ref{conditional expected number}) are proved similarly. From Proposition~\ref{segmentset}, the probability that $A_{m,k}$ occurs is \[ \mathbbm{P}(A_{m,k})=\frac{\left(\begin{array}{c} m-1 \\ k-1 \end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)}.\] Therefore, by averaging over $k$, we have \begin{flalign*} \mathbbm{E}[\alpha_m]&=\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)}{n}\frac{\left(\begin{array}{c} m-1 \\ k-1 \end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)}&\\ &=\frac{(n- m) (1 + m - n)^2 (n - m-2)}{(n-2) (n-1) n},& \end{flalign*} \begin{flalign*} \mathbbm{E}[\beta_m]&=\sum\limits_{k=1}^{m}\frac{4k(n-m-k)}{n}\frac{\left(\begin{array}{c} m-1 \\ k-1 \end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)} =\frac{4 m (n-m) (1 + m - n)^2}{(n-2) (n-1) n},& \end{flalign*} \begin{flalign*} \mathbbm{E}[\gamma_m]&=\sum\limits_{k=1}^{m}\frac{2k(2k-1)}{n}\frac{\left(\begin{array}{c} m-1 \\ k-1 
\end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)}=\frac{2 m (n-m) (2 m (n-m) + n)}{(n-2) (n-1) n}, & \end{flalign*} and \begin{flalign*} \mathbbm{E}[\delta_m]&=\sum\limits_{k=1}^{m}\frac{(m-k)(2n-m+k-1)}{n}\frac{\left(\begin{array}{c} m-1 \\ k-1 \end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)}&\\ &=\frac{m(m-1)(2 n^2- 6 n- m^2+ 3 m +2)}{n (2 - 3 n + n^2)}.& \end{flalign*} \end{proof} \begin{theorem}\label{var} Let $m=m(n)$ and $k=k(n)$ be such that $0< k\leq m<n$, and let $I$ be an arbitrary segment set in $\mathcal{I}_{m,k}$. Then \begin{footnotesize} \begin{flalign*} Var(\alpha_{m,k})=Var(\alpha(\xi,I))&=\mathbbm{E}[\alpha_{m,k}](1-\mathbbm{E}[\alpha_{m,k}])+\frac{(n-m-k)(n-m-k-1)^2(n-m-k-2)}{n(n-1)}\\ &=(1-\frac{m+k}{n})^2(\frac{m+k}{n})^2n+o(n), \end{flalign*} \begin{flalign*} Var(\beta_{m,k})=&Var(\beta(\xi,I))=\mathbbm{E}[\beta_{m,k}](1-\mathbbm{E}[\beta_{m,k}])+\frac{4k(n-m-k)((n-m-k-1)(4k-1)+2k-1)}{n(n-1)}& \end{flalign*} \begin{flalign*} &=4\frac{k}{n}(1-\frac{m+k}{n})(\frac{k}{n}(3-\frac{4k}{n})+\frac{m}{n}(1-\frac{4k}{n}))n+o(n), \end{flalign*} \begin{flalign*} Var(\gamma_{m,k})=&Var(\gamma(\xi,I))=\mathbbm{E}[\gamma_{m,k}](1-\mathbbm{E}[\gamma_{m,k}])+\frac{2k(2k-1)^2(2k-2)}{n(n-1)}=4(1-\frac{2k}{n})^2(\frac{k}{n})^2n+o(n),& \end{flalign*} \begin{flalign*} Var(\delta_{m,k})=&Var(\delta(\xi,I))=\mathbbm{E}[\delta_{m,k}](1-\mathbbm{E}[\delta_{m,k}])& \end{flalign*} \begin{flalign*} &+\frac{(m-k)[(m-k-1)(2n-m+k-2)(2n-m+k-3)+2(n-2)(n-1)]}{2n(n-1)}\\ &=(\frac{m-k}{n})^2(1-\frac{m-k}{n})^2n+o(n). 
\end{flalign*} Furthermore, \begin{multline*} Var(\alpha_{m})=\mathbbm{E}[\alpha_{m}](1-\mathbbm{E}[\alpha_{m}])+\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)^2(n-m-k-2)}{n(n-1)}\mathbbm{P}(A_{m,k})=\\ (1-\frac{m}{n})^4(\frac{m}{n})^2 (8+\frac{m}{n}(-12+\frac{5m}{n}))n+o(n), \end{multline*} \begin{multline*} Var(\beta_{m})=\mathbbm{E}[\beta_{m}](1-\mathbbm{E}[\beta_{m}])+\sum\limits_{k=1}^{m}\frac{4k(n-m-k)((n-m-k-1)(4k-1)+2k-1)}{n(n-1)}\mathbbm{P}(A_{m,k})=\\ 4(1-\frac{m}{n})^3(\frac{m}{n})^2(8-\frac{m}{n}(31+\frac{4m}{n}(-11+\frac{5m}{n})))n+o(n), \end{multline*} \begin{multline*} Var(\gamma_{m})=\mathbbm{E}[\gamma_{m}](1-\mathbbm{E}[\gamma_{m}])+\sum\limits_{k=1}^{m}\frac{2k(2k-1)^2(2k-2)}{n(n-1)}\mathbbm{P}(A_{m,k})=\\ 4(1-\frac{m}{n})^2(\frac{m}{n})^2(1-4(1-\frac{m}{n})(\frac{m}{n})(1+5(1-\frac{m}{n})\frac{m}{n}))n+o(n), \end{multline*} \begin{multline*} Var(\delta_{m})=\mathbbm{E}[\delta_{m}](1-\mathbbm{E}[\delta_{m}])\\ +\sum\limits_{k=1}^{m}\frac{(m-k)[(m-k-1)(2n-m+k-2)(2n-m+k-3)+2(n-2)(n-1)]}{2n(n-1)}\mathbbm{P}(A_{m,k})=\\ (\frac{m}{n})^2(1-(\frac{m}{n})^2)^2(4+\frac{m}{n}(-8+\frac{5m}{n}))n+o(n), \end{multline*} \begin{flalign*} where &~~~~\mathbbm{P}(A_{m,k})=\frac{\left(\begin{array}{c} m-1 \\ k-1 \end{array}\right) \left(\begin{array}{c} n-m\\ k \end{array}\right)}{\left(\begin{array}{c} n-1 \\ m \end{array}\right)}.& \end{flalign*} \end{footnotesize} \end{theorem} \begin{proof} For $i=1,...,n-1$, recall the definition of $\hat\alpha_{m,i}$, $\hat\beta_{m,i}$, $\hat\gamma_{m,i}$ and $\hat\delta_{m,i}$ from the proof of Theorem \ref{expectation}, and similarly, let $\hat\alpha_{m,k,i}$, $\hat\beta_{m,k,i}$, $\hat\gamma_{m,k,i}$ and $\hat\delta_{m,k,i}$ be random variables such that $\hat\alpha_{m,k,i}=1$ if the $i$-th adjacency of $\xi$, i.e. $\{\xi_i,\xi_{i+1}\}$, is 2-free-end w.r.t. $I_{m,k}$ and $\hat\alpha_{m,k,i}=0$ otherwise; $\hat\beta_{m,k,i}=1$ if the $i$-th adjacency of $\xi$ is 1-free-end w.r.t. 
$I_{m,k}$ and $\hat\beta_{m,k,i}=0$ otherwise; $\hat\gamma_{m,k,i}=1$ if the $i$-th adjacency of $\xi$ is a trivial segment w.r.t. $I_{m,k}$ and $\hat\gamma_{m,k,i}=0$ otherwise; and $\hat\delta_{m,k,i}=1$ if the $i$-th adjacency of $\xi$ is 0-free-end w.r.t. $I_{m,k}$ and $\hat\delta_{m,k,i}=0$ otherwise. Then we have: \[ \begin{array}{l} \mathbbm{E}[\alpha_{m,k}^2]=\sum\limits_i\mathbbm{E}[\hat\alpha_{m,k,i}^2]+2\sum\limits_{i> j}\mathbbm{E}[\hat\alpha_{m,k,i} \hat\alpha_{m,k,j}]=\\ \sum\limits_i \mathbbm{P}(\hat\alpha_{m,k,i}^2=1)+2\sum\limits_{i> j}\mathbbm{P}(\hat\alpha_{m,k,i} \hat\alpha_{m,k,j}=1)=\\ \sum\limits_i \mathbbm{P}(\hat \alpha_{m,k,i}=1)+2\sum\limits_{i> j}\mathbbm{P}(\hat \alpha_{m,k,i} \hat\alpha_{m,k,j}=1)=\\ \mathbbm{E}[\alpha_{m,k}]+2\sum\limits_{i> j}\mathbbm{P}(\hat \alpha_{m,k,i} \hat\alpha_{m,k,j}=1). \end{array} \] Now, note that: \begin{small} \[ \begin{array}{l} {\displaystyle \sum\limits_{i>j+1}\mathbbm{P}(\hat \alpha_{m,k,i} \hat\alpha_{m,k,j}=1)=\sum\limits_{i>j+1}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)(n-m-k-3)}{n(n-1)(n-2)(n-3)}}\\ {\displaystyle =\frac{(n-m-k)(n-m-k-1)(n-m-k-2)(n-m-k-3)}{2n(n-1)}}, \end{array} \] \end{small} and \begin{small} \[ \begin{array}{l} {\displaystyle \sum\limits_{i=j+1}\mathbbm{P}(\hat \alpha_{m,k,i} \hat\alpha_{m,k,j}=1)=\sum\limits_{i=j+1}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)}{n(n-1)(n-2)}}\\ {\displaystyle =\frac{(n-m-k)(n-m-k-1)(n-m-k-2)}{n(n-1)}}. \end{array} \] \end{small} Hence, \begin{small} \[ \begin{array}{l} {\displaystyle Var(\alpha_{m,k})=\mathbbm{E}[\alpha_{m,k}^2]-(\mathbbm{E}[\alpha_{m,k}])^2}\\ {\displaystyle =\mathbbm{E}[\alpha_{m,k}](1-\mathbbm{E}[\alpha_{m,k}])+\frac{(n-m-k)(n-m-k-1)^2(n-m-k-2)}{n(n-1)}.} \end{array} \] \end{small} Exactly the same calculations give $Var(\alpha(\xi,I))$.
Similarly we can compute $Var(\beta_{m,k})=Var(\beta(\xi,I))$, $Var(\gamma_{m,k})=Var(\gamma(\xi,I))$ and $Var(\delta_{m,k})=Var(\delta(\xi,I))$.\\ \noindent Now, to compute $Var(\alpha_m)$, write \[ \begin{array}{l} \mathbbm{E}[\alpha_m^2]=\sum\limits_i\mathbbm{E}[\hat\alpha_{m,i}^2]+2\sum\limits_{i> j}\mathbbm{E}[\hat\alpha_{m,i} \hat\alpha_{m,j}]=\sum\limits_i \mathbbm{P}(\hat \alpha_{m,i}^2=1)+2\sum\limits_{i> j}\mathbbm{P}(\hat \alpha_{m,i} \hat \alpha_{m,j}=1)=\\ \sum\limits_i \mathbbm{P}(\hat \alpha_{m,i}=1)+2\sum\limits_{i> j}\mathbbm{P}(\hat \alpha_{m,i} \hat \alpha_{m,j}=1)=\mathbbm{E}[\alpha_m]+2\sum\limits_{i> j}\mathbbm{P}(\hat \alpha_{m,i} \hat \alpha_{m,j}=1). \end{array} \] Now, we note that: \begin{footnotesize} \begin{flalign*} \sum\limits_{i>j+1}\mathbbm{P}(\hat \alpha_{m,i}\cdot\hat \alpha_{m,j}=1)&=& \end{flalign*} \begin{flalign*} &\sum\limits_{i>j+1}\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)(n-m-k-3)}{n(n-1)(n-2)(n-3)}\mathbbm{P}(A_{m,k})& \end{flalign*} \begin{flalign*} &=\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)(n-m-k-3)}{2n(n-1)}\mathbbm{P}(A_{m,k}),& \end{flalign*} \end{footnotesize} \noindent and \begin{footnotesize} \begin{flalign*} \sum\limits_{i=j+1}\mathbbm{P}(\hat \alpha_{m,i} \hat \alpha_{m,j}=1)&=\sum\limits_{i=j+1}\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)}{n(n-1)(n-2)}\mathbbm{P}(A_{m,k})&\\ &=\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)(n-m-k-2)}{n(n-1)}\mathbbm{P}(A_{m,k}).& \end{flalign*} \end{footnotesize} \noindent Therefore, \begin{footnotesize} \begin{flalign*} ~~~~&Var(\alpha_m)=\mathbbm{E}[\alpha_m^2]-(\mathbbm{E}[\alpha_m])^2&\\ &=\mathbbm{E}[\alpha_m](1-\mathbbm{E}[\alpha_m])+\sum\limits_{k=1}^{m}\frac{(n-m-k)(n-m-k-1)^2(n-m-k-2)}{n(n-1)}\mathbbm{P}(A_{m,k})&\\ &=\mathbbm{E}[\alpha_m](1-\mathbbm{E}[\alpha_m])&\\ &+\frac{(n-m) (n-m-1)^2 (n-m-2)^2 (n-m-3) \left(n^2-5 n+4-2 m n+m (m+7)\right)}{n(n-1)^2(n-2)(n-3)(n-4)}&\\ &=(1-\frac{m}{n})^4(\frac{m}{n})^2
(8+\frac{m}{n}(-12+5\frac{m}{n}))n+o(n).& \end{flalign*} \end{footnotesize} \noindent Similarly we can show that \begin{footnotesize} \begin{flalign*} ~~~~&Var(\beta_m)=\mathbbm{E}[\beta_m^2]-(\mathbbm{E}[\beta_m])^2&\\ &=\mathbbm{E}[\beta_m](1-\mathbbm{E}[\beta_m])+\sum\limits_{k=1}^{m}\frac{4k(n-m-k)((n-m-k-1)(4k-1)+2k-1)}{n(n-1)}\mathbbm{P}(A_{m,k})&\\ &=\mathbbm{E}[\beta_m](1-\mathbbm{E}[\beta_m])+\left(\frac{4 m (m-n) (m-n+1)^2}{(n-4) (n-3) (n-2)(n-1)^2 n}\right)\times &\\ & \left((1-4 m) n^3+(4 m (3 m+5)-3) n^2-(m+1) (3 m (4 m+11)+1) n+4 (m+1)^2 (m (m+4)+1)\right) &\\ &=4(1-\frac{m}{n})^3(\frac{m}{n})^2(8-\frac{m}{n}(31+4\frac{m}{n}(-11+5\frac{m}{n})))n+o(n),& \end{flalign*} \end{footnotesize} \begin{footnotesize} \begin{flalign*} ~~~~&Var(\gamma_m)=\mathbbm{E}[\gamma_m^2]-(\mathbbm{E}[\gamma_m])^2&\\ &=\mathbbm{E}[\gamma_m](1-\mathbbm{E}[\gamma_m])+\sum\limits_{k=1}^{m}\frac{2k(2k-1)^2(2k-2)}{n(n-1)}\mathbbm{P}(A_{m,k})=&\\ &=\mathbbm{E}[\gamma_m](1-\mathbbm{E}[\gamma_m])&\\ &+\frac{4 (m-1) m (m-n) (m-n+1) \left(4 m^4-8 m^3 n+4 m^2 \left(n^2+n+3\right)-4 m n (n+3)+n (n+9)-4\right)}{(n-4) (n-3) (n-2)(n-1)^2 n}&\\ &=4(1-\frac{m}{n})^2(\frac{m}{n})^2(1-4(1-\frac{m}{n})(\frac{m}{n})(1+5(1-\frac{m}{n})\frac{m}{n}))n+o(n),& \end{flalign*} \end{footnotesize} and finally, \begin{footnotesize} \begin{flalign*} &Var(\delta_m)=\mathbbm{E}[\delta_m^2]-(\mathbbm{E}[\delta_m])^2&\\ &=\mathbbm{E}[\delta_m](1-\mathbbm{E}[\delta_m])&\\ &+\sum\limits_{k=1}^{m}\frac{(m-k)[(m-k-1)(2n-m+k-2)(2n-m+k-3)+2(n-2)(n-1)]}{2n(n-1)}\mathbbm{P}(A_{m,k})&\\ &=\mathbbm{E}[\delta_m](1-\mathbbm{E}[\delta_m])&\\ &+\frac{(m-1)m}{(n-4) (n-3) (n-2) (n-1)^2 n} \times \left\lbrace (m-5) m \left(m \left(m^3-10 m^2+m+40\right)+4\right)+4 (m-4) (m+1) n^4 \right. &\\ &\left. +2 (9-23 (m-3) m) n^3+2 (m (m (51-2 (m-8) m)-235)+50) n^2\right. &\\ &\left. 
+2 m (m (13 (m-8) m+121)+170) n+2 n^5-152 n+48\right\rbrace &\\ &=(\frac{m}{n})^2(1-(\frac{m}{n})^2)^2(4+\frac{m}{n}(-8+\frac{5m}{n}))n+o(n).& \end{flalign*} \end{footnotesize} \end{proof} We are ready to state a convergence theorem for all different types of adjacencies of $\xi^{(n)}$ w.r.t. $I_{m(n),k(n)}^{(n)}$ or $I_{m(n)}^{(n)}$. Let $m:\mathbbm{N}\to \mathbbm{N}$ and $k:\mathbbm{N}\to \mathbbm{N}$ be such that $1\leq k(n)\leq m(n)\leq n-k(n)$, for any $n\in \mathbbm{N}$. Also, let $(\hat{I}_n)_{n\in \mathbbm{N}}$ be an arbitrary sequence of segment sets that $\hat{I}_n\in \mathcal{I}_{m(n),k(n)}^{(n)}$. Denote \[ \begin{array}{l} \tilde{\alpha}_n:=\alpha(\xi^{(n)},I_{m(n)}^{(n)}), \ and\\ \bar{\alpha}_n:=\alpha(\xi^{(n)},I_{m(n),k(n)}^{(n)}).\\ \end{array} \] Similarly, for $n\in \mathbbm{N}$, we define $\tilde \beta_n, \tilde \gamma_n, \tilde \delta_n$, and $\bar \beta_n, \bar \gamma_n, \bar \delta_n$. \begin{theorem}\label{convergence in probability} Suppose $\frac{m(n)}{n}\to c$ and $\frac{k(n)}{n}\to c'$, as $n \to \infty$. 
Then, as $n\to \infty$ \begin{footnotesize} \[ \begin{array}{l} {\displaystyle \frac{\tilde{\alpha}_n}{n}\overset{L^2,p}{\longrightarrow} (1-c)^4}, \\\\ {\displaystyle\frac{\tilde{\beta}_n}{n}\overset{L^2,p}{\longrightarrow} 4c(1-c)^3},\\\\ {\displaystyle \frac{\tilde{\gamma}_n}{n}\overset{L^2,p}{\longrightarrow} 4c^2(1-c)^2},\\\\ {\displaystyle \frac{\tilde{\delta}_n}{n}\overset{L^2,p}{\longrightarrow} c^2(2-c)^2},\\\\ {\displaystyle \frac{\bar{\alpha}_n}{n} \ , \ \frac{\alpha (\xi^{(n)},\hat{I}_n)}{n}\overset{L^2,p}{\longrightarrow} (1-c-c')^2},\\\\ {\displaystyle \frac{\bar{\beta}_n}{n} \ , \ \frac{\beta (\xi^{(n)},\hat{I}_n)}{n}\overset{L^2,p}{\longrightarrow} 4c'(1-c-c')},\\\\ {\displaystyle \frac{\bar{\gamma}_n}{n} \ , \ \frac{\gamma (\xi^{(n)},\hat{I}_n)}{n}\overset{L^2,p}{\longrightarrow} 4c'^2},\\\\ {\displaystyle \frac{\bar{\delta}_n}{n} \ , \ \frac{\delta (\xi^{(n)},\hat{I}_n)}{n}\overset{L^2,p}{\longrightarrow} (c-c')(2-c+c')}.\\\\ \end{array} \] \end{footnotesize} \end{theorem} \begin{proof} First observe that, by Theorem \ref{expectation}, as $n\to \infty$, \[ \begin{array}{l} {\displaystyle \mathbbm{E}[\frac{\tilde{\alpha}_n}{n}]\to (1-c)^4}, \\\\ {\displaystyle \mathbbm{E}[\frac{\tilde{\beta}_n}{n}]\to 4c(1-c)^3},\\\\ {\displaystyle \mathbbm{E}[\frac{\tilde{\gamma}_n}{n}]\to 4c^2(1-c)^2},\\\\ {\displaystyle \mathbbm{E}[\frac{\tilde{\delta}_n}{n}]\to c^2(2-c)^2},\\\\ {\displaystyle \mathbbm{E}[\frac{\bar{\alpha}_n}{n}] \ , \ \mathbbm{E}[\frac{\alpha (\xi^{(n)},\hat{I}_n)}{n}]\to (1-c-c')^2},\\\\ {\displaystyle \mathbbm{E}[\frac{\bar{\beta}_n}{n}] \ , \ \mathbbm{E}[\frac{\beta (\xi^{(n)},\hat{I}_n)}{n}]\to 4c'(1-c-c')},\\\\ {\displaystyle \mathbbm{E}[\frac{\bar{\gamma}_n}{n}] \ , \ \mathbbm{E}[\frac{\gamma (\xi^{(n)},\hat{I}_n)}{n}]\to 4c'^2},\\\\ {\displaystyle \mathbbm{E}[\frac{\bar{\delta}_n}{n}] \ , \ \mathbbm{E}[\frac{\delta (\xi^{(n)},\hat{I}_n)}{n}]\to (c-c')(2-c+c')}.\\\\ \end{array} \] Also, following Theorem \ref{var}, the variances of all 
these sequences converge to $0$. Hence, the convergence in $L^2$ and in probability holds. \end{proof} Let $I$ be a segment set of $id^{(n)}$. In order to construct a permutation $x\in X_n(I)$, we need to find a segment set of $S_n$, namely $J$, such that $I\cap J=\emptyset$ and $I\cup J=\mathcal{A}_\pi$, for a permutation $\pi$. Then, $x$ is constructed by completing the segment set $J$. Conversely, when a permutation $x\in X_n(I)$ is given, an easy observation shows that there exists at least one permutation $\pi$ containing $I$ such that $J=\mathcal{A}_\pi\setminus I \subset \mathcal{A}_x$ and all 2-free-end adjacencies of $x$ are used in $\pi$ (Lemma \ref{lemma-J-interior}). For the moment, let us denote by $J\strut^\mathrm{o}$, the segment set of $x$ containing all 2-free-end adjacencies of $x$ w.r.t. $I$, and note that we must have $J\strut^\mathrm{o}\subset J$. So in order to find the permutation $\pi$ with the above property, we first take the segment set $I\cup J\strut^\mathrm{o}$. In fact, $\mathcal{A}_\pi \setminus (I\cup J\strut^\mathrm{o})$ should still be a segment set of $x$, and $n-1-|I|-|J\strut^\mathrm{o}|$ more adjacencies of $x$ (1-free-end adjacencies and trivial segments) should be taken in order to complete $I\cup J\strut^\mathrm{o}$. To analyse this further, we define this more formally as follows. Let $F$ be a function \[ F:\bigcup\limits_{n\in \mathbbm{N}}(S_n\times \mathcal{I}^{(n)})\rightarrow \mathcal{I}^{(n)}, \] where for any permutation $x\in S_n$ and any segment set $I\in \mathcal{I}^{(n)}$, $F(x,I)$ is the segment set of $x$ containing all 2-free-end adjacencies of $x$ w.r.t. $I$, that is \[ F(x,I):=\{\{l,l'\}\in \mathcal{A}_x : \ \{l,l'\} \ is \ 2-free-end\}. 
\] Let \[ Q:\bigcup\limits_{n\in \mathbbm{N}}(S_n\times \mathcal{I}^{(n)})\rightarrow \mathbbm{N}_0, \] where for $(x,I)\in S_n\times \mathcal{I}^{(n)}$, $Q(x,I)$ is the number of adjacencies needed in order to complete $I\cup F(x,I)$ to a permutation $\pi$, that is \[ Q(x,I)=n-1-|I|-|F(x,I)|. \] The following theorem restricts the range of $Q(x,I)$, for $x\in X_n(I)$. \begin{theorem}\label{interval-theorem} Let $I\in \mathcal{I}^{(n)}$, and $x\in X_n(I)$. Then \[ \|I\|-1\leq Q(x,I)\leq 2\|I\|. \] \end{theorem} Before proving the above theorem, we introduce a new concept. Let $I$ be a segment set of $S_n$. The freedom factor of a point (number) $k\in [n]$ is $0$ if $k\in Int(I)$; it is $1$ if $k\in End(I)$; finally, it is $2$ if $k\in Iso(I)$. Similarly, the freedom vector of a segment $s=[v_1,...,v_l]=[v_l,...,v_1]$ is $u=<u_1,...,u_l>=<u_l,...,u_1>$, where for each $i\in [l]$, $u_i$ is the freedom factor of $v_i$. A segment $s$ with freedom vector $u$ is called a $u$-segment. Also, for $\pi\in S_n$ and $i\in [n]$, the set of neighbours of $i$ in $\pi$ is defined by \[ \mathcal{N}_\pi(i):=\{j\in [n]: \ \{i,j\}\in \mathcal{A}_\pi\}. \] For an arbitrary segment set of $S_n$, namely $I$, in order that $x\in X_n(I)$, we need to find a segment set $J$ contained in $x$ such that $I\cup J=\mathcal{A}_\pi$ and $I\cap J=\emptyset$, for some permutation $\pi$. As we mentioned, $J$ may not contain all adjacencies of $F(x,I)$. For instance, let $I=[4,5,6,7]$ and $x=6 \ 4 \ 1 \ 3 \ 8 \ 10 \ 2 \ 9 \ 7 \ 5$. Then $x\in X_{10}(I)$, and $J_1=\{[3,1,4],[7,9,2,10,8]\}$ has the required property even though it does not contain the adjacency $\{3,8\} \in F(x,I)$. However, even in this case, there are segment sets $J_2=[9,2,10,8,3,1,4]$ and $J_3=[1,3,8,10,2,9,7]$, both of which include all adjacencies of $F(x,I)$ and have the required property. In fact, the following lemma shows that at most one adjacency of $F(x,I)$ can be ignored in the construction of $\pi$ from $x$ and $I$.
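The claims in this example can be checked mechanically. The following Python sketch (our own verification aid, not part of the paper; all helper names are ours) computes $F(x,I)$ for $I=[4,5,6,7]$ and $x=6\,4\,1\,3\,8\,10\,2\,9\,7\,5$, and verifies that each of $J_1$, $J_2$ and $J_3$ is contained in $\mathcal{A}_x$, is disjoint from $I$, and completes $I$ to the adjacency set of a permutation, with $J_1$ omitting exactly the adjacency $\{3,8\}$ of $F(x,I)$:

```python
def adjacencies(perm):
    # unordered pairs of consecutive entries of a permutation
    return {frozenset(p) for p in zip(perm, perm[1:])}

def seg_adjs(segments):
    # adjacency set of a segment set, each segment given as a list of points
    return {frozenset(p) for s in segments for p in zip(s, s[1:])}

def is_permutation_adjset(adjs, n):
    # True iff adjs is the adjacency set of some permutation of [n],
    # i.e. n-1 edges forming a single Hamiltonian path on {1, ..., n}
    if len(adjs) != n - 1:
        return False
    nbrs = {v: set() for v in range(1, n + 1)}
    for e in adjs:
        a, b = tuple(e)
        nbrs[a].add(b)
        nbrs[b].add(a)
    ends = [v for v in nbrs if len(nbrs[v]) == 1]
    if len(ends) != 2 or any(len(s) > 2 for s in nbrs.values()):
        return False
    prev, cur, seen = None, ends[0], 1      # walk the path from one end
    while True:
        nxt = [w for w in nbrs[cur] if w != prev]
        if not nxt:
            return seen == n                # must have visited all n points
        prev, cur, seen = cur, nxt[0], seen + 1

n = 10
x = (6, 4, 1, 3, 8, 10, 2, 9, 7, 5)
I = seg_adjs([[4, 5, 6, 7]])
iso = set(range(1, n + 1)) - {4, 5, 6, 7}       # Iso(I)

# F(x, I): 2-free-end adjacencies of x, i.e. both points in Iso(I)
F = {e for e in adjacencies(x) if e <= iso}

J1 = seg_adjs([[3, 1, 4], [7, 9, 2, 10, 8]])
J2 = seg_adjs([[9, 2, 10, 8, 3, 1, 4]])
J3 = seg_adjs([[1, 3, 8, 10, 2, 9, 7]])

for J in (J1, J2, J3):
    assert J <= adjacencies(x) and not (I & J)
    assert is_permutation_adjset(I | J, n)      # I and J complete each other

assert F - J1 == {frozenset({3, 8})}            # J1 misses exactly {3, 8}
assert F <= J2 and F <= J3                      # J2, J3 contain all of F(x, I)
print("all checks passed")
```

For small $n$, the same helpers also make exhaustive enumeration of candidate segment sets $J$ straightforward, which is a convenient sanity check on the counting arguments below.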
\begin{lemma}\label{lemma-J-interior} Let $I$ be a segment set of $\mathcal{I}^{(n)}$, and $x\in X_n(I)$. \begin{itemize} \item[a)] Let $\pi\in S_n$ be such that $I\subset \mathcal{A}_\pi$, and $\mathcal{A}_\pi\setminus I \subset \mathcal{A}_x$. Then either $F(x,I)\subset \mathcal{A}_\pi \setminus I$, or there exists an adjacency of $F(x,I)$, namely $e \in F(x,I)$, such that $F(x,I)\setminus \{e\} \subset \mathcal{A}_\pi\setminus I$. \item[b)] There always exists a permutation $\pi\in S_n$ such that $I\subset \mathcal{A}_\pi$, and $F(x,I)\subset \mathcal{A}_\pi\setminus I\subset \mathcal{A}_x$. \end{itemize} \end{lemma} \begin{proof} Suppose $\{a,b\} \in F(x,I)\setminus \mathcal{A}_\pi$. As $a,b\in Iso(I)$, the neighbours of $a$ in $\pi$ must come from the set $\mathcal{N}_x(a)\setminus \{b\}$ and the neighbours of $b$ in $\pi$ from the set $\mathcal{N}_x(b)\setminus \{a\}$, so we have $|\mathcal{N}_\pi(a)|,|\mathcal{N}_\pi(b)| \leq 1$. But $|\mathcal{N}_\pi(a)|$ and $|\mathcal{N}_\pi(b)|$ cannot be $0$, since in that case $a$ or $b$ could not be connected to the rest of the numbers to construct $\pi$; therefore $|\mathcal{N}_\pi(a)|=|\mathcal{N}_\pi(b)|=1$, which means that $a$ and $b$ are extremities of the permutation $\pi$, i.e. $\{\pi_1,\pi_n\}=\{a,b\}$. In other words, there may exist at most one adjacency $\{a,b\}\in F(x,I)\setminus \mathcal{A}_\pi$. This proves part $(a)$. For part $(b)$, suppose $\pi'\in \overline{[id,x]}$ and there exists an adjacency $\{a,b\}\in F(x,I)\setminus \mathcal{A}_{\pi'}$. As we showed above, $\{\pi'_1,\pi'_n\}=\{a,b\}$. Also, $a$ and $b$ are connected in $\pi'$ through a segment of $\pi'$ containing at least one segment of $I$, which means that there exists at least one $<1,2>$-adjacency ($<1,2>$-segment), namely $e$, in the segment of $\pi'$ connecting $a$ to $b$; hence $e$ is not in $F(x,I)$. Therefore, we can construct a new permutation $\pi$ by cutting $e$ in $\pi'$ and joining $a$ to $b$.
This proves part $(b)$. \end{proof} \begin{proof}[Proof of Theorem \ref{interval-theorem}] The left inequality holds since, even when $F(x,I)$ is an empty segment set, we need at least $\|I\|-1$ $<1,1>$-segments (trivial segments) from $x$ to complete $\pi$. To prove the right inequality, let $\pi$ be a permutation such that $I\subset \mathcal{A}_\pi$ and $F(x,I)\subset \mathcal{A}_\pi\setminus I \subset \mathcal{A}_x$. From Lemma \ref{lemma-J-interior}, we know that such $\pi$ exists. As the freedom of every number in any segment of $F(x,I)$ is $2$, for two segments of $F(x,I)$, say $s_1,s_2$, the segment of $\pi$ that is located between them in $\pi$, say $s$, must necessarily contain at least one segment of $I$. In fact, the freedom vector of $s$ cannot be $<2,2,...,2>$ (since in that case $s_1\cup s\cup s_2$ would be a segment of $F(x,I)$, which is not the case), so there must be at least one number in the segment $s$ with freedom $1$, and this implies that a segment of $I$ must be contained in $s$. This yields that two segments of $F(x,I)$ cannot be connected to each other in $\pi$ without using at least one segment of $I$ between them. On the other hand, let $s_1,s_2$ be two segments of $I$, and let $s$ be the segment of $\pi$ located between them in $\pi$. If $s$ does not contain a segment of $F(x,I)$ and does not contain a segment of $I$, then it must be either a $<1,1>$-segment (i.e. a trivial segment) or a $<1,2,1>$-segment. Lastly, let $s_1$ be a segment of $I$ and $s_2$ be a segment of $F(x,I)$, and let $s$ be a segment of $\pi$ that is located between $s_1$ and $s_2$ in $\pi$. If $s$ does not contain a segment of $I$, it must necessarily be a $<1,2>$-segment. Putting all these together, we conclude that between each pair of segments of $I$ in $\pi$, say $s_1,s_2$, we may need either a $<1,2,1>$-segment of $\mathcal{A}_x\setminus F(x,I)$ or at most one segment of $F(x,I)$.
In the latter case, for each end of this segment from $F(x,I)$, we need a $<1,2>$-segment of $\mathcal{A}_x\setminus F(x,I)$ to connect it to $s_1$ or $s_2$. On the other hand, on the right-hand side (left-hand side) of the rightmost (leftmost) segment of $I$ in $\pi$, we may place a $<1,2>$-segment ($<2,1>$-segment), possibly followed by a segment of $F(x,I)$ on its right (on its left). So in general, we need at most $2$ adjacencies of $\mathcal{A}_x\setminus F(x,I)$ between each pair $s_1,s_2\hat{\in} I$ which are neighbours with respect to $\pi$, and at the extremities we need at most one adjacency of $\mathcal{A}_x\setminus F(x,I)$. In other words, we need at most $2(\|I\|-1)+2=2\|I\|$ adjacencies of $\mathcal{A}_x\setminus F(x,I)$ in order to complete $\pi$. This finishes the proof. \end{proof} \begin{center} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.6, trim = 0.3cm 13.5cm 3.5cm 2.7cm,clip]{grafico.pdf} \end{center} \caption{The value of $\mathbbm{E}[\alpha_m]/n$ (in green), $\mathbbm{E}[\beta_m]/n$ (in dark blue), $\mathbbm{E}[\delta_m]/n$ (in light blue) and $\mathbbm{E}[\gamma_m]/n$ (in red), when we choose $\frac{n}{20}$, $\frac{2n}{20}$, $\frac{3n}{20}$, \ldots, $\frac{19n}{20}$ adjacencies of $id$.}\label{figexp} \end{figure} \end{center} Let $(\hat{I}_n)_{n\in \mathbbm{N}}$ be an arbitrary sequence of segment sets with $|\hat{I}_n|=m(n)$ and $\|\hat{I}_n\|=k(n)$ for $n\in \mathbbm{N}$. As we already saw, to have $x\in X_n(\hat{I}_n)$, it is necessary to have $k(n)\leq m(n)\leq n-k(n)$, and also, by Theorem \ref{interval-theorem}, \[ \|\hat{I}_n\|-1\leq Q(x,\hat{I}_n)\leq 2\|\hat{I}_n\|. \] Also, for $x\in X_n(\hat{I}_n)$, by definition we have \[ Q(x,\hat{I}_n)\leq \beta(x,\hat{I}_n)+\gamma(x,\hat{I}_n). \] Now suppose $m(n)/n\rightarrow c$ and $k(n)/n\rightarrow c'$, as $n\rightarrow \infty$, for $c,c'\in \mathbbm{R}_+$.
Then Theorem \ref{convergence in probability} implies that the right side of the above inequality converges to $4c'-4cc'$, in probability, as $n$ goes to $\infty$. Similarly, the left side of the last inequality converges to $1-c-(1-c-c')^2$, in probability, as $n\rightarrow \infty$. Now suppose $c,c'$ are such that \[ 1-c-(1-c-c')^2>4c'-4cc'. \] Fix $\varepsilon>0$ with $2\varepsilon<1-c-(1-c-c')^2-4c'+4cc'$. Then \[ \begin{array}{l} \mathbbm{P}[\xi^{(n)}\in X_n(\hat{I}_n)]\leq\\ \mathbbm{P}[Q(\xi^{(n)},\hat{I}_n)\leq \beta(\xi^{(n)},\hat{I}_n)+\gamma(\xi^{(n)},\hat{I}_n)]\leq \\ \mathbbm{P}[|\frac{Q(\xi^{(n)},\hat{I}_n)}{n}-(1-c-(1-c-c')^2)|>\varepsilon]+\\ \mathbbm{P}[|\frac{\beta(\xi^{(n)},\hat{I}_n)+\gamma(\xi^{(n)},\hat{I}_n)}{n}-(4c'-4cc')|>\varepsilon] \rightarrow 0, \end{array} \] as $n\rightarrow \infty$. So, to avoid this, we should assume $1-c-(1-c-c')^2 \leq 4c'-4cc'$. Similarly, we derive \[ \left\{ \begin{array}{l} 1-c-(1-c-c')^2\leq 4c'-4cc',\\ c'\leq 1-c-(1-c-c')^2\leq 2c',\\ 0< c'\leq c\leq 1-c'. \end{array} \right. \] \section{Finding non-trivial partial geodesics}\label{section-main-1} In this section we count the number of elements in $X_n(I)$ for a given segment set $I$. This gives an upper bound for the number of elements in $\bar{X}_n(I)$, by which we will be able to estimate the asymptotic behaviour of the probability of having a geodesic point of $id$ and $\xi^{(n)}$ far from both of them, as $n$ tends to $\infty$. In fact we can prove that this probability converges to $0$. This partly proves a conjecture stated by Haghighi and Sankoff in \cite{haghighi12}, for the case of two random permutations. Recall the definition of the set of intrinsic points, end points and isolated points of a given segment set $I$ from Section~\ref{prem}, and as before denote them by $Int(I)$, $End(I)$ and $Iso(I)$, respectively. \begin{lemma}\label{complementary} Let $x$ be a permutation in $S_n$, and let $I\in \mathcal{I}^{(n)}$ be a segment set.
There exists a permutation $\pi\in S_n$ containing $I$ such that $\mathcal{A}_{\pi}\setminus I \subset \mathcal{A}_x$ if and only if there exist $q,r\in [n]$ and a segment set $J$ contained in $x$ satisfying one of the following conditions: \begin{enumerate}[(i)] \item $\{q,r\}=End(I)\cap Iso(J)$, $\|J\|=\|I\|-1$, $Int(J)=Iso(I)$, $Iso(J)\setminus\{q,r\}=Int(I)$ and $End(J)=End(I)\setminus\{q,r\}$; \item $\{r\}=End(J)\cap Iso(I)$ and $\{q\}=End(I)\cap Iso(J)$, $\|J\|=\|I\|$, $Int(J)=Iso(I)\setminus\{r\}$, $Iso(J)\setminus\{q\}=Int(I)$ and $End(J)\setminus\{r\}=End(I)\setminus\{q\}$; or \item $\{q,r\}=End(J)\cap Iso(I)$, $\|J\|=\|I\|+1$, $Int(J)=Iso(I)\setminus\{q,r\}$, $Iso(J)=Int(I)$ and $End(J)\setminus\{q,r\}=End(I)$. \end{enumerate} \noindent In any of these three cases, $q$ will be $\pi_1$ and $r$ will be $\pi_n$, or the opposite. \end{lemma} \begin{proof} To prove necessity, let $\pi$ be a permutation in $S_n$ containing $I$ such that $\mathcal{A}_{\pi}\setminus I \subset \mathcal{A}_x$, and define $J:=\mathcal{A}_{\pi}\setminus I$. Then $I$ and $J$ are disjoint and they complete each other in an alternating way, that is, for any pair of neighbour segments $s_1,s_2\hat{\in}I$ with respect to $\pi$, there exists exactly one segment of $J$ that connects $s_1$ and $s_2$, and similarly, for any pair of neighbour segments $s_1',s_2'\hat{\in} J$ with respect to $\pi$, there exists exactly one segment of $I$ that connects $s_1'$ and $s_2'$. Therefore, we have $|\|I\|-\|J\||\leq 1$, and also, all intrinsic points of $I$ must be isolated points of $J$, $Int(I)\subseteq Iso(J)$, and likewise all intrinsic points of $J$ must be isolated points of $I$, $Int(J)\subseteq Iso(I)$. Furthermore, all end points of $I$, except at most two of them, must be end points of $J$, and similarly, all end points of $J$, except at most two of them, must be end points of $I$.
Indeed, when we remove the intersection of end points of $I$ and end points of $J$ from the end points of the union, two points remain. In other words, there exists two points $q,r\in [n]$ such that \[ End(I \cup J)\setminus (End(I)\cap End(J))=\{q,r\}. \] These two points can either be both end points of $I$, or both end points of $J$, or one of them an end point of $I$ and the other an end point of $J$ according to the following cases. \begin{itemize} \item[(i)] If $\|J\|=\|I\|-1$, then $\{\pi_1,\pi_2\}$ and $\{\pi_{n-1},\pi_n\}$ are adjacencies of $id$, and so $q:=\pi_1$ and $r:=\pi_n$ are end points of $I$ while both are isolated points of $J$. Therefore, we have $Int(J)=Iso(I)$, $Iso(J)\setminus\{q,r\}=Int(I)$ and $End(J)=End(I)\setminus\{q,r\}$. \item[(ii)] If $\|I\|=\|J\|$, then either $\{\pi_1,\pi_2\}$ is an adjacency of $id$ and $\{\pi_{n-1},\pi_n\}$ is an adjacency of $x$ or vice versa, $\{\pi_1,\pi_2\}$ is an adjacency of $x$ and $\{\pi_{n-1},\pi_n\}$ is an adjacency of $id$. Without loss of generality suppose $\{\pi_1,\pi_2\}$ is an adjacency of $id$ and $\{\pi_{n-1},\pi_n\}$ is an adjacency of $x$. Then $q:=\pi_1$ is an end point of $I$ and also an isolated point of $J$, while $r:=\pi_n$ is an end point of $J$ and also an isolated point of $id$ with respect to $I$, and we have $Int(J)=Iso(I)\setminus\{r\}$, $Iso(J)\setminus\{q\}=Int(I)$ and $End(J)\setminus\{r\}=End(I)\setminus\{q\}$. \item[(iii)] Finally, if $\|J\|=\|I\|+1$, then $\{\pi_1,\pi_2\}$ and $\{\pi_{n-1},\pi_n\}$ are adjacencies of $x$. Therefore, $q:=\pi_1$ and $r:=\pi_n$ are end points of $J$ and also isolated points of $I$. Furthermore, $Int(J)=Iso(I)\setminus\{q,r\}$, $Iso(J)=Int(I)$ and $End(J)\setminus\{q,r\}=End(I)$. \end{itemize} To prove sufficiency, let $q,r\in [n]$ and $J$ be a segment set contained in $x$ satisfying condition $(i)$ in the statement of the lemma (the proof is similar, for $q,r,$ and $J$ satisfying conditions $(ii)$ and $(iii)$). 
Then \[ Int(I)\cup End(I) \cup Int(J)=Int(I)\cup End(I) \cup Iso(I)=[n], \] and \[ Int(I)\cap Int(J)=Int(I)\cap Iso(J)=\emptyset . \] In fact, this shows that $I$ and $J$ complete each other in an alternating way, and $I\cup J$ is a unique segment with extremities $q$ and $r$, i.e. $End(I\cup J)=\{q,r\}$, and with intrinsic points $Int(I\cup J)=[n]\setminus \{q,r\}$. In other words, there exists a permutation $\pi$ such that $\mathcal{A}_\pi=I\cup J$. As $I$ and $J$ are disjoint, one can write $J=\mathcal{A}_\pi\setminus I\subset \mathcal{A}_x$. This finishes the proof. \end{proof} Let $I$ be a segment set in $\mathcal{I}^{(n)}$, and let $x\in X_n(I)$. From Lemma \ref{complementary}, $x$ contains a segment set $J$ satisfying one of the three conditions indicated in the statement of Lemma~\ref{complementary}. \begin{remark} Let $I$ be a segment set of $id$ and $\pi$ a permutation containing $I$. In order to construct a permutation $x$ such that $\bar{I}_\pi=\mathcal{A}_\pi\setminus I \subset \mathcal{A}_x$, we should take different rearrangements of segments of $\overline{I}_{\pi}$ (considering two directions) and intrinsic points of $I$. Each such rearrangement gives us a permutation $x\in X_n(I)$. \end{remark} In Theorem~\ref{thm:permnumb}, we give an explicit formula for the number of permutations in $X_n(I)$ as a function of the number of adjacencies and segments in $I$. To this end, we need the following lemma. \begin{lemma}\label{permutation} Given a segment set $I$ with $m$ adjacencies and $k$ segments, that is $I\in \mathcal{I}_{m,k}^{(n)}$, the number of permutations in $S_n$ containing $I$ is equal to $2^{k}(n-m)!$. \end{lemma} \begin{proof} As the segment set $I$ has $m$ adjacencies and $k$ segments, each permutation containing $I$ has $n-m-k$ isolated points with respect to $I$. Therefore, noting that segments have two directions, we have $2^{k}(k+(n-m-k))!$ permutations containing $I$. 
\end{proof} \begin{theorem}\label{thm:permnumb} Given a segment set $I$ with $m$ adjacencies and $k$ segments, that is, $I\in \mathcal{I}_{m,k}^{(n)}$, we have: \begin{multline} |X_n(I)|= \frac{2^k(m+1)!(n-m-2)!}{k!}\\ \times\left(k^2(k-1)+2k(n-m-k)+\frac{(n-m-k)(n-m-k-1)}{k+1}\right). \end{multline} \end{theorem} \begin{proof} Note that since the segment set $I$ has $m$ adjacencies and $\|I\|=k$, we have $|Int(I)|=m-k$, $|Iso(I)|=n-m-k$ and $|End(I)|=2k$. By definition, $x\in X_n(I)$ if there exists a segment set $J$ that satisfies one of the three conditions in Lemma~\ref{complementary}. We divide the proof into three cases. We shall count the number of ways we can construct $J$ in each of the three cases, and then use Lemma~\ref{permutation} to compute the number of permutations $x\in X_n(I)$ containing $J$ in each case. If $\|J\|=k-1$, then to have a permutation $\pi$ such that $\mathcal{A}_\pi$ is a sequence of alternating segments from $I$ and $J$, the number of ways that we can choose pairs of end points to construct segments of $J$ is equal to the number of ways we can rearrange the segments of $I$, noting that each segment can be placed in two different directions and $End(J)\subset End(I)$. Hence, we have $2^{k}k!$ ways to choose pairs of end points for $J$. On the other hand, when the end points of segments of $J$ are fixed, as $Int(J)=Iso(I)$, the number of ways that one can distribute (with order) $n-m-k$ intrinsic points in $k-1$ segments of $J$ is $\frac{((n-m-k)+(k-2))!}{(k-2)!}=\frac{(n-m-2)!}{(k-2)!}$. Ignoring the direction and order of segments in this calculation, we have \[ 2^kk!\frac{(n-m-2)!}{(k-2)!(k-1)!2^{k-1}}=2k\frac{(n-m-2)!}{(k-2)!} \] ways to construct segment set $J$.
Remember that each of these possible segment sets $J$ has exactly $k-1$ segments and $n-m-1$ adjacencies, and therefore, applying Lemma~\ref{permutation}, there exist \[ 2k\frac{(n-m-2)!}{(k-2)!}2^{k-1}(n-(n-m-1))!=\frac{2^k k (n-m-2)!(m+1)!}{(k-2)!} \] permutations $x\in X_n(I)$ containing $J$ that satisfies case $(i)$ of Lemma~\ref{complementary}. Similarly, if $\|J\|=k$, the number of ways that we can choose pairs of end points for segments of $J$ is equal to the number of ways we can arrange the segments of $I$, noting that each segment can be in two directions and in this case one of the end points of $J$ must be chosen from $Iso(I)$ since $End(J)\setminus\{r\}=End(I)\setminus\{q\}$ where $r$ is an end point of $J$ and an isolated point in $id$ with respect to $I$, and $q$ is an end point of $I$ and an isolated point in $x$ with respect to $J$. Therefore, we have $2(n-m-k) 2^{k}k!$ ways to choose pairs of end points in order to construct $J$. Since $|Int(J)|=|Iso(I)|-1$, we have \[ 2(n-m-k)2^{k}k!\frac{((n-m-k-1)+k-1)!}{(k-1)!2^kk!}=\frac{2(n-m-k)(n-m-2)!}{(k-1)!} \] ways to construct segment set $J$. Thus there exist \[ 2^{k}(m+1)!\frac{2(n-m-k)(n-m-2)!}{(k-1)!}=\frac{2^{k+1}(n-m-k)(m+1)!(n-m-2)!}{(k-1)!} \] permutations $x\in X_n(I)$ containing $J$ that satisfies case $(ii)$ of Lemma~\ref{complementary}. Lastly, if $\|J\|=k+1$, then $|Int(J)|=n-m-k-2$, $|Iso(J)|=m-k$ and $End(J)\setminus\{q,r\}=End(I)$ where $q$ and $r$ are end points of $J$ and isolated points of $I$. Therefore, similarly, there exist \begin{multline} 2^kk!(n-m-k)(n-m-k-1)\frac{(n-m-2)!}{k!(k+1)!2^{k+1}}2^{k+1}(m+1)!=\\ \frac{2^k(n-m-k)(n-m-k-1)(n-m-2)!(m+1)!}{(k+1)!} \end{multline} permutations $x\in X_n(I)$ containing $J$ that satisfies case $(iii)$ of Lemma~\ref{complementary}.
\end{proof} \begin{remark}[Random segment set] Applying Proposition~\ref{segmentset}, the probability of existence of a permutation $\pi\in \overline{[id,\xi^{(n)}]}$ containing a random segment set $I_m^{(n)}$ such that $\mathcal{A}_{\pi}\setminus I_m^{(n)} \subset \mathcal{A}_{\xi^{(n)}}$ is bounded by \begin{multline*} \mathbbm{P}(\xi^{(n)}\in X_n(I_m^{(n)}))=\sum\limits_{k=1}^m \frac{{m-1 \choose k-1}{n-m \choose k} 2^k(m+1)!(n-m-2)!}{{n-1 \choose m} k!n!}\\ \times\left(k^2(k-1)+2k(n-m-k)+\frac{(n-m-k)(n-m-k-1)}{k+1}\right). \end{multline*} \qed \end{remark} For $0< \varepsilon<1/2$, let \[ \Lambda_n^\varepsilon:=\bigcup\limits_{m\leq n-1} \bigcup\limits_{k\geq \varepsilon n} \mathcal{I}_{m,k}^{(n)}. \] Note that the condition $k\geq l$, for suitable $l\in [n]$, implies that $l\leq m\leq n-l$: indeed $k\leq m$, and for a segment set $I$ contained in a permutation $x$ to have at least $l$ segments, at least $l-1$ adjacencies of $x$ must not appear in $I$. The following theorem is a consequence of Lemma~\ref{permutation}. \begin{theorem}\label{thm:main} Let $0<\varepsilon<1/2$ and let $(I_n)_{n\in \mathbbm{N}}$ be a sequence of segment sets such that $I_n\in \mathcal{I}^{(n)}$ and $\varepsilon n\leq |I_n| \leq (1-\varepsilon) n$. Then \[ \frac{|X_n(I_n)|}{n!} \rightarrow 0, \] as $n\rightarrow \infty$. Furthermore, \[ \mathbbm{P}(\xi^{(n)}\in \bigcup\limits_{I\in \Lambda_n^\varepsilon} X_n(I))\rightarrow 0, \] as $n\rightarrow \infty$. \end{theorem} \begin{proof} By assumption, for every $n\in \mathbbm{N}$, there exists $c_n$ such that $\varepsilon\leq c_n \leq 1-\varepsilon$ and $|I_n|=nc_n+o(n)$.
Then, by Lemma \ref{permutation} and Stirling's formula, there exists a constant $c_0$ such that \[ \begin{array}{l} \lim\limits_{n\rightarrow \infty} \frac{|X_n(I_n)|}{n!}\\ \leq c_0 \lim\limits_{n\rightarrow \infty} \frac{ (\frac{c_n n}{e})^{c_n n+o(n)}(\frac{(1-c_n)n}{e})^{(1-c_n)n-o(n)}}{(\frac{n}{e})^n}(n^{\frac{7}{2}}+o(n^{\frac{7}{2}}))\\ \leq c_0 \lim\limits_{n\rightarrow \infty} (\varepsilon^\varepsilon (1-\varepsilon)^{1-\varepsilon})^{n+o(n)} (n^{\frac{7}{2}}+o(n^{\frac{7}{2}}))=0, \end{array} \] where the last inequality holds as the maximum of the function $f(x)=x^x(1-x)^{1-x}$ on the domain $[\varepsilon,1-\varepsilon]$ is $\varepsilon^\varepsilon (1-\varepsilon)^{1-\varepsilon}$, attained at the end points. For the second part, recall that if $I\in \Lambda_n^\varepsilon$, then $\|I\|\geq \varepsilon n$, and hence, $\varepsilon n\leq |I|\leq (1-\varepsilon)n$. For any $I\in \Lambda_n^\varepsilon$, from Theorem \ref{thm:permnumb}, we have \[ \frac{|X_n(I)|}{n!} \leq \frac{2^{\lfloor (1-\varepsilon)n\rfloor +1}\lfloor \varepsilon n\rfloor ! (\lfloor (1-\varepsilon)n\rfloor+1)!}{\lfloor \varepsilon n\rfloor ! n!}(n^3+o(n^3)). \] Therefore, $|\Lambda_n^\varepsilon|\leq 2^{n-1}$ and Stirling's formula imply \[ \begin{array}{l} \lim\limits_{n\rightarrow \infty} \mathbbm{P}(\xi^{(n)}\in \bigcup\limits_{I\in \Lambda_n^\varepsilon} X_n(I))\\ \leq \lim\limits_{n\rightarrow \infty} \frac{2^n (2e)^{(1-\varepsilon)n+o(n)} (\varepsilon^\varepsilon (1-\varepsilon)^{1-\varepsilon})^n}{(\varepsilon n)^{\varepsilon n}}(n^3+o(n^3))=0.\\ \end{array} \] \end{proof} Now we prove the main theorem of this section; namely, we prove, in part, a conjecture stated by Haghighi et al. \cite{haghighi12}. For $\varepsilon>0$, set \[ \mathcal{D}_n^\varepsilon:=\{x\in S_n : \exists \pi \in \overline{[id,x]} \ s.t. \ \ d^{(n)}(\pi,id),d^{(n)}(\pi,x)\geq \varepsilon n\}. \] Also, for $a\in \mathbbm{R}$, define \[ \Delta_n^a:=\{x\in S_n : |\mathcal{A}_{id,x}|\leq a\}.
\] \begin{theorem}\label{thm:main-main} For any $\varepsilon >0$, \[ \mathbbm{P}(\xi^{(n)}\in \mathcal{D}_n^\varepsilon)\rightarrow 0, \] as $n\rightarrow \infty$. \end{theorem} \begin{proof} Let $(a_n)_{n\in \mathbbm{N}}$ be an arbitrary sequence of real numbers diverging to $\infty$ such that $a_n / n\rightarrow 0$, as $n\rightarrow \infty$. Let \[ \Upsilon_n^\varepsilon:=\bigcup\limits_{m\in [\frac{\varepsilon}{2}n,(1-\frac{\varepsilon}{2})n]} \mathcal{I}_m^{(n)}. \] Then \[ \begin{array}{l} 0\leq \lim\limits_{n \rightarrow\infty}\mathbbm{P}(\xi^{(n)}\in \mathcal{D}_n^\varepsilon)=\lim\limits_{n \rightarrow\infty}\mathbbm{P}(\xi^{(n)}\in \mathcal{D}_n^\varepsilon \cap \Delta_n^{a_n})\leq \\ \lim\limits_{n \rightarrow\infty}\mathbbm{P}(\xi^{(n)}\in \Delta_n^{a_n} \cap \bigcup\limits_{I\in \Upsilon_n^\varepsilon} X_n(I))\leq\\ \lim\limits_{n\rightarrow\infty}\mathbbm{P}(\xi^{(n)}\in \bigcup\limits_{I\in \Upsilon_n^\varepsilon} X_n(I))=0,\\ \end{array} \] where the last convergence follows from Theorem \ref{thm:permnumb}, Theorem \ref{thm:main}, and Stirling's formula. \end{proof}
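The bound displayed in the remark above is explicit enough to evaluate numerically. The sketch below (the function name is hypothetical; the code merely transcribes the displayed sum) computes it for small parameter values:

```python
from math import comb, factorial

def prob_bound(n, m):
    """Evaluate the sum from the remark: over k = 1..m,
    C(m-1,k-1) C(n-m,k) 2^k (m+1)! (n-m-2)! / (C(n-1,m) k! n!)
    times the bracket k^2(k-1) + 2k(n-m-k) + (n-m-k)(n-m-k-1)/(k+1)."""
    total = 0.0
    for k in range(1, m + 1):
        coeff = (comb(m - 1, k - 1) * comb(n - m, k) * 2**k
                 * factorial(m + 1) * factorial(n - m - 2))
        coeff /= comb(n - 1, m) * factorial(k) * factorial(n)
        bracket = (k**2 * (k - 1) + 2 * k * (n - m - k)
                   + (n - m - k) * (n - m - k - 1) / (k + 1))
        total += coeff * bracket
    return total

print(prob_bound(10, 3))  # roughly 0.05
```

Already for $n=10$, $m=3$ the bound is small, in line with the decay established in Theorem~\ref{thm:main} for proportionally growing segment sets.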
The main arena of Volgograd was built on the demolished Central Stadium site, at the foot of the Mamayev Kurgan memorial complex. Costs continued to balloon as preparations were underway.. "I really like painting." She hopes to utilize her excellent English skills one day on a trip to the US or Canada. So, in the spirit of the holidays, if you feel so inclined to donate to Bee Downtown Foundation to help.. However, the Sallie Mae Mastercard only offers the 5% cash back on gas (and groceries) on the first $250 spent each month. To help deal with that, we have built a tool which allows you to see what rewards you can expect, given your particular spending habits. This appears to.. The Bath and Body Works Semi-Annual Sale 2018 comes twice a year, in Winter and Summer; the Winter sale will run from November to December 2018 and comes with free shipping with your.. At exit 51H merge onto Interstate 290 East toward downtown Chicago.
They also offer both online and in-store sales. Sale40 can give you a discount on your order of 100 or more. Mens Body Care: Buy 2 Get 1 Free. Take the Franklin Street exit and turn right onto South Franklin Street. The Bath and Body Semi-Annual Winter Sale will start in December-January 2018. New Mexico, Florida, Georgia, Hawaii, New Jersey, Idaho, Illinois, Maryland, Indiana, Tampa, Atlanta, Florida, Chicago, Orlando, Houston, San Diego, Indianapolis and New York. I mean to say the first one is for summer and the other one is in winter. MDW Airport Phone: , Hotel direction: .1 miles NE. Driving directions: follow Cicero Avenue north to Interstate 55 North to Interstate 94 West toward Wisconsin. Tags: bath and body works semi annual sale, bath and body works semi annual sale dates, bath and body works semi annual sale dates 2018, bath and body works semi annual sale winter 2018. All their customers are allowed to stack your coupon code with any ongoing sale. Exit at Washington Boulevard and make a slight left onto Washington Street. Once you are ready with the list, visit the nearest sale store to buy. They follow the same pattern as the Victoria's Secret Semi-Annual Sale. Meanwhile, Voight (Jason Beghe) calls in a favor as a last-ditch effort to clear Olinsky's (Elias Koteas) name for Bingham's murder. December 2018 at all Bath and Body locations. Or if you check online you'll also find some of the latest updates. Hotel direction: 18 miles SE. Driving directions: follow Interstate 190 East to Interstate 90 and continue toward downtown Chicago. Press Release "Chicago PD" "Allegiance" (10:00PM - 11:00PM) (Wednesday): Halstead (Jesse Lee Soffer) and Atwater (LaRoyce Hawkins) go undercover to prevent military-grade weapons from hitting the street.
Signature Body Lotion 4, Classics Body Care will be. The Bath and Body Annual Sale would start in June-July, and another would be in December-January of the same year.
Q: Strange behaviour of RegionIntersection Consider this list of inequalities listCond = {1.72149*10^-17 x1 - 1.85662*10^-10 x2 + 1.34154*10^-9 x3 > 0, 0.0339918 x1 + 0.00037422 x2 - 0.00244772 x3 > 0, 8.91888*10^-11 x1 - 3.22923*10^-7 x2 - 7.94529*10^-7 x3 > 0, 0.0450231 x1 + 0.0684201 x2 + 0.168349 x3 > 0, 1.89778*10^-8 x1 - 0.0000293102 x2 - 0.000177299 x3 > 0, 0.0251657 x1 + 0.0191776 x2 + 0.101373 x3 > 0, 1.51228*10^-17 x1 - 6.09044*10^-10 x2 - 5.12373*10^-10 x3 > 0, 0.0473469 x1 + 0.00463677 x2 + 0.00387406 x3 > 0, 9.43537*10^-10 x1 - 7.24241*10^-6 x2 - 0.0000162489 x3 > 0, 0.0324393 x1 + 0.0351078 x2 + 0.079777 x3 > 0, 5.08188*10^-10 x1 + 6.15116*10^-6 x2 - 1.19815*10^-6 x3 > 0, 0.0532417 x1 - 0.0116377 x2 + 0.00226581 x3 > 0, 3.22738*10^-17 x1 - 1.03283*10^-9 x2 - 4.76066*10^-9 x3 > 0, 0.0171107 x1 + 0.00197426 x2 + 0.00901362 x3 > 0, 1.66869*10^-8 x1 - 0.000033618 x2 - 0.000082661 x3 > 0, 0.0279473 x1 + 0.0381579 x2 + 0.0939938 x3 > 0, 1.89097*10^-18 x1 - 7.63154*10^-11 x2 - 1.50989*10^-10 x3 > 0, 0.0100021 x1 + 0.026301 x2 + 0.0503768 x3 > 0, 6.315*10^-16 x1 - 4.74586*10^-9 x2 - 1.48738*10^-8 x3 > 0, 0.0398816 x1 + 0.0482081 x2 + 0.151094 x3 > 0, 2.96169*10^-10 x1 + 4.84749*10^-6 x2 + 4.78587*10^-7 x3 > 0, 0.00977103 x1 - 0.0106303 x2 - 0.00105459 x3 > 0, 8.56294*10^-10 x1 + 6.8701*10^-6 x2 - 2.95241*10^-9 x3 > 0, 0.0428018 x1 - 0.023279 x2 + 9.41786*10^-6 x3 > 0} If I ask to RegionIntersection if there is an intersection, I get EmptyRegion[3] Print[RegionIntersection @@ (ImplicitRegion[#, Evaluate@{x1,x2,x3}] & /@(listCond)) // RepeatedTiming]; (* Out: {0.0092,EmptyRegion[3]} *) If instead I use FindInstance, I get a solution to the system of inequalities listCond // FindInstance[#, {x1, x2, x3}, 2] & (* Out: {{x1 -> 131., x2 -> -0.00789569, x3 -> -0.000611689}, {x1 -> 109., x2 -> 2.17051*10^-6, x3 -> -5.00551*10^-7}} *) Why is RegionIntersection returning no solution?? 
Moreover, If I plot the region selected by the inequalities for $x_3=1$ I get a huge region RegionPlot[listCond /. List -> And /. x3 -> 1, {x1, 0, 10^8}, {x2, -10, 10}] A: Seems to be an issue with floating point numbers. Converting to exact numbers gives back an ImplicitRegion: N[RegionIntersection[ ImplicitRegion[Rationalize[#, 0], {x1, x2, x3}] & /@ listCon])] // Head ImplicitRegion
\section{Introduction} The concept of time reversal (TR) invariant topological insulators (TIs) characterizes a new class of materials that are insulating in the interior of a sample, but whose surfaces contain robust conducting channels protected by TR symmetry.\cite{RevModPhys823045, RevModPhys831057,moore2010,qi2010phystoday} Since the metallic surface states of TIs can be described by Dirac fermions with exotic physical properties, researchers have been interested in pursuing different types of topological materials.\cite{RevModPhys823045, RevModPhys831057} TR invariant TIs can exist in both two dimensions and three dimensions. TIs in two dimensions, also known as ``quantum spin Hall insulators", were first predicted theoretically.\cite{PhysRevLett95226801,PhysRevLett96106802,Bernevig15122006} The quantum spin Hall effect was experimentally observed in HgTe/CdTe quantum wells (QWs)\cite{König02112007} and in InAs/GaSb type II QWs \cite{PhysRevLett107136603,PhysRevB81201301,2013arXiv13061925D}. In three dimensions, TR invariant TIs can be further classified into two categories, strong TIs and weak TIs.\cite{PhysRevLett98106803,PhysRevB76045302,PhysRevB75121306,PhysRevB79195322} Strong TIs have an odd number of Dirac cones at their surfaces and can be characterized by one $Z_2$ topological invariant, while the surface states of weak TIs contain an even number of Dirac cones and three additional $Z_2$ topological invariants are required.\cite{PhysRevB76045302} Strong TIs have been realized experimentally in various classes of materials, including Bi$_{1-x}$Sb$_{x}$, \cite{PhysRevB76045302,PhysRevB80085307,nature06843} Bi$_{2}$Se$_{3}$ family,\cite{Nphys101038,Chen10072009,PhysRevLett103146401} $\mathrm{TlBiTe_2}$ family,\cite{PhysRevLett105266401,PhysRevLett105136802} $\mathrm{PbBi_{2}Se_{4}}$ family, \cite{2010arXiv10075111X,PhysRevB83041202} strained bulk HgTe,\cite{PhysRevLett98106803,PhysRevLett106126803,PhysRevLett107136803} {\it etc}. 
In contrast, only a few practical systems for weak TIs have been proposed theoretically \cite{PhysRevLett109116406,PhysRevB84075105,2013arXiv13078054T} and, to our knowledge, no experiments on weak TIs have been reported. Surface states of weak TIs are also expected to exhibit intriguing phenomena, such as one-dimensional helical modes along dislocation lines,\cite{ranyingnaturephys} the weak anti-localization effect,\cite{PhysRevLett108076804} the half quantum spin Hall effect,\cite{Liu2012906} {\it etc}.\cite{PhysRevB84035443,PhysRevB86245436,PhysRevB88045408,PhysRevB86045102,2012arXiv12126191F,PhysRevLett109246605} Impurity scattering is reduced for surface states of TIs, so electric currents can flow with low dissipation. This leads to wide-ranging interest in device applications of TIs. To search for new topological materials with robust physical properties, it is essential to engineer electronic band structures with the desired features. A useful and intuitive physical picture to understand TIs is the concept of ``band inversion''. Band inversion means that the band orderings of conduction and valence bands are changed at some high symmetry momenta in the Brillouin zone (BZ), so that the band dispersion cannot be adiabatically connected to the atomic limit of the system under certain symmetries.\cite{PhysRevB79195321,PhysRevB76045302,Bernevig15122006} In other words, the band gap changes its sign from positive to negative when band inversion occurs (we usually define the sign of a normal band gap to be positive). Indeed, all the well-known topological materials have ``inverted'' band structures. A large variety of experimental methods can be applied to tune band gaps, such as controlling chemical compositions by doping,\cite{nature06843,nmat3449,PhysRevLett110206804} applying strain,\cite{PhysRevLett106126803} {\it etc}.
In this paper, we propose that band inversion can be controlled by constructing superlattice structures, which has the obvious advantage of controllability. We will consider the PbTe/SnTe superlattice, a typical semiconductor superlattice made of IV-VI group compounds, as an example due to its simplicity. We will show that weak TIs can be achieved for PbTe/SnTe superlattices grown along the [001] direction. Remarkably, the weak TI phase we found is \emph{not} equivalent to a stack of 2D quantum spin Hall layers in the [001] direction. Instead, the nontrivial topology arises from the folding of the BZ, which plays an essential role in inducing band inversion. This mechanism can in principle be generalized to search for other topological phases, including strong TIs and topological crystalline insulators, and paves the way to engineering topological phases in artificial structures. This paper is organized in the following way. In section II, we briefly review the bulk properties of PbTe and SnTe, and introduce our calculation methods. In section III, we will show the evolution of the band structure from PbTe to a $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ superlattice step by step to illustrate how we obtain weak TIs. We calculate topological invariants, as well as surface states of the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ superlattice, to demonstrate the weak TI phase in this system. We will also discuss experimental realization and other possibilities to realize weak TIs based on the same mechanism. The conclusion is drawn in section IV. \section{Superlattice configuration and calculation methods} We start with a review of material properties of bulk PbTe and SnTe. PbTe and SnTe have face-centered cubic NaCl-type structures with the corresponding BZ shown in Fig. \ref{pic:fig1}. Both materials are narrow gap semiconductors with multiple applications in thermoelectricity, infrared diodes and even superconductivity\cite{PhysRevB5513605,PhysRevB75195211,PhysRevB81245120}.
The band gaps of these materials are located at the center of the hexagon on the BZ boundary, usually denoted as L points (see Fig. \ref{pic:fig1}(b)). There are four L points in total, which are related to each other by mirror symmetry. It was shown that the band gap of SnTe has the opposite sign to that of PbTe.\cite{ncomms101038} Consequently, the systems consisting of SnTe and PbTe may possess topologically non-trivial properties. For example, recent interest in these IV-VI semiconductors is stimulated by the prediction that SnTe represents a new type of topological phase dubbed ``topological crystalline insulators'', which host gapless surface states protected by mirror symmetry. \cite{PhysRevLett106106802,ncomms101038,fang2012a,slager2012,jw} This finding was recently confirmed by the experimental observation of surface states in the SnTe family of materials.\cite{nphys2442,xu2012a,nmat3449} With additional uniaxial strain along the [111] direction, both PbTe and SnTe are expected to be strong TIs.\cite{PhysRevB76045302,PhysRevLett105036404} A more recent theoretical work \cite{PhysRevB85205319} shows that, in a large thickness range, PbTe/SnTe superlattices along the [111] direction exhibit properties similar to a strong TI phase. In this work, we will consider a superlattice with alternating stacks of SnTe and PbTe layers along the [001] direction, denoted as (PbTe)$_m$(SnTe)$_{2n-m}$, where $m$ and $2n-m$ represent the number of PbTe and SnTe layers, respectively. Fig. \ref{pic:fig1}(a) shows the (PbTe)$_1$(SnTe)$_1$ superlattice as an example. The calculations are performed within the framework of density functional theory (DFT) using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation\cite{PhysRevLett773865} and the projector augmented wave (PAW) potential\cite{PhysRevB5017953}, as implemented in the Vienna \emph{ab initio} simulation package (VASP)\cite{PhysRevB5411169}. The spin-orbit coupling is included in all the calculations.
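For orientation, the computational settings quoted in this section correspond to a minimal VASP input along the following lines (a sketch only; any tag or value not stated in the text, such as the smearing scheme, is an illustrative assumption):

```
# INCAR (sketch; only the cutoff, functional, and spin-orbit flag are fixed by the text)
ENCUT   = 340        # plane-wave cutoff, eV
LSORBIT = .TRUE.     # include spin-orbit coupling
GGA     = PE         # PBE exchange-correlation functional
ISMEAR  = 0          # assumed Gaussian smearing (not stated in the text)
SIGMA   = 0.05       # assumed smearing width, eV (not stated in the text)

# KPOINTS (bulk): 10x10x10 Monkhorst-Pack grid
Automatic mesh
0
Monkhorst-Pack
10 10 10
0 0 0
```

For slab (surface) calculations, the text specifies a 10x10x1 grid instead.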
The energy cutoff of the plane-wave basis is 340 eV. The 10$\times$10$\times$10 and 10$\times$10$\times$1 Monkhorst-Pack \emph{k} points are used for bulk and surface calculations, respectively. The lattice parameters are obtained by structural optimization. Different topological phases can be determined by calculating $Z_2$ topological invariants.\cite{PhysRevB74195312,PhysRevB76045302} In three dimensions, there are four topological invariants: one strong topological index and three weak topological indices. With space inversion symmetry preserved, the topological invariants can be evaluated easily using the parity of occupied states at the eight time-reversal-invariant momenta (TRIM) $\Gamma_i$ ($i=1,\cdots,8$).\cite{PhysRevB76045302} The strong topological index $\nu_{0}$ is given by \begin{equation} \label{equ:strongTI} (-1)^{\nu_{0}}=\prod_{i=1}^{8} \delta_{\Gamma_i}, \end{equation} and the three weak topological indices $\nu_{k}$ are given by \begin{equation} \label{equ:weakTI} (-1)^{\nu_{k}}=\prod _{n_{k}=1;n_{j\neq k}=0,1} \delta_{\Gamma_{i=(n_{1},n_{2},n_{3})}}, \end{equation} where $\delta_{\Gamma_i}=\prod _{n \in occ} \xi _{2n}(\Gamma _{i})$ is the product of the parities of all occupied states for one time-reversal copy at TRIM $\Gamma _{i}=\Gamma_{i=(n_{1},n_{2},n_{3})}=(n_{1}\mathrm{\vec{b}_{1}}+n_{2}\mathrm{\vec{b}_{2}}+n_{3}\mathrm{\vec{b}_{3}})/2 $ and the $\mathrm{\vec{b}_{i}}$ are the reciprocal lattice vectors.\cite{PhysRevB76045302} A strong TI phase is determined by the index $\nu_0$ while a weak TI phase is characterized by a vector $\bar\nu=(\nu_1,\nu_2,\nu_3)$.
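Equations (\ref{equ:strongTI}) and (\ref{equ:weakTI}) reduce to simple sign bookkeeping once the parity products $\delta_{\Gamma_i}$ are known. This can be sketched as follows (a toy illustration, not the \emph{ab initio} workflow; the parity pattern at the end is a synthetic example, not computed data):

```python
from itertools import product

def z2_indices(delta):
    """Z2 invariants from parity products delta[(n1, n2, n3)] = +1 or -1
    at the eight TRIM, following Eqs. (1) and (2)."""
    trims = list(product((0, 1), repeat=3))
    # strong index: product of delta over all eight TRIM
    p0 = 1
    for t in trims:
        p0 *= delta[t]
    nu0 = 0 if p0 == 1 else 1
    # weak indices: product over the four TRIM with n_k = 1
    nus = []
    for k in range(3):
        pk = 1
        for t in trims:
            if t[k] == 1:
                pk *= delta[t]
        nus.append(0 if pk == 1 else 1)
    return nu0, tuple(nus)

# synthetic example: band inversion flips delta at two TRIM only
delta = {t: 1 for t in product((0, 1), repeat=3)}
delta[(0, 0, 0)] = -1
delta[(1, 1, 0)] = -1
print(z2_indices(delta))  # (0, (1, 1, 0))
```

With band inversion at exactly two TRIM, the strong index vanishes while the weak indices can be nonzero; the pattern $(0;110)$ produced here is the same class of result found for the superlattice below.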
Weak TIs can be understood as a stacking of 2D TI layers along the direction $\mathrm{\vec{G}_{\nu}=\nu_{1}\vec{b}_{1}+\nu_{2}\vec{b}_{2}+\nu_{3}\vec{b}_{3}}$.\cite{PhysRevB76045302} On the surfaces with Miller index $h \neq \bar{\nu}(mod2)$ in weak TIs, an even number of Dirac surface states can appear.\cite{PhysRevB76045302} As a check of our numerical methods, we calculate the parity of all occupied states at the TRIM for bulk SnTe and PbTe, as shown in Table \ref{table:parity}. As expected, both the strong topological index and the weak topological indices are trivial for SnTe and PbTe, which is consistent with the previous analysis.\cite{PhysRevB76045302} The underlying reason is that there are four L points with $\delta_{\Gamma_i}=+$ for SnTe and $\delta_{\Gamma_i}=-$ for PbTe ($\Gamma_i=\mathrm{L_{1,2,3,4}}$), as shown in Table \ref{table:parity}. However, since both the strong topological index (\ref{equ:strongTI}) and the weak topological indices (\ref{equ:weakTI}) contain an even number of L points, the product of $\delta_{\Gamma_i}$ always gives a $+$ sign. Therefore, in order to achieve topologically non-trivial phases, it is essential to reduce the number of equivalent L points, which can be achieved by a superlattice structure, as discussed in detail below. \begin{table}[htbp] \caption{Parity and irreducible representation table $\xi(\Gamma_{i})$ of occupied states at TRIM $\Gamma _{i}$. $\delta_{\Gamma_i}$ is the parity product of occupied states at $\Gamma _{i}$.
There are eight TRIM: one $\Gamma$ point, 3 equivalent X points, and 4 equivalent L points.} \begin{tabular}{p{2.75cm}p{3.5cm}p{2cm}} \hline \hline PbTe & $\xi(\Gamma_{i})$ & $\delta_{\Gamma_i}$ \\ 1$\Gamma$ & $\Gamma_{6}^{+}\Gamma_{6}^{+}\Gamma_{6}^{-}2\Gamma_{8}^{-}$ & $-$ \\ 3X & $\mathrm{X_{6}^{+} X_{6}^{+} X_{6}^{-} X_{6}^{-} X_{7}^{-}}$ & $-$ \\ 4L & $\mathrm{L_{6}^{-} L_{6}^{+} L_{6}^{+} L_{45}^{+} L_{6}^{+}}$ & $-$ \\ $Z_{2}$ index & (0;000) & \\ \hline SnTe & $\xi(\Gamma_{i})$ & $\delta_{\Gamma_i}$ \\ 1$\Gamma$ & $\Gamma_{6}^{+}\Gamma_{6}^{+}\Gamma_{6}^{-}2\Gamma_{8}^{-}$ & $-$ \\ 3X & $\mathrm{X_{6}^{+} X_{6}^{+} X_{6}^{-} X_{6}^{-} X_{7}^{-}}$ & $-$ \\ 4L & $\mathrm{L_{6}^{-} L_{6}^{+} L_{6}^{+} L_{45}^{+} L_{6}^{-}}$ & + \\ $Z_{2}$ index & (0;000) & \\ \hline \hline \end{tabular} \label{table:parity} \end{table} \section{Weak topological insulators in PbTe/SnTe superlattices} \begin{figure}[h] \subfigure{ \includegraphics[width=3in]{Fig1.eps} } \caption{(a) The primitive cell of the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ [001] superlattice. (b) Brillouin zone (BZ) and TRIM for PbTe and the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ [001] superlattice: the black lines are the BZ of the PbTe primitive cell with the eight TRIM marked by unprimed Greek letters (the red parallelepiped); the vertices of the blue parallelepiped are TRIM marked with primed Greek letters in the BZ of the superlattice. } \label{pic:fig1} \end{figure} The band structure of a PbTe/SnTe superlattice can be understood from a simple physical picture in two steps. Let us consider the (PbTe)$_1$(SnTe)$_1$ superlattice along the [001] direction as an example. The first step is to transform from the primitive cell of bulk PbTe to the $\mathrm{(PbTe)_{1}(PbTe)_{1}}$ superlattice cell along the [001] direction.
The corresponding primitive lattice vectors are changed from ${\vec{a}_{1}=a(\hat{y} +\hat{z});\vec{a}_{2}=a(\hat{z}+\hat{x});\vec{a}_{3}=a(\hat{x}+\hat{y})}$ of a face-centered cubic lattice, where $a$ is the distance between Pb and nearest Te atom, to ${\vec{\alpha}_{1}=a(\hat{x}-\hat{y});\vec{\alpha}_{2}=a(\hat{x}+\hat{y});\vec{\alpha}_{3}=2a\hat{z}}$ for the superlattice cell, as shown in Fig. \ref{pic:fig1}(a). Since the superlattice cell is twice the primitive cell, the corresponding BZ of the superlattice is folded and becomes half the original one. As shown in Fig. \ref{pic:fig1}, eight TRIM ($\Gamma_i=\Gamma$, 3X, 4L) are transformed to four TRIM $\Gamma_i^{\prime}$ ($i=1,\cdots,4$) in the folded BZ: $\Gamma$ and $\mathrm{X_{1}}$ is projected to a single point $\Gamma^{\prime}_1=\Gamma^{\prime}$, $\mathrm{X_{2}}$ and $\mathrm{X_{3}}$ to $\Gamma^{\prime}_2=\mathrm{X_{2}^{\prime}}$, $\mathrm{L_{0}}$ and L$_{3}$ to $\Gamma^{\prime}_3=\mathrm{L}_{0}^{\prime}$, L$_{1}$ and L$_{2}$ to $\Gamma^{\prime}_4=\mathrm{L}_{1}^{\prime}$. Besides, there will be four new TRIM in the folded BZ, denoted as $\Lambda^{\prime}_{1,2,3,4}=\mathrm{Z}^{\prime},\mathrm{R}^{\prime},\mathrm{M}_{1}^{\prime},\mathrm{M}_{2}^{\prime}$ in Fig. \ref{pic:fig1}(b). Since the BZ is reduced, band dispersion should also be folded (see Fig. \ref{pic:fig2}(a)). Consequently, the $\delta_{\Gamma^{\prime}_i}$s in the folded BZ are just the product of the $\delta_{\Gamma_i}$ at the corresponding TRIM in the original BZ, e.g. $\delta_{\Gamma^{\prime}}=\delta_{\Gamma}\delta_{\mathrm{X}_{1}}=1$. The new emerging TRIM $\Lambda_i^{\prime}$ are at the boundary of the folded BZ, so one can combine the wavefunction at $\Lambda_i^{\prime}$ with that at $-\Lambda_i^{\prime}$ to form the bonding and anti-bonding states that are the eigenstates of inversion operation. Since the bonding and anti-bonding states have opposite parities, the $\delta_{\Lambda^{\prime}_i}$s at these new TRIM take the value of $-1$. 
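The parity bookkeeping of this folding step can be made explicit with a short sketch (helper names are hypothetical; the $\delta=-1$ assignment at the new TRIM follows the bonding/anti-bonding argument above, which here assumes an odd number of occupied folded pairs):

```python
def fold_deltas(delta_gamma, delta_x, delta_l):
    """Parity products at the TRIM of the folded [001] superlattice BZ.
    delta_x = (X1, X2, X3) and delta_l = (L0, L1, L2, L3) are the
    parity products at the TRIM of the original face-centered cubic BZ."""
    folded = {
        "Gamma'": delta_gamma * delta_x[0],  # Gamma and X1 fold together
        "X2'":    delta_x[1] * delta_x[2],   # X2 and X3 fold together
        "L0'":    delta_l[0] * delta_l[3],   # L0 and L3 fold together
        "L1'":    delta_l[1] * delta_l[2],   # L1 and L2 fold together
    }
    # the new TRIM host bonding/anti-bonding pairs of opposite parity
    for new_trim in ("Z'", "R'", "M1'", "M2'"):
        folded[new_trim] = -1
    return folded

# PbTe before substitution: delta = -1 at Gamma, all X, and all L (Table I)
print(fold_deltas(-1, (-1, -1, -1), (-1, -1, -1, -1)))
```

For pure PbTe this gives $\delta=+1$ at $\Gamma'$, $\mathrm{X}_2'$, $\mathrm{L}_0'$, $\mathrm{L}_1'$ and $\delta=-1$ at the four new TRIM, so the product over all eight TRIM stays $+1$ and the indices remain trivial, as stated below.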
Thus, from $\delta_{\Gamma^{\prime}_i}=1$ ($i=1,2,3,4$) and $\delta_{\Lambda^{\prime}_i}=-1$ ($i=1,2,3,4$), one finds that both strong topological index and weak topological indices remain unchanged, which is expected since the lattice remains the same in this step. \begin{figure}[h] \centering \subfigure{ \includegraphics[width=3.2in]{Fig2.eps} } \caption{(a) Band structure evolution from PbTe to $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ superlattice, by taking Te 5s band along the $\Gamma X_1$ as an example. (b) Schematic evolution of energy levels around the Fermi surface, which leads to band inversion. In a superlattice structure, the states from L$_{1}$ and L$_{2}$ are brought together and the degeneracy is removed by doping Sn, resulting in a band inversion. The $+$ or $-$ sign denotes the parity of state at TRIM.} \label{pic:fig2} \end{figure} Next, we substitute one Pb atom by one Sn atom in the superlattice cell as shown in Fig. \ref{pic:fig1}(a). This does not break inversion symmetry and thus we can still use Eqs. (\ref{equ:strongTI}) and (\ref{equ:weakTI}) to evaluate topological invariants. Introducing Sn atoms induces the interaction between the states at $\mathrm{L}^{\prime}_{0,1}$ and splits the degeneracy. When the splitting is large enough, band inversion occurs at these momenta. In Fig. \ref{pic:fig2}(b), we denote the wavefunctions of conduction bands as $\varphi_{1,2}$ (odd parity) and those of valence bands as $\psi_{1,2}$ (even parity) at L$_{1,2}$. After replacing atoms, both $\varphi_{1,2}$ and $\psi_{1,2}$ are no longer the eigen-states of the (PbTe)$_1$(SnTe)$_1$ superlattice and they will hybridize to form new eigenstates $\varphi_{a,b}$ and $\psi_{a,b}$, as shown in Fig. \ref{pic:fig2}(b). Due to level repulsion, the state $\psi_a$ of the valence band maximum will be pushed up while the state $\varphi_b$ of the conduction band minimum will be pushed down. 
Since $\psi_a$ and $\varphi_b$ have opposite parities, band inversion occurs when $\psi_a$ and $\varphi_b$ change their sequences, as shown in the second step of Fig. \ref{pic:fig2}(b). A similar situation happens at L$_0^{\prime}$. Thanks to the reduction in the number of L points from four to two in the first step, the band inversion at $\mathrm{L}_{0,1}^{\prime}$ can lead to a weak TI phase in PbTe/SnTe superlattices. The replacement by Sn atoms will also split the degeneracy at the TRIM $\Lambda^{\prime}_i$, as shown in Fig. \ref{pic:fig2}(a). However, since the initial gaps at these TRIM are huge, the splitting will not change any band sequences. It should be emphasized that this mechanism does \emph{not} rely on the inverted band structure of SnTe. Instead, band inversion originates from the strong coupling between the states at equivalent L points due to the folding of the BZ in a superlattice structure. Therefore, the replacement of Pb atoms by Sn atoms will change the sign of $\delta_{\mathrm{L}^{\prime}_0}$ and $\delta_{\mathrm{L}^{\prime}_1}$ but leave the $\delta_{\Gamma^{\prime}_i}$ and $\delta_{\Lambda^{\prime}_i}$ at other TRIM unchanged. The strong topological index should still be zero since there is always an even number of band inversions. Nevertheless, the weak topological indices can be nonzero, so we carry out an {\it ab initio} calculation for the (PbTe)$_1$(SnTe)$_1$ superlattice and the energy dispersion is shown in Fig. \ref{pic:fig3}. A Mexican-hat shape of dispersion appears around L$_0^{\prime}$, indicating the occurrence of band inversion. Furthermore, we check the $\delta_{\Gamma_i}$ at all TRIM $\Gamma^{\prime}_i$ and $\Lambda^{\prime}_i$, as shown in Table \ref{table:parity2}. We find the weak TI indices $\bar{\nu}=(110)$, so the present system can be viewed as a stacking of two dimensional TIs along the x direction.
(Since $\mathrm{\vec{\beta_{1}}=\pi(\hat{x}-\hat{y})/a}$, $\mathrm{\vec{\beta_{2}}=\pi(\hat{x}+\hat{y})/a}$, $\mathrm{\vec{\beta_{3}}=\pi\hat{z}/a}$ are the reciprocal lattice vectors for the (PbTe)$_{1}$(SnTe)$_{1}$ superlattice, $\vec\beta_{1}+\vec\beta_{2}$ equals $2\pi\hat{x}/a$). \begin{figure}[t] \centering \subfigure{ \includegraphics[width=3in]{Fig3.eps} } \caption{The band structure of the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ [001] superlattice, with the high symmetry points defined in Figure \ref{pic:fig1}(b). The Mexican-hat shape of dispersion around $L_0'$ indicates a band inversion. The band gap is at the $L_0'-R'$ line. } \label{pic:fig3} \end{figure} For a weak TI with $\bar{\nu}=(110)$, surface states with an even number of Dirac cones are expected on the surfaces with Miller index $h \neq \bar{\nu}(mod2)$. Thus, we consider a slab configuration of the (PbTe)$_1$(SnTe)$_1$ superlattice along the [001] direction and directly calculate surface states with the {\it ab initio} method. Indeed, as shown in Fig. \ref{pic:fig5}, surface states are found around $\rm\bar{X}^{\prime}_1$, which is the projection of $\mathrm{L_{0}^{'}}$ onto the surface BZ of the superlattice. The Dirac point is located exactly at the TRIM $\rm\bar{X}_1^{\prime}$ (the tiny gap of surface states at $\rm\bar{X}_1^{\prime}$ is due to the finite size effect of a slab configuration). According to the mirror symmetry or the four-fold rotation symmetry that relates $\rm\bar{X}_1^{\prime}$ to $\rm\bar{X}_2^{\prime}$, we expect another Dirac point at the TRIM $\rm\bar{X}_2^{\prime}$. The degeneracy at $\rm\bar{X}^{\prime}_{1,2}$ is protected by TR symmetry according to Kramers' theorem. Therefore, our calculation of topological surface states is consistent with the analysis of bulk topological invariants, confirming that the (PbTe)$_1$(SnTe)$_1$ superlattice is a weak TI. Similar to the case of strong TIs, the backscattering in one Dirac cone is completely suppressed due to the helical nature of the spin texture.
Since the two $\rm\bar{X}^{\prime}$ points are well separated in momentum space, the scattering between the two Dirac cones is negligible for impurities with smooth potentials. Remarkably, the surface states here are qualitatively different from the surface states of SnTe, which consist of \emph{four} Dirac points at \emph{non-}TRIM and are protected by \emph{mirror} symmetry instead of TR symmetry. For the (PbTe)$_1$(SnTe)$_1$ superlattice, mirror symmetry can also play a role. Actually, there is additional protection of the gapless Dirac points at $\rm\bar{X}_1^{\prime}$ and $\rm\bar{X}_2^{\prime}$ by the mirror symmetry with respect to the $(1\bar{1}0)$ plane (the plane along the line $\bar{\Gamma}'-\bar{X}'_1$ in Fig. \ref{pic:fig5} and perpendicular to the surface) and the (110) plane (the plane along the line $\bar{\Gamma}'-\bar{X}'_2$ in Fig. \ref{pic:fig5}), respectively. Since there is only one Dirac cone at one mirror plane, the mirror Chern number $C_m$ should be $1$ in the present system, in contrast to $C_m=2$ in bulk SnTe. Therefore, the (PbTe)$_1$(SnTe)$_1$ superlattice can also be regarded as a TCI with mirror Chern number $C_m=1$. When TR symmetry is broken but mirror symmetry is preserved, e.g. with an in-plane magnetic field along the [1$\bar{1}0$] or [110] direction, the gapless nature of the Dirac cones should still remain. 3D weak TIs are usually constructed by stacking 2D TIs, such as the layered semiconductors discussed in Ref. [\onlinecite{PhysRevLett109116406,2013arXiv13078054T}]. In these cases, if we take the stacking direction as the $z$ direction, the corresponding weak topological indices are $(001)$. In contrast, the weak topological indices $(110)$ of our system are different from the growth direction $(001)$ of the superlattices. Thus, the underlying mechanism of our system is not the stacking of 2D TIs, but the folding of the BZ, as discussed above.
In our system, two surface Dirac cones appearing at $\bar{X}'_1$ and $\bar{X}'_2$ of $(001)$ surfaces are related to each other by four-fold rotation symmetry or mirror symmetry. When there is scattering between two Dirac cones, charge density waves can occur at (001) surfaces, giving rise to the half quantum spin Hall effect proposed in Ref. [\onlinecite{Liu2012906}]. \begin{figure}[t] \centering \subfigure{ \includegraphics[width=3in]{Fig4.eps} } \caption{The energy dispersion of a slab configuration for the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ [001] superlattice. The shadow indicates the regime of bulk dispersion. A surface state with a Dirac cone appears in the bulk band gap at $\bar{\rm X}'_1$. The inset shows the surface BZ of the slab. ${\bar{\Gamma }}^{\prime}$, $\mathrm{{\bar{{\rm X}}_{1}}^{\prime}}$, $\mathrm{{\bar{M}_{1}}^{\prime}}$ and $\mathrm{{\bar{{X}}_{2}}^{\prime}}$ are the projections from $\Gamma ^{\prime}$, M$_{1}^{\prime}$, X$_{2}^{\prime}$ and M$_{2}^{\prime}$ on the (001) surface BZ. L$_{0}^{\prime}$ is projected to ${\mathrm{\bar{X}_{1}}^{\prime}}$. ${\mathrm{\bar{X}_{1}}^{\prime}}$ and ${\mathrm{\bar{\rm X}_{2}}^{\prime}}$ are equivalent due to the mirror symmetry with respect to the (100) plane (the plane along the $\bar{\Gamma}'-\bar{\rm M}_1'$ line).} \label{pic:fig5} \end{figure} \begin{table}[htbp] \caption{Parity table $\xi(\Gamma_{i})$ of occupied states at TRIM $\Gamma _{i}$ for the $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ superlattice. $\delta_{\Gamma_i}$ is the parity product of occupied states at $\Gamma _{i}$. There are eight TRIM: $\mathrm{\Gamma ^{\prime}, X_{2}^{\prime}, L_{0}^{\prime}, L_{1}^{\prime}, Z^{\prime}, R^{\prime}, M_{1}^{\prime}, M_{2}^{\prime}}$. L$_{0}^{\prime}$ and L$_{1}^{\prime}$ are equivalent; M$_{1}^{\prime}$ and M$_{2}^{\prime}$ are equivalent. 
} \begin{tabular}{p{2.75cm}p{3.75cm}p{1.6cm}} \hline \hline $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ & $\xi(\Gamma_{i})$ & $\delta_{\Gamma_i}$ \\ $\Gamma ^{\prime}$ & $+ + + + - - - - - -$ & $+$ \\ X$_{2}^{\prime}$ & $+ + + + - - - - - -$ & $+$ \\ $\mathrm{L_{0}^{\prime}(L_{1}^{\prime})}$ & $- - + + + + + + + -$ & $-$ \\ Z$^{\prime}$ & $+ - + - - + + - + -$ & $-$ \\ R$^{\prime}$ & $- + + - + - - + + -$ & $-$ \\ $\mathrm{M_{1}^{\prime}(M_{2}^{\prime})}$ & $+ - + - - + - + + -$ & $-$ \\ $Z_{2}$ index & (0;110) & \\ \hline \hline \end{tabular} \label{table:parity2} \end{table} A similar discussion also applies to other (PbTe)$_m$(SnTe)$_{2n-m}$ superlattices ($n$ and $m$ are integers), and it turns out that the weak TI phase is quite robust. The BZ of a (PbTe)$_m$(SnTe)$_{2n-m}$ superlattice can be obtained by simply folding the BZ of the (PbTe)$_{1}$(SnTe)$_{1}$ superlattice along the $z$ direction. The two L$^{\prime}$ points of the (PbTe)$_{1}$(SnTe)$_{1}$ superlattice will still be mapped to two separate TRIM in the new BZ. Thus, the mechanism for the weak TI phase is still applicable. As shown in Table \ref{table:super}, for a large range of the ratio $x=\frac{m}{2n}$, the system remains in the weak TI phase with $\bar{\nu}=(110)$, and the corresponding band gaps are of the order of tens of meV. Thus, fine tuning of the layer numbers of the superlattice is not necessary. Similar superlattices have been fabricated in early experiments. \cite{Springholz2013263,Fujiyasu1984579,PhysRevB303394,ishida1901} Therefore, PbTe/SnTe superlattices along the [001] direction provide an experimentally feasible and controllable platform for investigating the exotic phenomena of weak TIs. Moreover, since our basic mechanism is quite general, it is also worthwhile to investigate GeTe/SnTe \cite{PhysRevB88045207} and PbSe/SnSe superlattices\cite{nmat3449}. \begin{table}[tbp] \caption{Gaps at the momenta $\mathrm{L'}$ and bulk gaps for different superlattices along the [001] direction. 
Here $\mathrm{L'}$ denotes the momenta in the folded BZ of the superlattice that are projected from the L points in the original BZ of a bulk system. } \begin{tabular}{p{2.9cm}ll} \hline \hline Composition & Gap at $\mathrm{L'}$ (meV) & Bulk Gap (meV) \\ $\mathrm{(PbTe)_{1}(SnTe)_{1}}$ & 370.8 & 33.0 \\ $\mathrm{(PbTe)_{3}(SnTe)_{1}}$ & 144.6 & 31.8 \\ $\mathrm{(PbTe)_{1}(SnTe)_{3}}$ & 154.6 & 1.0 \\ $\mathrm{(PbTe)_{5}(SnTe)_{1}}$ & 87.9 & 26.5 \\ $\mathrm{(PbTe)_{3}(SnTe)_{3}}$ & 96.9 & 27.5 \\ $\mathrm{(PbTe)_{1}(SnTe)_{5}}$ & 69.0 & 17.6 \\ \hline \hline \end{tabular} \label{table:super} \end{table} \section{Conclusion} In summary, we propose a series of ${\mathrm{(PbTe)}_{m}\mathrm{(SnTe)}_{2n-m}}$ superlattice systems to realize weak TIs. Due to the BZ folding, we reduce the number of equivalent L points so that weak TI phases can be realized in PbTe/SnTe superlattices, which cannot be achieved in bulk Pb$_x$Sn$_{1-x}$Te with uniform doping. We note that the PbTe/SnTe superlattice along the [111] direction has been investigated with the effective Hamiltonian at four equivalent L points.\cite{PhysRevB85205319} In that case, the four L points are projected onto different momenta in the folded BZ, so that they can be treated separately and the effect of BZ folding is not important. But for the superlattice along the [001] direction, different L points will be mapped to the same momentum in the folded BZ. Therefore, the coupling between different L points cannot be neglected; instead, it provides a new mechanism for engineering topological phases. This idea can be generalized to search for new topological phases in other systems where the band gap occurs at several equivalent momenta. \begin{acknowledgments} We acknowledge support from the Ministry of Science and Technology of China (Grant Nos. 2011CB921901 and 2011CB606405) and the National Natural Science Foundation of China (Grant No. 11074139). 
LF is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under award DE-SC0010526. \end{acknowledgments}
Q: Unable to select a value from a drop-down in selenium python

I'm trying to select a value from a dropdown:

    <select name="ctl00$ContentPlaceHolder1$_ddl_sheet_name" id="ctl00_ContentPlaceHolder1__ddl_sheet_name">
        <option selected="selected" value="0">--Select--</option>
        <option value="tbl_E_RATES">E RATES</option>
        <option value="tbl_F_RATES">F RATES</option>
        <option value="tbl_B_RATES">B RATES</option>
    </select>

But none of these commands is working:

    driver.find_element_by_css_selector("select#ctl00$ContentPlaceHolder1$_ddl_sheet_name > option[value='B_RATES']").click()
    driver.find_element_by_xpath("//select[@id='ctl00_ContentPlaceHolder1__ddl_sheet_name']/option[text()='B RATES']").click()
    driver.find_element_by_css_selector("select#ctl00_ContentPlaceHolder1__ddl_sheet_name > option[value='B_RATES']").click()

ERROR:

    NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//select[@id='ctl00_ContentPlaceHolder1__ddl_sheet_name']/option[text()='B RATES']"}

A: B_RATES is not the value of any option (the value is tbl_B_RATES), and "B RATES" is the option's visible text. Try this one:

    driver.find_element_by_xpath('//select/option[text()="B RATES"]').click()

UPDATE

The NoSuchElementException could be caused by a delay in page rendering. 
Try an explicit wait to wait until the element is present on the page:

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//select/option[text()="B RATES"]'))).click()

UPDATE 2

The target drop-down menu is located inside an iframe, so you need to switch to it first and then handle the required elements:

    driver.switch_to_frame(driver.find_element_by_xpath('//iframe[@src="http://rate.poultrybazaar.net/show_rates.aspx"]'))
    driver.find_element_by_xpath('//select/option[text()="BROILER RATES (WEST BENGAL)"]').click()  # Replace text with required value
    driver.switch_to_default_content()  # to quit from iframe
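As a side note, the root cause can be illustrated offline (a standalone sketch using only the Python standard library, with the markup copied from the question): a lookup on value='B_RATES' matches nothing, because the value attribute is tbl_B_RATES and "B RATES" is only the visible text.

```python
# Offline sanity check of why value='B_RATES' never matches:
# the option's value attribute is 'tbl_B_RATES'; 'B RATES' is its text.
import xml.etree.ElementTree as ET

markup = """
<select name="ctl00$ContentPlaceHolder1$_ddl_sheet_name" id="ctl00_ContentPlaceHolder1__ddl_sheet_name">
  <option selected="selected" value="0">--Select--</option>
  <option value="tbl_E_RATES">E RATES</option>
  <option value="tbl_F_RATES">F RATES</option>
  <option value="tbl_B_RATES">B RATES</option>
</select>
"""

select = ET.fromstring(markup)

# What the failing selectors looked for: no match at all.
assert select.findall(".//option[@value='B_RATES']") == []

# The actual value attribute matches exactly one option, whose text is "B RATES".
hits = select.findall(".//option[@value='tbl_B_RATES']")
assert len(hits) == 1 and hits[0].text == "B RATES"
```

For real pages, selenium's built-in `Select` helper (`from selenium.webdriver.support.ui import Select`) with `select_by_value("tbl_B_RATES")` or `select_by_visible_text("B RATES")` is usually more robust than clicking option nodes by hand.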
Rule 5 -- The Good Solid American Values Continue! As I continue to foreswear the crass, vulgar lubriciousness of Spanish language television at the urging of concerned Christian conservative Americans, the folks who warned us in all our blogs about the evils of John McCain and Sarah Palin, let the good taste roll!
Birds Eye Polar Bear – Asylum Models & Effects Ltd. The polar bear that lives in your fridge! Designed and fabricated in-house by Lee Sutton, this cute but insistent bear was voiced by none other than Willem Dafoe. Puppeteered by Steve Clark and Lynn Robertson Bruce. The bear was being used for so many shoots that it became a worry for the client that only one puppet existed, so we were commissioned to make a further two for the campaign's duration. We aided in the design process and fully fabricated the puppets for this campaign. A toy monkey costume for this adorable advert.
require 'test_helper'

class LocaleTest < ActiveSupport::TestCase
  test "turning locale without nested phrases into a hash" do
    assert_equal({ "se" => { "hello_world" => "Hejsan Verdon" } }, tolk_locales(:se).to_hash)
  end

  test "turning locale with nested phrases into a hash" do
    assert_equal({ "en" => {
      "number" => { "human" => { "format" => { "precision" => 1 } } },
      "hello_world" => "Hello World",
      "nested" => {
        "hello_world" => "Nested Hello World",
        "hello_country" => "Nested Hello Country"
      }
    } }, tolk_locales(:en).to_hash)
  end

  test "phrases without translations" do
    assert tolk_locales(:en).phrases_without_translation.include?(tolk_phrases(:cozy))
  end

  test "searching phrases without translations" do
    # assert tolk_locales(:en).search_phrases_without_translation("cozy").include?(tolk_phrases(:cozy))
    assert !tolk_locales(:en).search_phrases_without_translation("cozy").include?(tolk_phrases(:hello_world))
  end

  test "paginating phrases without translations" do
    Tolk::Phrase.per_page = 2
    locale = tolk_locales(:se)

    page1 = locale.phrases_without_translation
    assert_equal [4, 3], page1.map(&:id)

    page2 = locale.phrases_without_translation(2)
    assert_equal [2, 5], page2.map(&:id)

    page3 = locale.phrases_without_translation(3)
    assert page3.blank?
  end

  test "paginating phrases with translations" do
    Tolk::Phrase.per_page = 4
    locale = tolk_locales(:en)

    page1 = locale.phrases_with_translation
    assert_equal [1, 3, 2, 5], page1.map(&:id)

    page2 = locale.phrases_with_translation(2)
    assert page2.blank?
  end

  test "counting missing translations" do
    assert_equal 2, tolk_locales(:da).count_phrases_without_translation
    assert_equal 4, tolk_locales(:se).count_phrases_without_translation
  end

  test "dumping all locales to yml" do
    Tolk::Locale.primary_locale_name = 'en'
    Tolk::Locale.primary_locale(true)

    begin
      FileUtils.mkdir_p(File.join(Rails.root, "tmp/locales"))
      Tolk::Locale.dump_all(File.join(Rails.root, "tmp/locales"))

      %w( da se ).each do |locale|
        assert_equal \
          File.read(File.join(Rails.root, "test/locales/basic/#{locale}.yml")),
          File.read(File.join(Rails.root, "tmp/locales/#{locale}.yml"))
      end

      # Make sure dump doesn't generate en.yml
      assert ! File.exist?(File.join(Rails.root, "tmp/locales/en.yml"))
    ensure
      FileUtils.rm_rf(File.join(Rails.root, "tmp/locales"))
    end
  end

  test "human language name" do
    assert_equal 'English', tolk_locales(:en).language_name
    assert_equal 'pirate', Tolk::Locale.new(:name => 'pirate').language_name
  end
end
\section{Introduction.} One of the most challenging questions motivated by the anomalous normal state of the high temperature superconductors (HTc's) is to explain the linearity of the resistivity $\rho$ as a function of temperature $T$ down to the superconducting transition temperature $T_c$. In the case of the one-layer Bi-based material, $T_c\approx 10 K$. This temperature is well below the Debye temperature and, for a three-dimensional material, one should expect that $\rho\propto T^5$ if the resistivity is dominated by electron-phonon scattering. Moreover, electron-electron scattering (if treatable in perturbation theory) is known to yield $\rho\propto T^2$ for temperatures much less than the Fermi energy $E_F$. Thus, conventional theories do not explain the observed behavior of the resistivity and one has to search for other mechanisms which could lead to a higher scattering rate for the electrons. Since superconductivity appears in the HTc's close to a metal-insulator transition driven by strong correlations, it is widely believed that some aspect of electron-electron interactions should be responsible for the anomalous normal state and, finally, also for the high superconducting transition temperatures in the copper oxides. Another feature specific to the HTc's is their layered structure and the consequent two-dimensional nature of the electronic states. The resulting problem of interacting electrons in two dimensions has not been solved yet and so far, only phenomenological theories attempting to describe the low-energy behavior of such models are available. In the literature, two fundamentally different routes for explaining the anomalies of the normal state of the HTc's were followed. In one class of theories it is assumed that in the HTc's, Landau's Fermi-liquid theory breaks down completely and an exotic metallic state with some features of one-dimensional solutions is realized \cite{Anderson,NL}. 
In the other class of theories, it is assumed that Landau's concept of quasiparticles does apply. However, in order to explain the deviations from the usual metallic behavior, anomalous scattering mechanisms are assumed and treated in perturbation theory. In the present paper, we will study the temperature dependence of the resistivity in two models of the latter type: nearly antiferromagnetic Fermi liquids \cite{MP,Ueda} and models with van Hove singularities \cite{Pattnaik,Markiewicz}. In the theory of nearly antiferromagnetic Fermi liquids, it is assumed that the effect of the strong local repulsion between electrons can be described in the low-energy sector by coupling the electrons to an overdamped low-lying paramagnon mode. The parameters of the spin-fluctuation spectrum can be determined by fitting the magnetic properties of the HTc's. Standard weak-coupling calculations of the resistivity in the model of nearly antiferromagnetic Fermi liquids give $\rho\propto T$ for $T>T^\star$ where $T^\star$ measures the deviation from the antiferromagnetic critical point. More sophisticated strong-coupling calculations of the resistivity support the weak-coupling results in that there are only quantitative changes to the latter. In Section 2, we elaborate the following observation: since the spin-fluctuation spectrum is soft at the Brillouin-zone boundary, only electrons in the vicinity of special points on the Fermi line are strongly scattered. Along the rest of the Fermi line, the lifetime of an electron is $1/\tau\propto T^2$ even for $T>T^\star$. Thus the contribution of the strongly scattering special points to the resistivity is short-circuited by the remaining electrons and the resistivity has the standard Fermi-liquid form $\rho\propto T^2$ up to a new energy scale described in Section 2. 
A similar idea was exploited by Fujimoto, Kohno, and Yamada \cite{Fujimoto} in their discussion of the $T$-dependence of the resistivity for models where parts of the Fermi surface exhibit perfect nesting. Another interesting proposal to explain the anomalies of the HTc's is to assume that these are caused by the presence of van Hove singularities at or close to the Fermi line. For instance, the resistivity is enhanced according to these theories due to the increase of phase space available in the scattering process. Moreover, assuming a weak-coupling BCS formula for the superconducting transition temperature, $T_c$ becomes enhanced due to the large density of states at the Fermi level. It is considered as a success of the van Hove scenario that the anomalies of the normal state are strongest at that doping where $T_c$ is maximal. In Section 3 we show, however, that although the single-particle lifetime is anomalous, the resistivity is consistent with the standard result of Landau's Fermi-liquid theory with a logarithmic correction, $\rho\propto T^2\ln(1/T)$. \section{Nearly antiferromagnetic Fermi liquids.} Following Monthoux and Pines \cite{MP}, we consider electrons moving in a square lattice of Wannier orbitals with a simple tight-binding spectrum \begin{equation} \varepsilon_{\bf k}=-2t\left[\cos(k_xa)+\cos(k_ya)\right] +4t^\prime\cos(k_xa)\cos(k_ya), \label{eq:MPdispersion} \end{equation} where $a$ is the lattice constant. $t=0.25 eV$ and $t^\prime=0.45 t$ are nearest- and next-nearest-neighbor hoppings, respectively. The electrons are coupled to the spin-fluctuation operator ${\bf S}$. The Hamiltonian reads \begin{equation} H=\sum_{{\bf k},\sigma}\varepsilon_{\bf k} c^\dagger_{{\bf k},\sigma} c_{{\bf k},\sigma}+{{\bar g}\over 2}\sum_{{\bf k,q},\alpha,\beta} c^\dagger_{{\bf k+q},\alpha}c^\dagger_{{\bf k},\beta}\sigma_{\alpha, \beta}\cdot {\bf S_{-q}}, \end{equation} where $\bar g$ is a coupling constant. 
The spectrum of the spin-fluctuations is $\chi_{i,j}({\bf q},\omega)=\delta_{i,j} \chi({\bf q},\omega)$ with \begin{equation} \chi({\bf q},\omega)= {A\over{\omega_{\bf q}-i\omega}}, \label{eq:fluctuationspectrum} \end{equation} where $A\approx 1$ (in what follows we take $A=1$), $\omega_{\bf q}=T^\star+\alpha T+\omega_D\psi_{\bf q}$ and $\psi_{\bf q}=2+\cos(q_xa)+\cos(q_ya)$. $T^\star$, $\alpha$, and $\omega_D$ are temperature-independent parameters. Note that the energy of the spin-fluctuations is minimal for ${\bf q}={\bf Q}=(\pi/a,\pi/a)$ and $\omega_{\bf Q}= T^\star+\alpha T$. Thus the parameter $T^\star$ measures the deviation from the critical antiferromagnetic point. \subsection{Quasiparticle lifetime.} Before calculating the resistivity due to scattering on the spin-fluctuations, let us analyze first the single-particle lifetime. In the second order of perturbation theory in $\bar g$, the lifetime of an electron with momentum ${\bf k}$ at zero temperature is \begin{equation} {1\over \tau_{\bf k}}=2g^2 \sum_{\bf k^\prime}\int_0^{\varepsilon_{\bf k}}d\omega\: \Im \chi({\bf k}-{\bf k}^\prime,\omega)\:\delta(\varepsilon_{\bf k}- \varepsilon_{\bf k^\prime}-\omega), \label{eq:goldenrule1} \end{equation} where we have introduced $g^2=3{\bar g}^2/4$; $\Im\chi$ denotes the imaginary part of $\chi$. We can write $\int d^2k=\int dk_\perp dk_\parallel$ where $k_\perp$ and $k_\parallel$ are the directions perpendicular and parallel to the Fermi line. Since $\int dk_\perp=\int d\varepsilon/ |{\bf v}|$ where ${\bf v}=\nabla_{\bf k}\varepsilon$ is the group velocity we have \begin{equation} {1\over \tau_{\bf k}}=2\left({ga\over 2\pi}\right)^2\int_0^ {\varepsilon_{\bf k}} d\omega \oint{dk^\prime\over v^\prime}\: \Im\chi({\bf k-k^\prime},\omega), \label{eq:goldenrule2} \end{equation} where we write $k^\prime$ instead of $k^\prime_\parallel$. 
In evaluating $\Im \chi({\bf k}-{\bf k}^\prime,\omega)$ we will assume that both ${\bf k}$ and ${\bf k^\prime}$ lie on the Fermi line, since corrections lead only to subleading terms in the denominator of Eq.\ref{eq:fluctuationspectrum}. After integrating over $\omega$ we have $${1\over \tau_{\bf k}}=\left({ga\over {2\pi}}\right)^2\oint {dk^\prime\over{v^\prime}}\ln\left( {{\varepsilon_{\bf k}^2+\omega_{{\bf k}-{\bf k}^\prime}^2}\over{ \omega_{{\bf k}-{\bf k}^\prime}^2}}\right).$$ We assume that along the whole Fermi line the group velocity is finite (the opposite case will be treated in Section 3). It is seen that $1/\tau_{\bf k}\propto\varepsilon_{\bf k}^2$ for $\varepsilon_{\bf k}\ll T^\star$. Thus $T^\star$ defines an energy scale below which Landau's Fermi-liquid theory applies. Let us investigate now the behavior of the lifetime for energies $T^\star\ll\varepsilon_{\bf k}\ll\omega_D$. Strong scattering occurs only for those states ${\bf k}$ (hot spots) on the Fermi line for which \begin{equation} \varepsilon_{\bf k}=\varepsilon_{\bf k+Q}. \label{eq:hotspot} \end{equation} For a wide range of fillings, there are only 8 points with anomalous scattering (see Fig.1). Note that for a commensurate ${\bf Q}=(\pi/a,\pi/a)$ the hot spots are by no means special points of the Fermi line, since the group velocity in a point ${\bf k}$ satisfying Eq.\ref{eq:hotspot} is not parallel to ${\bf Q}$. The incommensurate case when ${\bf Q}$ is a locally extremal vector connecting 2 points on the Fermi line (like the $2k_F$-processes in continuum) will not be studied here. Let $\delta {\bf k}$ be the distance between the projection of a point ${\bf k}$ to the Fermi line and a nearby hot spot. 
The lifetime of an electron in the state ${\bf k}$ is $${1\over\tau_{\bf k}}\propto{g^2\over E_F}\: \sqrt{{\varepsilon_{\bf k}\over \omega_D}}\: \min\left[1,\left({1\over \varphi} \sqrt{{\varepsilon_{\bf k}\over \omega_D}}\:\right)^3\right],$$ where $\varphi=\delta k\:a$ and $E_F\sim v_F/a$; $v_F$ is the Fermi velocity at the hot spot. Thus, in agreement with our expectations, the scattering rate in the neighborhood of a hot spot becomes anomalously large: $1/\tau_{\bf k}\propto\sqrt {\varepsilon_{\bf k}}$. Away from a hot spot, $1/\tau_{\bf k}\propto\varepsilon_{\bf k}^2$ as in standard Landau's Fermi-liquid theory. It is interesting to note that if one calculates the average of $1/\tau_{\bf k}$ for states of fixed energy $\varepsilon>T^\star$ along the Fermi line, one obtains $$\langle{1\over\tau_{\bf k}}\rangle\propto{g^2\over E_F}\: {\varepsilon\over \omega_D},$$ {\it i.e.}, the average scattering rate of an electron is linear in energy! It is basically this feature which has led in previous calculations to the result $\rho\propto T$ for $T>T^\star$. 
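The last statement can be checked by an elementary estimate (a sketch, assuming the hot-spot form of $1/\tau_{\bf k}$ quoted above and a cutoff $\Lambda\sim 1$ for the dimensionless distance $\varphi$ along the Fermi line). Writing $s=\sqrt{\varepsilon/\omega_D}$, the average over $\varphi$ involves $$\int_0^{\Lambda}d\varphi\:\min\left[1,\left({s\over\varphi}\right)^3\right]= s+\int_s^{\Lambda}{s^3\over\varphi^3}\:d\varphi= {3s\over 2}-{s^3\over 2\Lambda^2}\approx {3\over 2}\sqrt{{\varepsilon\over\omega_D}},$$ so that $\langle 1/\tau_{\bf k}\rangle\propto (g^2/E_F)\sqrt{\varepsilon/\omega_D}\cdot\sqrt{\varepsilon/\omega_D}=(g^2/E_F)\:\varepsilon/\omega_D$: the hot-spot regions of width $\sim s$ convert the $\sqrt{\varepsilon}$ peak rate into the linear energy dependence of the average.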
The linearized (in $\Phi_{\bf k}$) collision term of the BE reads \begin{eqnarray*} \left({\partial f_{\bf k}\over{\partial t}} \right)_{\rm coll}&=&{2 g^2\over T} \sum_{\bf k^\prime}\int_{-\infty}^\infty d\omega\:n(\omega)\: f_{\bf k^\prime}^\circ(1-f_{\bf k}^\circ)\nonumber\\ &\times&\Im \chi({\bf k-k^\prime},\omega)\: (\Phi_{\bf k^\prime}-\Phi_{\bf k})\: \delta(\varepsilon_{\bf k}-\varepsilon_{\bf k^\prime}-\omega), \end{eqnarray*} where $n(\omega)$ is the Bose-Einstein distribution function. Note that we have assumed that the spin-fluctuations are in equilibrium. Following the standard arguments as described by Ziman \cite{Ziman}, the resistivity $\rho$ can be found as the minimum of a functional of $\Phi_{\bf k}$: \begin{equation} {\rho\over\rho_0}= \min\left[{\langle\Phi|W|\Phi\rangle\over \langle\Phi|X\rangle^2}\right], \label{eq:Zim} \end{equation} where we have introduced $\langle\Phi|W|\Phi\rangle=\sum_{\bf k,k^\prime} W_{\bf k,k^\prime}(\Phi_{\bf k^\prime}-\Phi_{\bf k})^2$, $\langle\Phi|X\rangle=\sum_{\bf k}\Phi_{\bf k}X_{\bf k}$ and $X_{\bf k}=(-\partial f_{\bf k}^\circ/\partial\varepsilon) {\bf v}_{\bf k}\cdot {\bf n}$. $\rho_0=\hbar/e^2$ is the quantum of resistivity and ${\bf n}$ is a unit vector in the direction of the applied electric field. Assuming that $\varepsilon_{\bf k}=\varepsilon_{\bf -k}$ and $\Phi_{\bf k}= -\Phi_{\bf -k}$, we have $$W_{\bf k,k^\prime}= {2(ga)^2\over T} f_{\bf k}^\circ(1-f_{\bf k^\prime}^\circ)\: n(\varepsilon_{\bf k^\prime}-\varepsilon_{\bf k})\: \Im \chi({\bf k^\prime-k}, \varepsilon_{\bf k^\prime}-\varepsilon_{\bf k}).$$ Similarly as in the discussion of the quasiparticle lifetime, we write $\int d^2k=\int d\varepsilon\int dk/v$ where the $k$-integration runs along the Fermi surface, and an analogous expression for $\int d^2k^\prime$. 
Defining $\varepsilon^\prime=\varepsilon+\omega$, we can perform the $\varepsilon$-integration and obtain \begin{eqnarray} {\rho\over\rho_0}&=&\min\left[{{\oint{dk\over v} \oint{dk^\prime\over v^\prime}F_{\bf k-k^\prime}\: (\Phi_{\bf k^\prime}-\Phi_{\bf k})^2}\over{ \left(\oint{dk\over v} {\bf v}_{\bf k}\cdot {\bf n}\: \Phi_{\bf k}\right)^2}}\right]\\ F_{\bf k-k^\prime}&=&{2(ga)^2\over T}\int_0^\infty d\omega\:\omega\: n(\omega)[n(\omega)+1] \:\Im \chi({\bf k-k^\prime},\omega).\nonumber \end{eqnarray} Note that Eq.2.8 is a generalization of Eq.3.1 from Ref.\cite{Ueda} to the case of an arbitrary variational function $\Phi_{\bf k}$. Using Eq.\ref{eq:fluctuationspectrum} for the spectrum of spin-fluctuations, we find $F_{\bf k-k^\prime} =2(ga)^2I(\omega_{\bf k-k^\prime}/T)$, where $$I(x)=\int_0^\infty{dt\:e^t\over (e^t-1)^2}{t^2\over t^2+x^2} \approx{\pi^2/3\over{x(x+2\pi/3)}}.$$ The last equality is an interpolation formula which becomes exact for $x\rightarrow 0$ and $x\rightarrow \infty$. Summarizing, the sheet resistivity can be written in the following dimensionless form: \begin{eqnarray} &&{\rho\over\rho_0}={\pi^2\over 6}\left({g\over t}\right)^2 \Theta^2\nonumber\\ &&\times\min\left[ {{\oint{dk\over u}\oint{dk^\prime\over u^\prime} {{(\Phi_{\bf k^\prime}-\Phi_{\bf k})^2}\over {(\Theta^\star+\alpha\Theta+\psi_{\bf k-k^\prime}) (\Theta^\star+\beta\Theta+\psi_{\bf k-k^\prime})}}} \over{\left(\oint{dk\over u}{\bf u}_{\bf k}\cdot{\bf n} \Phi_{\bf k}\right)^2}}\right], \label{eq:afmresistivity} \end{eqnarray} where ${\bf u}_k={\bf v}_{\bf k}/2ta$ is a dimensionless group velocity, $\Theta=T/\omega_D$, $\Theta^\star=T^\star/\omega_D$, and $\beta=\alpha+2\pi/3$. The integrations run along the Fermi line. Note that for $T\ll T^\star/\beta$, the resistivity has a standard Landau-Fermi-liquid form $\rho\propto T^2$ in agreement with our results for the quasiparticle lifetime. 
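The quality of the interpolation formula for $I(x)$ can be verified by direct quadrature (a standalone numerical sketch, not part of the original derivation; it uses only the substitution $t=x\tan\theta$ and the identity $e^t/(e^t-1)^2=1/[2\sinh(t/2)]^2$):

```python
# Numerical check of I(x) = \int_0^\infty dt e^t/(e^t-1)^2 * t^2/(t^2+x^2)
# against the interpolation (pi^2/3)/[x(x+2*pi/3)], exact for x->0 and x->inf.
import math

def h(t):
    # h(t) = t^2 e^t/(e^t-1)^2 = [t/(2 sinh(t/2))]^2 ; h(0)=1, h ~ t^2 e^{-t} at large t
    if t == 0.0:
        return 1.0
    if t > 700.0:
        return 0.0  # negligible tail; avoids sinh overflow
    s = 2.0 * math.sinh(0.5 * t)
    return (t / s) ** 2

def I_num(x, n=4000):
    # Substitution t = x*tan(theta): I(x) = (1/x) * int_0^{pi/2} h(x tan theta) dtheta,
    # evaluated with the composite Simpson rule (n even).
    step = 0.5 * math.pi / n
    total = h(0.0) + h(x * math.tan(0.5 * math.pi))
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * h(x * math.tan(i * step))
    return total * step / (3.0 * x)

def I_interp(x):
    return (math.pi ** 2 / 3.0) / (x * (x + 2.0 * math.pi / 3.0))

# Exact in both limits, and roughly within ten percent at intermediate x.
for x, tol in [(0.01, 0.02), (1.0, 0.20), (50.0, 0.06)]:
    assert abs(I_num(x) / I_interp(x) - 1.0) < tol
```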
In what follows, we will study the resistivity as given by Eq.\ref{eq:afmresistivity} for $T^\star/\alpha\ll T\ll \omega_D$. A standard ansatz for the variational function is \begin{equation} \Phi_{\bf k}={\bf u}_{\bf k}\cdot{\bf n}. \label{eq:standard} \end{equation} For such $\Phi_{\bf k}$, there always exists a pair of hot spots $k,k^\prime$ such that $\Phi_{\bf k^\prime}-\Phi_{\bf k}$ is finite. In that case, the integral in the numerator of Eq.\ref{eq:afmresistivity} is dominated by $k,k^\prime$ close to the hot spots; $\psi_{\bf k-k^\prime}$ can be approximated by a homogeneous quadratic polynomial in the deviations $\delta k,\delta k^\prime$ from the hot spots, and by scaling one obtains $\rho\propto T$ in agreement with Moriya {\it et al.} \cite{Ueda}. We have seen that the quasiparticle lifetime is extremely anisotropic along the Fermi line. It is therefore natural to assume that the resistivity will be dominated by that part of the Fermi line where the scattering is weakest and the contribution from the hot spots will be short-circuited. To prove this, we take another ansatz for $\Phi_{\bf k}$ and show that it leads to a lower resistivity. Let us consider \begin{equation} \Phi_{\bf k}={{{\bf u}_{\bf k}\cdot{\bf n}} \over{e^{\beta[\Delta-\varphi]}+1}}, \label{eq:newansatz} \end{equation} where $\varphi=\delta k\:a$ is the deviation from a hot spot and $\Delta$ and $\beta$ are variational parameters (this $\beta$ should not be confused with the constant $\beta=\alpha+2\pi/3$ defined below Eq.\ref{eq:afmresistivity}). The case $\beta=0$ corresponds to the standard ansatz, while $\beta\gg 1$ and $\Delta\neq 0$ describe the situation when finite parts of the Fermi line around the hot spots do not contribute to the transport. If $\beta\rightarrow\infty$ and $\sqrt{T/\omega_D}\ll\Delta\ll 1$, we obtain at low temperatures $\rho\propto T^2$ even for $T^\star=0$, since $\langle\Phi|W|\Phi\rangle$ becomes temperature-independent. With increasing temperature, one is forced to choose larger $\Delta$ in order to exclude the hot-spot regions. 
This leads, however, to a decrease of $\langle\Phi|X\rangle$ and finally at high enough temperatures, the solution Eq.\ref{eq:newansatz} with a large $\beta$ becomes unfavourable compared to the standard ansatz Eq.\ref{eq:standard}. This happens if $T/\omega_D>c$, where $c$ is a numerical factor which depends on the details of the geometry of the Fermi line and of the hot spots. We were unable to make a reliable estimate of $c$ analytically and therefore we calculated $\rho$ as a function of $T$ numerically. In Fig.2, we show the results of a numerical calculation of the resistivity according to Eq.\ref{eq:afmresistivity} using both the standard and improved ansatz for $\Phi_{\bf k}$. For the spin-fluctuation spectrum we take $T^\star=0,\alpha=2.0$, and $\omega_D=1760 K$. The density of electrons is $n=0.75$ and the coupling constant $g=0.64 eV$. Note that with the standard ansatz Eq.\ref{eq:standard}, we obtain for this critical system $\rho\propto T$ down to $T=0$ in agreement with previous studies \cite{Ueda} (see also \cite{Hartmut}). With the improved ansatz Eq.\ref{eq:newansatz}, the resistivity is lower for all studied temperatures and it is proportional to $T^2$ up to $T\approx 70 K$. For $T>70 K$, $\rho$ is a linear function of $T$ with a similar slope as for the standard ansatz. However, the extrapolation of the linear part down to $T=0$ is negative. Finally, let us consider the spin-fluctuation spectrum with $T^\star=110 K$, $\alpha=0.55$, and $\omega_D=1760 K$. We take again $n=0.75$ and $g=0.64 eV$. These are the same parameters as those used in Ref.\cite{MP} (see their Eq.37 and note that $\omega_D=2\omega_{SF}(\xi/a)^2$; $\omega_{SF}$ and $\xi$ are the parameters for the spin-fluctuation spectrum used in Ref.\cite{MP}). The results of our numerical calculation are shown in Fig.3. 
The standard ansatz Eq.\ref{eq:standard} yields a resistivity-vs.-temperature curve qualitatively similar to that obtained by Monthoux and Pines \cite{MP}, but our resistivity is approximately three times larger \cite{Note}. $\rho$ is a linear function of $T$ down to $T\approx 100$ K. Our improved solution Eq.\ref{eq:newansatz} yields smaller values of resistivity: for instance, $\rho(T_c)$ calculated using our ansatz is only $\approx 0.6$ of the value obtained with the standard ansatz. More importantly, the shape of the $\rho(T)$ curve changes: it is linear only above $T\approx 180$ K. We believe that the latter feature will hold true also in a more sophisticated calculation than in our Boltzmann-equation approach. In order to proceed further in this direction, it will be necessary to find a translation of the variational principle used here to the Green's-function formulation of transport problems. \subsection{Influence of impurity scattering on the resistivity.} In the presence of impurities, one can expect that the anisotropy of the quasiparticle lifetime will be suppressed. Thus, the question arises of what the actual temperature dependence of the resistivity is in such a case \cite{Kazuo}. We will address this question by assuming that the impurity scattering can be described by the Boltzmann equation (thus disregarding fully quantum-mechanical effects such as weak localization). In that case, the resistivity can be described by Eq.2.8 where $F_{\bf k-k^\prime}$ acquires an additional contribution $F_{\bf k-k^\prime}^{\rm imp}$ from impurity scattering. In the Born approximation, $F_{\bf k-k^\prime}^{\rm imp}=\pi a^2 |H^\prime_{\bf k,k^\prime}|^2$, where $H^\prime$ describes the interaction of an electron with impurities. 
Since we are not interested here in a microscopic calculation of the resistivity due to impurities $\rho_{\rm imp}$, we take $|H^\prime_{\bf k,k^\prime}|=V$ where $V$ is a free parameter to be chosen so as to give a realistic $\rho_{\rm imp}$. Under these assumptions, we have \begin{equation} {\rho_{\rm imp}\over\rho_0}={\pi\over 2}\left({V\over t}\right)^2 {{\langle\langle\Phi_{\bf k}^2\rangle\rangle}\over {\langle\langle\Phi_{\bf k}{\bf u_k}\cdot{\bf n}\rangle\rangle^2}}, \end{equation} where $\langle\langle A\rangle\rangle=\oint{dk\over v}A_k/ \oint{dk\over v}$ is an average of $A$ along the Fermi surface. It is easy to see that $\rho_{\rm imp}$ is minimized by the standard ansatz Eq.\ref{eq:standard}. Since the resistivity due to impurities is finite down to $T=0$ while the contribution of spin-fluctuations vanishes in that limit, it is clear that the standard ansatz will be favourable for $T\rightarrow 0$. At higher temperatures, however, the decrease of the spin-fluctuation contribution to the resistivity for $\Phi_{\bf k}$ given by Eq.\ref{eq:newansatz} may outweigh the increase of the contribution due to impurities. In order to test this possibility, we performed a calculation of the resistivity with the same parameters as those used in Fig.3; we assumed a residual resistivity $\rho_{\rm imp}(T=0)=0.25\rho_0$. The result of this calculation is shown in Fig.4. It is seen that the presence of impurity scattering decreases the difference between the resistivity as calculated by the standard ansatz Eq.\ref{eq:standard} and our variational function Eq.\ref{eq:newansatz}. However, even for the relatively large impurity scattering we have chosen, the resistivity is still not a linear function of temperature for $T>100K$. Summarizing the results of the present Section we can say that although the quasiparticle lifetime is anomalous around special points on the Fermi line (hot spots), the resistivity is proportional to $T^2$ at low enough temperatures. 
The energy scale where a crossover to $\rho\propto T$ occurs is determined not only by the parameter $T^\star$ as found in previous studies, but also by some fraction of $\omega_D$. \section{Models with van Hove singularities on the Fermi line.} In the literature there appeared a number of attempts to explain the anomalies of the normal state of the HTc's in the framework of a weak-coupling theory under the assumption of a special single-particle dispersion. The most promising among these are the models which assume that at the optimal doping, there is a van Hove singularity on the Fermi line. Such singularities always exist in a periodic energy band for topological reasons (see, {\it e.g.}, Ref.\cite{JM}). The special feature of the HTc's according to these theories simply is that a crossing of the Fermi line with a van Hove singularity can occur. In the present Section, we will calculate the quasiparticle lifetime and the resistivity in the van Hove scenario. We consider electrons with the Hamiltonian \begin{equation} H=\sum_{{\bf k},\sigma}\varepsilon_{\bf k} c^\dagger_{{\bf k},\sigma} c_{{\bf k},\sigma}+g\sum_i n_{i,\uparrow} n_{i,\downarrow}, \end{equation} where $\varepsilon_{\bf k}$ is the single-particle dispersion Eq.\ref{eq:MPdispersion} and $g$ is a weak screened interaction among the electrons. It is assumed that $t>2t^\prime>0$. In that case, there exist two saddle-points $(\pi/a,0)$ and $(0,\pi/a)$ which lead to a van Hove singularity at energy $-4t^\prime$. The Fermi line for a filling when it goes through a van Hove singularity is shown in Fig.5. The single-particle dispersion around the saddle points becomes anomalous: e.g., in the neighborhood of $(\pi/a,0)$, we have \begin{equation} \varepsilon_{\bf k}\approx k_y^2/2m_y-(k_x-\pi/a)^2/2m_x, \label{eq:SPspectrum} \end{equation} where $1/m_x=(t-2t^\prime)a^2$ and $1/m_y=(t+2t^\prime)a^2$. 
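Since Eq.\ref{eq:MPdispersion} is not reproduced in this excerpt, the following sketch (ours) {\it assumes} the common $t$--$t^\prime$ tight-binding form and only verifies the qualitative statements above: that $(\pi/a,0)$ lies at the van Hove energy $-4t^\prime$ and has hyperbolic (saddle-point) curvature for $t>2t^\prime>0$.

```python
import numpy as np

# ASSUMED tight-binding form of the dispersion (lattice constant a = 1):
#   eps(k) = -2t(cos kx + cos ky) + 4t' cos kx cos ky .
# We check that (pi, 0) is a saddle point at the van Hove energy -4t'.
t, tp = 1.0, 0.3                        # obeys t > 2t' > 0, as in the text

def eps(kx, ky):
    return -2.0 * t * (np.cos(kx) + np.cos(ky)) + 4.0 * tp * np.cos(kx) * np.cos(ky)

h = 1e-4                                # step for finite-difference curvatures
e0 = eps(np.pi, 0.0)
cxx = (eps(np.pi + h, 0.0) - 2.0 * e0 + eps(np.pi - h, 0.0)) / h ** 2
cyy = (eps(np.pi, h) - 2.0 * e0 + eps(np.pi, -h)) / h ** 2

print(e0)          # -4t' = -1.2
print(cxx, cyy)    # opposite signs: hyperbolic dispersion around the saddle
```

The opposite-sign curvatures reproduce the hyperbolic form of Eq.\ref{eq:SPspectrum}; the absolute normalization of $m_x$ and $m_y$ depends on the convention chosen for the assumed dispersion and is not asserted here.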
\subsection{Quasiparticle lifetime.} Before considering the temperature dependence of the resistivity in the van Hove scenario, let us first calculate the quasiparticle lifetime. We rederive here the results of Gopalan {\it et al.} \cite{Sudha} in a simpler way which will enable us to calculate the resistivity in the next subsection. The quasiparticle lifetime is given by Eq.\ref{eq:goldenrule1} where $\Im\chi({\bf q},\omega)=\pi\sum_{\bf k} f_{\bf k}(1-f_{\bf k+q})\delta(\varepsilon_{\bf k+q}-\varepsilon_{\bf k}-\omega)$ is the imaginary part of the bare susceptibility. In what follows, we study the lifetime of an electron ${\bf k}$ which is scattered to ${\bf k^\prime}$ by exciting a particle-hole pair ${\bf K}\rightarrow{\bf K^\prime}$. Let us first consider the case when all involved states ${\bf k,k^\prime,K}$, and ${\bf K^\prime}$ are in the neighborhood of the saddle points. There are two types of such scatterings: intra- and inter-saddlepoint scatterings (with or without umklapp), respectively. If both ${\bf K}$ and ${\bf K^\prime}$ lie close to the same saddle point, we can use the expression for the susceptibility found by Gopalan {\it et al.} \cite{Sudha} (for $T=0$): \begin{equation} \Im\chi({\bf q},\omega)\propto \min\left[1,{\omega\over{|\varepsilon_{\bf q}|}}\right], \label{eq:SPsusceptibility} \end{equation} where $\varepsilon_{\bf q}$ is given by Eq.\ref{eq:SPspectrum}. To simplify the analysis of the susceptibility in the case when ${\bf K}$ and ${\bf K^\prime}$ are close to different saddle points, let us assume for the moment that the single-particle spectrum is \begin{equation} \varepsilon_{\bf k}={1\over 2m}\left(|k_x|-{G\over 2}\right)k_y. \label{eq:umklappspectrum} \end{equation} The Fermi line for electrons with the dispersion Eq.\ref{eq:umklappspectrum} is shown in Fig.6. The spectrum consists of two saddle points whose distance is ${\bf G}=(G,0)$, where $2{\bf G}$ is assumed to be a vector of the inverse lattice. 
The susceptibility at a wavevector ${\bf G-q}$ where ${\bf q}$ is small then is $$ \Im\chi({\bf G-q},\omega)\propto| \ln|{2\omega\over{|\varepsilon_{\bf q}|}}-{\rm sgn} (\varepsilon_{\bf q})||, $$ where $\varepsilon_{\bf q}=q_xq_y/2m$. The lifetime of the electron in the state ${\bf k}$ now reads $${1\over \tau_{\bf k}}=2g^2\sum_{\bf k^\prime}^\prime\Im\chi({\bf k-k^\prime},\varepsilon_{\bf k}-\varepsilon_{\bf k^\prime}),$$ where the prime on the sum means a restriction to those states ${\bf k^\prime}$ which satisfy $0<\varepsilon_{\bf k^\prime}<\varepsilon_{\bf k}$. One finds from here that at zero temperature $1/\tau_{\bf k}\propto\varepsilon_{\bf k}$ in agreement with the result of Gopalan {\it et al.}\cite{Sudha}. Let us calculate now the lifetime of an electron ${\bf k}$ away from the saddle points when two of the states ${\bf k^\prime,K}$, and ${\bf K^\prime}$ are close to a saddle point. We make use of Eq.\ref{eq:goldenrule2}. Let us consider first the contribution of ${\bf k^\prime}$ close to ${\bf k}$ (forward-scattering channel) for which the expression Eq.\ref{eq:SPsusceptibility} is valid. Let $|{\bf k-k^\prime}|=q$. Since none of the asymptotes of the hyperbolas Eq.\ref{eq:SPspectrum} is parallel to the Fermi line (away from the saddle points), we have $${1\over\tau_{\bf k}}\propto\int_0^{\varepsilon_{\bf k}} d\omega\int_0^\Lambda dq\min\left[1,|M\omega/q^2|\right],$$ where $M$ is a constant and $\Lambda$ a cut-off in momentum space. Taking the integral, we have $1/\tau_{\bf k}\propto \varepsilon_{\bf k}^{3/2}$ in agreement with Ref.\cite{Sudha}. Let us consider now ${\bf k^\prime}$ close to a saddle point. If we require that one of the points ${\bf K}$ and ${\bf K^\prime}$ is close to a saddle point, we find that either ${\bf K}\approx{\bf k^\prime}$ and ${\bf K^\prime}\approx{\bf k}$ (exchange channel) or ${\bf K}\approx{-\bf k}$ and ${\bf K^\prime}\approx{-\bf k^\prime}$ (Cooper channel). 
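As a numerical aside, the forward-scattering estimate above, $1/\tau_{\bf k}\propto\int_0^{\varepsilon_{\bf k}}d\omega\int_0^\Lambda dq\,\min\left[1,|M\omega/q^2|\right]$, can be checked directly. In this sketch (ours, not the paper's) $M$ and $\Lambda$ are set to one:

```python
import numpy as np

# Midpoint-rule evaluation of int_0^eps dw int_0^Lam dq min[1, M w/q^2];
# the claim in the text is that this scales as eps^(3/2) for small eps.
M, Lam = 1.0, 1.0
nq, nw = 20000, 400
q = (np.arange(nq) + 0.5) * (Lam / nq)      # midpoint grid in q

def tau_inv(eps):
    w = (np.arange(nw) + 0.5) * (eps / nw)  # midpoint grid in omega
    kernel = np.minimum(1.0, M * w[:, None] / q[None, :] ** 2)
    return kernel.sum() * (Lam / nq) * (eps / nw)

ratio = tau_inv(2.0e-4) / tau_inv(1.0e-4)
print(ratio)   # close to 2**1.5 ~ 2.83, consistent with 1/tau ~ eps^(3/2)
```

Analytically, the inner $q$-integral equals $2\sqrt{M\omega}-M\omega/\Lambda$, so the double integral is $\tfrac{4}{3}\sqrt{M}\,\varepsilon_{\bf k}^{3/2}$ up to an $O(\varepsilon_{\bf k}^2)$ correction, in agreement with the quoted result.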
The contribution to the lifetime of the exchange channel is analogous to that of the forward-scattering channel. In order to calculate the contribution of the Cooper channel we need to calculate the susceptibility $\Im\chi({\bf K^\prime-K},\omega)$ where ${\bf K}$ and ${\bf K^\prime}$ are momenta away and close to a saddle point, respectively. Let ${\bf P}$ and ${\bf Q}$ be points on the Fermi line close to ${\bf K}$ and ${\bf K^\prime}$, respectively such that ${\bf Q-P}={\bf K^\prime-K}$. Let the spectrum in the vicinity of the saddle point and of ${\bf P}$ be $\varepsilon_{\bf k}=k_xk_y/2m$ and $\varepsilon_{\bf k}={\bf v}\cdot({\bf k-P})$, respectively and let ${\bf Q}=(Q,0)$. Then the susceptibility is \begin{equation} \Im\chi({\bf K^\prime-K},\omega)\approx {1\over(2\pi)^2v\cos\phi} \left(\sqrt{Q^2+{8m\omega\over\tan\phi}}-Q\right), \label{eq:newsusc} \end{equation} where $\phi$ is the angle between the tangents to the Fermi line in the points ${\bf P}$ and ${\bf Q}$. Now we can calculate the lifetime in the Cooper channel according to Eq.\ref{eq:goldenrule1} by first integrating over $\omega$ and introducing hyperbolic coordinates ${\bf k^\prime}=(\varepsilon^\prime, \phi^\prime)$ such that $k^\prime_x\propto\sqrt{\varepsilon^\prime/\tan\phi^\prime}$ and $k^\prime_y\propto \sqrt{\varepsilon^\prime\tan\phi^\prime}$. $Q$ is dominated by the position of ${\bf k^\prime}$ and we find $Q\propto\sqrt{\varepsilon^\prime/\tan\phi^\prime} |\tan\phi^\prime-\tan\phi|$. The resulting integral can be performed by scaling and we find $1/\tau_{\bf k}\propto \varepsilon_{\bf k}^{3/2}$. Finally, in case when ${\bf k}$ is away from saddle points and at most one of the points ${\bf k^\prime,K}$, and ${\bf K^\prime}$ is close to a saddle point we obtain a lifetime analogous to the result for an isotropic spectrum \cite{Wilkins}. As an example, let us consider the case when it is the momentum ${\bf K^\prime}$ which is close to a saddle point. 
We calculate the lifetime according to Eq.\ref{eq:goldenrule2} where the susceptibility is given by Eq.\ref{eq:newsusc}. Let ${\bf k^\prime_0}$ be that value of ${\bf k^\prime}$ for which $Q=0$. For a general ${\bf k^\prime}$ we have $Q=\alpha q$ where $q=|{\bf k^\prime-k^\prime_0}|$ and $\alpha$ is a constant. The lifetime is $${1\over \tau_{\bf k}}\propto \int_0^{\varepsilon_{\bf k}}d\omega\int_0^\Lambda dq \left(\sqrt{q^2+{8m\omega\over{\alpha^2\tan\phi}}} -q\right),$$ where $\Lambda$ is a cut-off in momentum space. The integration is straightforward and we find $1/\tau_{\bf k}\propto\varepsilon_{\bf k}^2\ln(1/\varepsilon_{\bf k})$. Thus the contribution to $1/\tau_{\bf k}$ of the processes with one of the scattering states close to a saddle point is subleading compared to the processes in the forward-scattering, exchange, and Cooper channels. Summarizing, we have found that due to the presence of van Hove singularities on the Fermi line, the scattering rate is anomalously enhanced compared to the isotropic case; for electrons close to a saddle point $1/\tau\propto\varepsilon$, whereas for the remaining electrons $1/\tau\propto\varepsilon^{3/2}$. \subsection{Resistivity.} Let us calculate the resistivity in the van Hove scenario. We will work again in the quasiclassical formalism of the Boltzmann equation. Analogously to the discussion in Section 2, the resistivity can be found \cite{Ziman} as a minimum of the functional Eq.\ref{eq:Zim} where \begin{eqnarray} &&\langle\Phi|W|\Phi\rangle={\pi g^2\over 2T}\sum_{\bf G} \sum_{\bf k,k^\prime,K,K^\prime} f_{\bf k}f_{\bf K}(1-f_{\bf k^\prime})(1-f_{\bf K^\prime}) \nonumber\\ &&\times\left(\Phi_{\bf k}+\Phi_{\bf K}-\Phi_{\bf k^\prime} -\Phi_{\bf K^\prime}\right)^2\nonumber\\ &&\times\delta(\varepsilon_{\bf k}+\varepsilon_{\bf K} -\varepsilon_{\bf k^\prime}-\varepsilon_{\bf K^\prime}) \delta({\bf k+K-k^\prime-K^\prime-G}) \label{eq:Zimresistivity} \end{eqnarray} and ${\bf G}$ is a reciprocal-lattice vector. 
In what follows we consider the standard ansatz $\Phi_{\bf k}={\bf v}_{\bf k}\cdot{\bf n}$ and we neglect the weak temperature dependence of $\langle\Phi|X\rangle$. Then we can write Eq.\ref{eq:Zim} in the following form: $$\rho\propto{1\over T}\sum_{\bf k}{f_{\bf k}\over\tau_{\bf k}^{TR}}\sim\oint{dk\over v}{1\over \tau^{TR}(k,T)},$$ {\it i.e.}, the resistivity can be found as an average over the Fermi line of the transport scattering rate at energy $\sim T$. For the transport lifetime we find an expression similar to that for the quasiparticle lifetime: \begin{eqnarray*} {1\over \tau_{\bf k}^{TR}}&\propto&\sum_{\bf k^\prime} \int_0^{\varepsilon_{\bf k}}d\omega\: \Im\chi^{TR}({\bf k-k^\prime},\omega; \Phi_{\bf k}-\Phi_{\bf k^\prime})\\ &&\times\delta (\varepsilon_{\bf k}-\varepsilon_{\bf k^\prime}-\omega),\\ \Im \chi^{TR}({\bf q},\omega;u)&=&\pi\sum_{\bf k}f_{\bf k} (1-f_{\bf k+q})(\Phi_{\bf k+q}-\Phi_{\bf k}-u)^2\\&&\times \delta(\varepsilon_{\bf k+q}-\varepsilon_{\bf k}-\omega), \end{eqnarray*} where we have defined the `transport susceptibility' $\Im \chi^{TR}({\bf q},\omega;u)$. Let us study first the transport lifetime for the processes when all electron states involved in the scattering are close to the saddle points. Note that assuming intra-saddle-point scatterings and a dispersion $\varepsilon_{\bf q}=q_x q_y/2m$, the conservation of momentum implies $\Phi_{\bf k}+\Phi_{\bf K}-\Phi_{\bf k^\prime} -\Phi_{\bf K^\prime}=0$ similarly as in the case of an isotropic dispersion and the resistivity vanishes \cite{Newns}. Thus we have to take into account inter-saddle-point scatterings. Unfortunately, the transport lifetime for the model dispersion Eq.\ref{eq:umklappspectrum} is different from the actual result for the spectrum Eq.\ref{eq:MPdispersion}, since the asymptotes of the hyperbolas of the two saddle points in the latter spectrum are not parallel to each other and we have to calculate more carefully. 
Let us assume without loss of generality that ${\bf k^\prime}$ lies in the vicinity of the point $(\pi/a,0)$ where the dispersion is described by Eq.\ref{eq:SPspectrum}. The energy conservation together with the Pauli principle require $0<\varepsilon_{\bf k^\prime}<\varepsilon_{\bf k}$ and the allowed ${\bf k^\prime}$-points lie between two branches of a hyperbola centered at $(\pi/a,0)$. The dominant contribution to $1/\tau_{\bf k}^{TR}$ comes from the ${\bf k^\prime}$-points in the tails of the hyperbolas, where the transport susceptibility $\Im\chi^{TR}({\bf k-k^\prime},\varepsilon_{\bf k}-\varepsilon_{\bf k^\prime})\approx \varepsilon_{\bf k}$. Thus the transport lifetime reads $${1\over\tau_{\bf k}^{TR}}\propto\varepsilon_{\bf k}\int_0^\Lambda dk^\prime_x\int_{\alpha k^\prime_x}^{\sqrt{(\alpha k^\prime_x)^2+2m_y\varepsilon_{\bf k}}}dk^\prime_y,$$ where $\alpha=\sqrt{m_y/m_x}$, $\Lambda$ is a cut-off in momentum space (typically some fraction of $\pi/a$) and we have shifted the position of the saddle point to $(0,0)$. Taking the integral we find $1/\tau_{\bf k}^{TR}\propto \varepsilon_{\bf k}^2\ln(\Lambda^2/2m_x\varepsilon_{\bf k})$. Now we consider processes when two states lie close to a saddle point. As discussed in the previous subsection there are three types of such processes, namely scatterings in the forward, exchange, and Cooper channel. In the forward-scattering channel, the exchanged momentum ${\bf q}$ is small and we can estimate $(\Phi_{\bf k+q}-\Phi_{\bf k}-u)^2\sim {\bf q}^2$. Repeating the analysis of the previous subsection we find $${1\over\tau^{TR}_{\bf k}}\propto\int_0^{\varepsilon_{\bf k}} d\omega\int_0^\Lambda dq q^2\min\left[1,|M\omega/q^2|\right] \propto\varepsilon_{\bf k}^2.$$ Scattering in the exchange channel leads to a similar result. 
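The elementary integral behind the logarithm in this estimate (our own intermediate step; $a$ denotes the constant multiplying the energy, here $a=2m_y\varepsilon_{\bf k}$ after rescaling $k^\prime_x\rightarrow\alpha k^\prime_x$) is

```latex
\int_0^\Lambda dq\,\left(\sqrt{q^2+a}-q\right)
=\frac{\Lambda}{2}\sqrt{\Lambda^2+a}-\frac{\Lambda^2}{2}
+\frac{a}{2}\ln\frac{\Lambda+\sqrt{\Lambda^2+a}}{\sqrt{a}}
\simeq\frac{a}{4}+\frac{a}{2}\ln\frac{2\Lambda}{\sqrt{a}}\,,
\qquad a\ll\Lambda^2\,.
```

Multiplying by the prefactor $\propto\varepsilon_{\bf k}$ gives the quoted $1/\tau_{\bf k}^{TR}\propto\varepsilon_{\bf k}^2\ln(\Lambda^2/2m_x\varepsilon_{\bf k})$, up to constants inside the logarithm; the same integral with $a\propto\omega$, followed by the $\omega$-integration, underlies the $\varepsilon_{\bf k}^2\ln(1/\varepsilon_{\bf k})$ law found at the end of the previous subsection.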
In the Cooper channel the relevant transport susceptibility has an additional factor $(\Phi_{\bf k}+\Phi_{\bf K} -\Phi_{\bf k^\prime}-\Phi_{\bf K^\prime})^2\propto\varepsilon_{\bf k}$ compared to Eq.\ref{eq:newsusc} and hence $1/\tau^{TR}_{\bf k}\propto \varepsilon_{\bf k}^{5/2}$. Finally, if only one of the states ${\bf k,k^\prime,K}$, and ${\bf K^\prime}$ is close to a saddle point, there is in general no additional small factor distinguishing $1/\tau$ from $1/\tau^{TR}$ and thus the resistivity is $\rho\propto T^2\ln(1/T)$. Note that although processes of this type give only a subdominant contribution to the quasiparticle lifetime, they provide a leading contribution to the relaxation of momentum. Summarizing, in this Section we calculated the quasiparticle lifetime and the resistivity in the van Hove scenario. We found that although the quasiparticle lifetime is anomalously short, the resistivity exhibits the standard temperature dependence with a logarithmic correction $\rho\propto T^2\ln(1/T)$ for $T\ll E_F$. \section{Conclusions.} In this paper we have analyzed the resistivity as a function of temperature for two two-dimensional models with hot spots: nearly antiferromagnetic Fermi liquids and a model with van Hove singularities on the Fermi line. To simplify the treatment, we decided to formulate the transport problem on the level of a Boltzmann equation. In the case of nearly antiferromagnetic Fermi liquids, we have shown that the standard treatment, which does not take into account the anisotropy of the electron lifetime along the Fermi line, leads to $\rho\propto T$ for $T>T^\star$. However, we constructed better variational solutions of the Boltzmann equation which exclude highly resistive points on the Fermi line and yield $\rho\propto T^2$ even above $T^\star$. We have found a new energy scale for the crossover to the $\rho\propto T$ behavior at higher temperatures. This energy scale does not vanish even if the spectrum of the spin-fluctuations is critical.
The presence of disorder was shown to decrease the difference between the standard solution and our ansatz; however, even for relatively strong disorder, the resistivity is not a linear function of temperature above 100K, if we use the parameters proposed by Monthoux and Pines \cite{MP}. More generally, our analysis suggests that if the electrons couple to a bosonic excitation which is soft at some {\it finite} wavevector ${\bf Q}$, the quasiparticle lifetime will be very anisotropic and there will be hot spots on the Fermi line where the scattering may become anomalous. However, the resistivity will be dominated by the lifetime in the generic points of the Fermi line away from the hot spots. Thus, only anomalous scattering in a generic point on the Fermi line implies anomalous resistivity. This can be achieved, {\it e.g.}, by coupling the electrons to a mode which is soft at long wavelengths. For example, consider electrons with the spectrum $\varepsilon_{\bf k}={\bf k}^2/2m$ coupled to bosons described by the propagator $$\chi({\bf q},\omega)\propto {q^\beta\over{\omega_{\bf q}-i\omega}} \hspace{0.3cm}{\rm or}\hspace{0.3cm} \chi({\bf q},\omega)\propto {q^\beta\omega_{\bf q}\over{ \omega^2_{\bf q}-\omega^2}},$$ where $\omega_{\bf q}=T^\star+Aq^\alpha$; we take $\alpha\geq 1$, $\beta\geq 0$. A golden-rule calculation of the quasiparticle lifetime and resistivity for $T>T^\star$ gives $1/\tau\propto T^{(D+\beta-1)/\alpha}$ and $\rho\propto T^{(D+\beta+1)/\alpha}$, respectively. $D$ is the spatial dimension (we assume that $D$ is the same for both electrons and bosons). {\it E.g.}, for scattering of spinons on the gauge field we have $D=2$, $\alpha=3$, $\beta=1$ and we obtain $1/\tau\propto T^{2/3}$ and $\rho\propto T^{4/3}$ in agreement with Ref.\cite{NL}. For $T\ll T^\star$, we have $1/\tau\propto T^2$. Thus the concept of quasiparticles is valid and our use of a semiclassical approximation is supported. 
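The exponents quoted above follow from simple power counting; the following sketch (ours) merely tabulates $(D+\beta-1)/\alpha$ and $(D+\beta+1)/\alpha$, including the gauge-field example:

```python
from fractions import Fraction

# Power-counting exponents quoted in the text for T > T*:
#   1/tau ~ T^((D+beta-1)/alpha),   rho ~ T^((D+beta+1)/alpha).
def exponents(D, alpha, beta):
    return Fraction(D + beta - 1, alpha), Fraction(D + beta + 1, alpha)

# Spinons scattered by the gauge field: D = 2, alpha = 3, beta = 1
nu_tau, nu_rho = exponents(2, 3, 1)
print(nu_tau, nu_rho)   # 2/3 and 4/3, as in Ref. [NL]
```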
It is interesting to note that for $T^\star=0$, $\rho\propto T$ would require $1/\tau\propto T^\nu$ with $\nu<1$; coupling the electrons to a bosonic excitation and requiring that Landau's Fermi-liquid theory is applicable would lead to $\rho\propto T^\mu$ where $\mu>(2+\alpha)/\alpha$. In the second part of this paper, we considered a model with van Hove singularities close to the Fermi line and with weak screened electron-electron interactions. If the van Hove singularity is located at energy $E_F\pm T^\star$, then the anomalous quasiparticle lifetime reported in Section 3 is valid for $\varepsilon\gg T^\star$. At energies smaller than $T^\star$ we obtain the standard results for interacting electrons in two dimensions \cite{Wilkins} and the concept of quasiparticles should be applicable. Thus the existence of a nonvanishing $T^\star$ provides, similarly as in the case of nearly antiferromagnetic Fermi liquids, support for our use of the semiclassical approach. At $\varepsilon\gg T^\star$, small-angle scatterings or Cooper-channel processes in which electrons close to saddle points take part are responsible for the anomalous behavior of the quasiparticle lifetime. However, using the standard ansatz $\Phi_{\bf k}={\bf v}_{\bf k}\cdot{\bf n}$ for the variational solution of the Boltzmann equation their contribution to the transport lifetime becomes regularized by the appearance of an additional small factor $(\Phi_{\bf k}+\Phi_{\bf K}-\Phi_{\bf k^\prime}-\Phi_{\bf K^\prime})^2$ in Eq.\ref {eq:Zimresistivity} and we obtain finally $\rho\propto T^2\ln(1/T)$. Such a reduction of the transport scattering rate compared to the quasiparticle scattering is in fact quite common as can be seen, {\it e.g.}, from our results for electrons interacting with a bosonic mode. Another example is a one-dimensional system away from half filling: the quasiparticle lifetime behaves as $1/\tau\propto T$, while the resistivity is exponentially small \cite{Giamarchi}. 
A similar tendency holds for electrons with the usual isotropic dispersion in two dimensions: Hodges {\it et al.} \cite{Wilkins} found that $1/\tau\propto T^2\ln(1/T)$, whereas it can be shown \cite{Fujimoto} that the resistivity $\rho\propto T^2$. \acknowledgements We would like to thank G. Blatter, H. Monien, A. Ruckenstein, H. Tsunetsugu, and K. Ueda for interesting discussions.
Taťjana Michajlovová (Belarusian transliteration: Tacjana Michajlava; born 18 January 1987 in Minsk, Byelorussian SSR, USSR) is a Belarusian speed skater. In 2005 she took part in the Winter Universiade; between 2006 and 2008 she did not compete. She returned to the international scene in 2010, when she began competing in the World Cup. She took part in the 2018 Winter Olympics, where she was eliminated in the semifinal of the mass-start race. At the 2020 European Championships she won a bronze medal in the team pursuit. Her husband is the speed skater Vitalij Michajlov.
\section{Introduction} Optical circuits make use of light to process information. They operate at the speed of light with almost no energy dissipation, unlike electronic analogs. Optical fibres~\cite{Mollenauer80,Hasegawa95} and photonic crystal fibers~\cite{Russel03} have already found important applications in optical communications and optoelectronic devices. Implementing ultrafast optical sources and all-optical switches based on novel (quantum-confined) materials, such as organic thin films and quantum dots,~\cite{Wada04} as well as silicon-based structures,~\cite{Almeida04} is now in progress. The realizability of a single-photon optical switch based on warm rubidium vapor has recently been demonstrated.~\cite{Dawes05} A key element of any optical logic device is the optical switch, which either passes or reflects the incoming light, depending on its intensity. One possibility to design an optical switch is to utilize the phenomenon of optical bistability. Since the theoretical prediction of this effect by McCall~\cite{McCall74} and its experimental demonstration by Gibbs, McCall, and Venkatesan~\cite{Gibbs76} for a cavity filled with potassium atoms, an extensive literature, both theoretical and experimental, has developed on this topic (see Refs.~\onlinecite{Abraham82,Lugiato84,Gibbs85} for historical overviews and Ref.~\onlinecite{Rosanov96} for recent developments on optical instability in wide aperture laser systems). A generic optical bistable element exhibits two stable stationary transmission states for the same input intensity, a property which in principle opens the door to applications such as all-optical switches, optical transistors, and optical memories. Nonlinearity and feedback are the two necessary ingredients in order to enable optical bistable response of an optical system. The former can be provided, e.g., by a saturable medium, while a cavity (mirrors) can serve to build up a feedback. 
This arrangement has been used in the first demonstration of {\it controlling light with light}.~\cite{Gibbs76} Sometimes, however, the nonlinearity itself plays the role of the feedback. Here, bistability is an {\it intrinsic} property of the material; no {\it external} feedback, like a cavity, is needed. Thus, mirrorless (or cavityless) optical bistability can be realized, which is even more advantageous from the viewpoint of designing all-optical devices. During the past decade, this type of bistability has been observed in a variety of inorganic materials heavily doped with rare-earth ions.~\cite{Hehlen94,Gamelin00,Noginov01,Goldner04} In Refs.~\onlinecite{Hehlen94}, a population dependent dipole-dipole interaction in ion pairs has been put forward as a nonlinearity and feedback mechanism to explain the effect. This interpretation has been debated in a number of papers.~\cite{Bodenschatz96,Guillot-Noel01,% Malyshev98,Gamelin00,Noginov01,Goldner04,Ciccarello03} Another class of materials, promising from the viewpoint of all-optical manipulation of light, are molecular aggregates and conjugated polymers. These systems commonly exhibit narrow absorption bands and suppression of exciton-phonon coupling, superradiance and giant optical nonlinearities, fast collective optical response and efficient energy or charge transport (see for an overview Refs.~\onlinecite{Spano94,Kobayashi96, Hadzii99,% vanAmerongen00,Knoester02,Spano06,Scholes06}), which are ingredients necessary to design optoelectronics or all-optical devices. 
Molecular aggregates and conjugated polymers have already been used to fabricate light emitting diodes~\cite{Greenham93} and organic solid-state lasers.~\cite{Kranzelbinder00} One particularly interesting effect, which has already received a considerable amount of theoretical discussion, but still awaits experimental realization, is the mirrorless optical bistability of a single molecular aggregate~\cite{Malyshev96} or an assembly of molecular aggregates.~\cite{Malyshev00,Jarque01,Glaeske01} The bistable behavior of a {\it single} linear aggregate consists of a sudden switching of the aggregate's excited state population from a low level to a higher one upon a small change of the input intensity around a critical point. The effect originates from a dynamic resonance frequency shift, which depends on the number of excited monomers in the aggregate. The origin of this shift lies in the quasi-fermionic nature of Frenkel excitons in one dimension.~\cite{Chesnut63,Agranovich68,Spano91} This nonlinearity plays the role of {\it intrinsic} feedback, necessary for bistability to occur. There exists, however, a restriction on the aggregate length: an aggregate exhibits bistable behavior only if its coherence length is larger than the emission wavelength, which makes experimental realization problematic. An {\it assembly} of molecular aggregates arranged in an ultrathin film geometry (with the film thickness small compared to the emission wavelength) may display intrinsic optical bistability governed by another mechanism, where the density of molecules becomes the driving parameter. The same mechanism holds for an ultrathin film of homogeneously broadened two-level systems.~\cite{Zakharov88} When the density in the film is high enough, the on-resonance refractive index can get sufficiently large to totally reflect an incoming field of low intensity. 
Then the incoming field is almost completely compensated by a secondary field of opposite phase, which is generated by the aggregate dipoles. The dipole-induced field is bounded in magnitude, meaning that this picture only holds if the incoming field intensity is smaller than a certain value, determined by the density of aggregates. When this value is exceeded, the aggregates become saturated, which suppresses the dipole-induced field and abruptly changes the (nonlinear) refractive index and transmittivity of the film. The field produced by the aggregate dipoles plays the role of {\it intrinsic} feedback. As a result, the output field depends nonlinearly on the input field. In Refs.~\onlinecite{Malyshev00}~-~\onlinecite{Glaeske01} a thin film arrangement of oriented linear J-aggregates was considered, where the localization segments of a single disordered aggregate were modeled as independent homogeneous chains of fluctuating size. Each segment was considered as a few-level system, with an individual ground state and one or two excited states corresponding to the dominant optical states of the segment. Within this framework, both the ground state to one-exciton~\cite{Malyshev00} and one-to-two~\cite{Glaeske01} exciton transitions were taken into account, and bistable behavior was found in a certain region in the parameter space. The approach used in Refs.~\onlinecite{Malyshev00} and~\onlinecite{Glaeske01} assumed full correlation of fluctuations of the lowest exciton energy and the transition dipole moment, taking both magnitudes as solely depending on the segment size. The real picture, however, is quite different.~\cite{Malyshev95,Malyshev01} In practice, the optical response of J-aggregates is strongly affected by disorder in the molecular transition energies. The band-edge of the exciton energy spectrum of such a disordered aggregate is formed by states that are localized on segments with small overlap.
The lowest state of a segment is optically dominant, whereas the other states have a much smaller oscillator strength. The energy of the lowest state is not correlated with the size of the segment; it is determined by uncorrelated well-like fluctuations of the site potential.~\cite{Lifshits68} Therefore, the optically dominant states of non-overlapping segments can be arbitrarily close in energy, having at the same time completely different transition dipoles.~\cite{Bednarz04} In other words, the transition dipoles and energies of the relevant states turn out to be uncorrelated rather than correlated. In this paper, we exploit the two-level model, implemented in Refs.~\onlinecite{Malyshev00} and~\onlinecite{Jarque01}, to describe the film's optical response. However, unlike Refs.~\onlinecite{Malyshev00} and~\onlinecite{Jarque01}, we will account properly for the statistical fluctuations of the transition dipole moment and the transition energy, as they appear after diagonalizing the Frenkel exciton Hamiltonian with uncorrelated on-site disorder. We calculate the joint probability distribution of these quantities and use it to compute the electric polarization of the film, which features in the Maxwell equation for the field. The aggregate segment dynamics is described within the $2\times 2$-density matrix formalism. We derive a novel steady-state equation for the output field intensity as a function of the input intensity in terms of the joint probability distribution of the energy and the transition dipole moment. On this basis, the bistability phase diagram of the film is calculated. The critical parameter for bistability to occur turns out to be different (larger) than that found in Refs.~\onlinecite{Malyshev00}. By numerically solving the truncated Maxwell-Bloch equations in the time domain, we study the stability of the different branches of the three-valued solution for the output field intensity. 
The calculation of an optical hysteresis loop (an adiabatic up-and-down-scan of the field) demonstrates that only two of them are stable. A new element in the paper is that we also analyze switching time between both stable branches, and show that it slows down dramatically close to the switching point. The outline of this paper is as follows. In section~\ref{Sec: Model} we present the model and mathematical formalism. Section~\ref{Sec: Linear regime} deals with the linear regime of the transmission. The steady state equation for the output intensity in the nonlinear regime is derived in Sec.~\ref{Sec: Steady-state analysis}. In Sec.~\ref{Sec: Time-domain analysis}, the stability of different branches is considered, together with a study of the switching time. In Sec.~\ref{Sec: Driving Parameters} we discuss the possibility to achieve optical bistability using J-aggregates of polymethine dyes. Section~\ref{Sec: Summary} summarizes the paper. Finally, in the Appendix we address the effect of interference of the ground state to one-exciton transitions, originating from the fact that excitons are born from the same ground state, with all monomers being unexcited. \section{Model and formalism} \label{Sec: Model} We aim to study the transmittivity of an assembly of linear J-aggregates arranged in a thin film geometry (with the film thickness $L$ small compared to the emission wavelength $\lambda^{\prime}$ inside the film). All aggregates are aligned in one direction, parallel to the film plane. Such an arrangement can be achieved, e.g., by spin-coating.~\cite{Misawa93} The limit of $L \ll \lambda^{\prime}$ allows one to neglect the inhomogeneity of the field inside the film. The aggregates in the film are assumed to be decoupled from each other. This finds its justification in the strongly anisotropic nature of the system we have in mind. As we will see later (Sec. 
VI), films of interest for bistability should have a molecular density of the order of $10^{19}$ cm$^{-3}$. With a typical separation of 1 nm between molecules within a single aggregate, this implies that neighboring aggregates are separated by 10 nm. Thus, the dominant dipole-dipole interactions between molecules of different chains are a factor of 1000 weaker than those within chains. As a consequence, we expect that the former interactions will merely result in small shifts of resonance energies, away from the single-chain exciton energies considered below. On the other hand, the effect of interactions of the aggregate molecules with the surrounding host molecules is important, because as a consequence of the usually inhomogeneous nature of the host media, they lead to disorder in the molecular transition energies and in the molecular transfer integrals, both of which give rise to localization of the exciton states on segments of the aggregates. Finally, thermal fluctuations in the environment result in intraband scattering of the excitons that causes two effects: equilibration of the exciton population and homogeneous broadening of the exciton levels. In this paper, we neglect the former effect. This finds its justification in many experimental studies, which have shown that the fluorescence Stokes shift of J-aggregates of cyanine dyes is usually very small.~\cite{Fidder90,Minoshima94,Moll95,Kamalov96} \subsection{A single aggregate} \label{Single_aggregate} We model a single aggregate as a linear array of $N$ two-level monomers with parallel transition dipoles.
In this paper, we restrict ourselves to optical transitions between the ground state and the one-exciton manifold, described by the Frenkel exciton Hamiltonian \begin{equation} H_0 = \sum_{n=1}^N \> \epsilon_n |n\rangle \langle n| + \sum_{n,m}^N\> J_{nm} \> |n\rangle \langle m| \ , \label{H} \end{equation} where $|n \rangle$ denotes the state in which the $n$th site is excited and all the other sites are in the ground state, and $\epsilon_n$ is the excitation energy of site $n$. The $\epsilon_n$ are taken at random and uncorrelated from each other from a Gaussian distribution with mean $\epsilon_0$ (the excitation energy of an isolated monomer) and standard deviation $\sigma$. The transfer interactions $J_{nm}$ are considered to be of dipolar origin and non-fluctuating: $J_{nm} = - J/|n-m|^{3}$ \, $(J_{nn} \equiv 0)$. Here the parameter $J$ represents the nearest-neighbor transfer interaction, which will be chosen positive (as is appropriate for J-aggregates). The exciton energies $\varepsilon_\nu$ ($\nu = 1,\ldots , N$) and wavefunctions $|\nu\rangle = \sum_{n=1}^N \varphi_{\nu n}|n\rangle$ are obtained as eigenvalues and eigenvectors of the $N \times N$ Hamiltonian matrix $H_{nm} = \langle n| H |m \rangle$. From the set of exciton states $|\nu\rangle$ we only take into account the optically dominant states which, for $J > 0$, reside in the neighborhood of the low-energy bare band edge at $\varepsilon_0 = \epsilon_0 - 2.404 J$. These states are located at different, weakly overlapping segments of the aggregate and have a wavefunction with no node. Therefore, they are called $s$-like states. To find all such states, we use the selection rule proposed in Ref.~\onlinecite{Malyshev01}. It reads $\big|\sum_n \varphi_{\nu n} |\varphi_{\nu n}|\big| > C_0$, where we set $C_0 = 0.8$. This rule selects states whose wavefunction consists mainly of one peak. From now on, the state index $\nu$ will count only such $s$-like states.
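To make this procedure concrete, the diagonalization and the selection rule can be sketched numerically as follows (a minimal sketch; the chain length, disorder strength, and random seed are illustrative choices, with energies in units of $J$ and $\epsilon_0 = 0$):

```python
# Build the disordered Frenkel Hamiltonian of Eq. (1) for one chain,
# diagonalize it, and keep the s-like states selected by the rule
# |sum_n phi_{nu n} |phi_{nu n}|| > C_0 = 0.8.  Energies are in units
# of the nearest-neighbor coupling J; the monomer energy eps_0 is 0.
import numpy as np

def s_like_states(N=100, sigma=0.1, C0=0.8, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, N)                 # diagonal disorder
    n = np.arange(N)
    dist = np.abs(n[:, None] - n[None, :]).astype(float)
    with np.errstate(divide="ignore"):
        J = -1.0 / dist**3                          # dipolar transfer, J > 0
    np.fill_diagonal(J, 0.0)                        # J_nn = 0
    energies, phi = np.linalg.eigh(np.diag(eps) + J)
    C = np.abs((phi * np.abs(phi)).sum(axis=0))     # selection-rule measure
    keep = C > C0                                   # s-like states only
    mu = phi[:, keep].sum(axis=0)                   # dimensionless dipoles
    return energies[keep], mu

eps_s, mu_s = s_like_states()
```

For $\sigma = 0.1J$ the selected states indeed cluster near the bare band edge at $-2.404J$ and carry large transition dipoles, $|\mu_\nu| \sim \sqrt{0.81\,N^*}$.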
The number of these states is roughly equal to $N/N^*$, where $N^*$ is their typical localization size. We assume that the vibration-induced coherence length of excitons is much larger than the disorder-induced localization length, a condition that can be fulfilled at low temperature.~\cite{Heijs05} In this limit, the exciton eigenstates $|\nu\rangle$ form a good basis. The above picture implies that an aggregate is modeled as a set of independent segments, each of which has its own ground state $|0 \rangle$ and an $s$-like excited state $|\nu \rangle$. The optical transition between these states is governed by the segment dipole operator $\hat{d}_{\nu} = d_0 (|0\rangle \langle \nu| + |\nu \rangle \langle 0|)$, where $d_0$ is the transition dipole moment of a monomer. The corresponding transition dipole moment of a segment is calculated as $d_{\nu} = d_0 \sum_{n} \varphi_{\nu n} \equiv d_0 \mu_{\nu}$, where $\mu_{\nu} = \sum_{n} \varphi_{\nu n}$ is the dimensionless transition dipole moment. The optical dynamics of a segment is described in terms of the $2\times 2$-density matrix ($\rho_{\nu\nu}, \rho_{\nu 0}, \rho^*_{\nu 0}, \rho_{00}$) which obeys the Bloch-like equations (see the Appendix) \begin{subequations} \label{Eq: Density matrix} \begin{equation} \dot{\rho}_{\nu\nu} = -\gamma_{\nu} \rho_{\nu\nu} + i d_{\nu}\mathcal{E}\left(\rho_{0\nu} - \rho_{\nu 0}\right) \ , \end{equation} \begin{equation} \dot{\rho}_{\nu 0} = -\left(i\varepsilon_{\nu} + \Gamma_{\nu} \right)\rho_{\nu 0} - i d_{\nu} \mathcal{E} (\rho_{\nu\nu} - \rho_{00})\ , \end{equation} \begin{equation} \label{norma} \rho_{00} + \rho_{\nu\nu} = 1 \ . 
\end{equation} \end{subequations} Here we set the Planck constant $\hbar = 1$ and introduced the following notations: $\gamma_{\nu} = \gamma_0 |\mu_{\nu}|^2$ is the radiative rate of the exciton state $\nu$ ($\gamma_0$ being the monomer radiative rate), and $\Gamma_{\nu} = \frac{1}{2} \gamma_{\nu} + \gamma_{\nu 0}$ is the dephasing rate of the state $\nu$, which includes a pure dephasing term, $\gamma_{\nu 0}$. Finally, $\mathcal{E}$ is the total electric field inside the film (see below). Owing to the disorder, the transition energy $\varepsilon_{\nu}$, the relaxation constant $\Gamma_{\nu}$, and the transition dipole moment $\mu_{\nu}$ are stochastic variables, which differ from segment to segment. Because of the fluctuations in $\varepsilon_{\nu}$, $\Gamma_{\nu}$, and $d_{\nu}$, the density matrix elements $\rho_{\nu\nu}$, $\rho_{\nu 0}$, and $\rho_{00}$ fluctuate as well. \subsection{The Maxwell equation} \label{ME} In this section, we specify the field $\mathcal{E}$ that enters Eqs.~(\ref{Eq: Density matrix}). It consists of two contributions: the incoming field $\mathcal{E}_i$ and a part produced by the aggregate dipoles. The incoming field is considered to be a plane wave $\mathcal{E}_i = E_i(x,t) \cos(k_i x - \omega_i t)$ with a frequency $\omega_i$ and an amplitude $E_i(x,t)$, normally incident and polarized along the aggregate transition dipoles. Under these conditions, all the vectorial variables (transition dipole moments, incoming and outgoing fields, and the field inside the film) can be considered as scalars. The amplitude $E_i(x,t)$ is assumed to vary slowly on the scale of the optical period $2\pi/\omega_i$ and wavelength $\lambda_i = 2\pi/k_i$. We assume without loss of generality that the film is located in the ZY plane ($x = 0$).
Then the total field at $x = 0$ (inside the film) is given by~\cite{Benedict88,Benedict96} \begin{equation} \label{Eq:Maxwell for a thin film} \mathcal{E} = \mathcal{E}_i - \frac{2\pi L}{c}\dot{\mathcal{P}} \ , \end{equation} where $\mathcal{P}$ is the electric polarization of the film (the electric dipole moment per unit volume), the dot denotes the time derivative, and $c$ stands for the speed of light. The second term in the right-hand side of Eq.~(\ref{Eq:Maxwell for a thin film}) represents the field produced by the dipoles in the film, emitted perpendicular to the film in both directions. The part propagating to the left is the reflected (plane wave) field, given at $x = 0$ by $\mathcal{E}_{r} = -(2\pi L/c)\dot{\mathcal{P}}$, while the part propagating to the right is the emitted (also plane wave) field, which forms, together with the incident field $\mathcal{E}_i$, the transmitted signal, determined at $x = 0$ by Eq.~(\ref{Eq:Maxwell for a thin film}). The electric polarization $\mathcal{P}$ is calculated as follows. First, we introduce the expectation value of the dipole operator of an aggregate, $d = d_0 \sum_{\nu \in s}\> \mu_{\nu} (\rho_{\nu 0} + \rho_{0 \nu})$, where the summation is performed only over the $s$-like states of the aggregate. Furthermore, this value is averaged over a physical volume $V$ containing $M$ aggregates, which, in fact, is equivalent to obtaining the average $\langle d \rangle$ over disorder realizations. After that, $\mathcal{P}$ is obtained by multiplying $\langle d \rangle$ with the number density $M/V$ of the aggregates. The final formula for the electric polarization reads: \begin{equation} \label{P} \mathcal{P} = d_0 n_0 \frac{N_s}{N}\int d\varepsilon d\mu \, \mathcal{G}_s(\varepsilon,\mu)\, \mu \, [\, \rho_{10}(\varepsilon,\mu,t) + \mathrm{c.c.}] \ .
\end{equation} Here, $n_0 = NM/V$ is the number density of monomers, $N_s = \Big\langle \sum_{\nu \in s} \> 1 \Big\rangle$ is a normalization constant (having the meaning of the average number of $s$-like states in an aggregate), and $\rho_{10}(\varepsilon,\mu,t)$ is the off-diagonal density matrix element, where the indices 0 and 1 label the ground and the excited $s$-state of the segment, respectively. In our present formulation this element, as well as $\rho_{00}$ and $\rho_{11}$, are ordinary (not stochastic) functions of $\varepsilon$ and $\mu$, which formally follow from solving Eqs.~(\ref{Eq: Density matrix}). All stochastic aspects of the segment's properties are taken into account through the function $\mathcal{G}_s(\varepsilon,\mu)$, which represents the joint probability distribution of the transition energy $\varepsilon$ and the dimensionless transition dipole moment $\mu$ of the segment. The latter is defined as \begin{equation} \label{G} \mathcal{G}_s(\varepsilon,\mu) = \frac{1}{N_s} \left\langle \sum_{\nu \in s} \delta\Big(\varepsilon - \varepsilon_{\nu}\Big) \delta \Big(\mu - \mu_{\nu} \Big)\right\rangle. \end{equation} It is worth noticing that at a given disorder strength $\sigma$, $N_s$ scales linearly with the aggregate size $N$. Hence, the ratio $N_s/N$ in Eq.~(\ref{P}) is $N$-independent.
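The disorder average behind Eq.~(\ref{G}) lends itself to straightforward Monte Carlo sampling. The following sketch (with the chain length and number of realizations scaled down purely for illustration) collects the pairs $(\varepsilon_\nu, \mu_\nu)$ of the $s$-like states over many realizations:

```python
# Monte Carlo sampling of the joint distribution G_s(eps, mu), Eq. (5):
# diagonalize the disordered Hamiltonian (1) for many realizations, keep
# the s-like states (selection rule with C_0 = 0.8), and accumulate their
# transition energies and dimensionless dipole moments.
import numpy as np

def sample_Gs(n_real=50, N=100, sigma=0.1, C0=0.8, seed=1):
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    dist = np.abs(n[:, None] - n[None, :]).astype(float)
    with np.errstate(divide="ignore"):
        J = -1.0 / dist**3                       # dipolar transfer integrals
    np.fill_diagonal(J, 0.0)
    energies, dipoles = [], []
    for _ in range(n_real):
        H = J + np.diag(rng.normal(0.0, sigma, N))
        eps, phi = np.linalg.eigh(H)
        keep = np.abs((phi * np.abs(phi)).sum(axis=0)) > C0
        energies.extend(eps[keep])
        dipoles.extend(phi[:, keep].sum(axis=0))
    Ns_over_N = len(energies) / (n_real * N)     # average density of s-states
    return np.array(energies), np.array(dipoles), Ns_over_N

eps_s, mu_s, ratio = sample_Gs()
```

A two-dimensional histogram of the collected pairs (e.g., `np.histogram2d(eps_s, mu_s, density=True)`) approximates $\mathcal{G}_s(\varepsilon,\mu)$; weighting the energy histogram with $\mu^2$ yields the absorption spectrum, and the returned ratio estimates $N_s/N$.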
From our simulations we found that $N_s/N = 0.074 (\sigma/J)^{0.8}$, which nicely agrees with the disorder scaling of the typical localization size $N^*$.~\cite{Malyshev01} \begin{figure} \begin{center} \includegraphics[width = .48\textwidth,scale=1]{REAL_NEW_Figure_Distribution_a.eps} \end{center} \begin{center} \includegraphics[width = .48\textwidth,scale=1]{REAL_NEW_Figure_Distribution_bc.eps} \end{center} \caption{(a) The joint probability distribution $\mathcal{G}_s(\varepsilon,\mu)$ of the transition energy $\varepsilon$ and dimensionless transition dipole moment $\mu$ for $s$-like states on localization segments, obtained for a disorder strength $\sigma = 0.1 J$ according to Eq.~(\ref{G}). We used chains of length $N=500$ with the monomer transition energy $\epsilon_0 = 0$. The sampling was performed over 300 000 disorder realizations. Contour lines correspond to 10\% of the peak value of the distribution. (b) - The absorption spectrum $\mathcal{A}_s(\varepsilon) = \int d\mu \> \mu^2 \mathcal{G}_s(\varepsilon,\mu)$. (c) - The distribution $\mathcal{M}_s(\mu) = \int d\varepsilon \, \mathcal{G}_s(\varepsilon,\mu)$ of the transition dipole moment $\mu$.
The solid lines represent the results of calculations, whereas the open circles are fits by a Gaussian.} \label{fig: Example G} \end{figure} After the ${\cal G}_s$-distribution is obtained by straightforward sampling of a sufficient number of disorder realizations, one can easily calculate two important quantities: ${\cal A}_s(\varepsilon) = N_s^{-1} \big\langle \sum_{\nu \in s} \mu^2_{\nu} \delta\big(\varepsilon - \varepsilon_{\nu}\big) \big\rangle = \int d\mu \> \mu^2 \, \mathcal{G}_s(\varepsilon,\mu)$, which represents the absorption spectrum, not accounting for homogeneous broadening (i.e., close to the zero-temperature spectrum), and $\mathcal{M}_s(\mu) = N_s^{-1} \big\langle \sum_{\nu \in s} \delta \big(\mu - \mu_{\nu} \big)\big\rangle = \int d\varepsilon \, \mathcal{G}_s(\varepsilon,\mu)$, which represents the probability density of the transition dipole moment. As we are mostly interested in the limit of dominating inhomogeneous broadening, from now on we will refer to ${\cal A}_s(\varepsilon)$ as the absorption spectrum, assuming that its half width at half maximum (HWHM) $\sigma^*$ is larger than the homogeneous HWHM (resulting from $\Gamma_{\nu}$). An example of the distributions $\mathcal{G}_s(\varepsilon,\mu), {\cal A}_s(\varepsilon)$, and $\mathcal{M}_s(\mu)$, computed for an ensemble of chains with $N = 500$ and a disorder strength $\sigma = 0.1 J$, is depicted in Fig.~\ref{fig: Example G} [panels (a), (b), and (c), respectively]. Because $\mathcal{G}_s(\varepsilon,\mu) = \mathcal{G}_s(\varepsilon,-\mu)$, only $\mu > 0$ is considered in the plots. Note that in our model, the absorption spectrum $\mathcal{A}_s(\varepsilon)$ is almost symmetric with respect to the peak position, except for the tails, which show a small asymmetry. It can be fitted well by a Gaussian, unlike the case when all the exciton states are taken into account.
The latter gives rise to a Lorentzian high-energy tail of $\mathcal{A}_s(\varepsilon)$, reproducing the asymmetric lineshape commonly seen in experiments. The shape of the $\mathcal{M}_s$-distribution can also be fitted by a Gaussian, but with less accuracy than the absorption spectrum. The distribution $\mathcal{G}_s(\varepsilon,\mu)$ exhibits interesting scaling properties with regard to the disorder strength $\sigma$. A detailed study will be presented elsewhere. \subsection{Truncated Maxwell-Bloch equations} \label{MBE} To proceed, we seek the solution of Eqs.~(\ref{Eq: Density matrix}) in the standard manner: we set $\rho_{10} = -(i/2) R \exp{(-i\omega_i t)}$ and $\mathcal{E} = (1/2)E \exp{(-i \omega_i t)} + \mathrm{c.c.}$, where the complex amplitudes $R$ and $E$ vary slowly on the time scale $2\pi/\omega_i$, and we use the rotating wave approximation. It is straightforward to arrive at a set of truncated equations for the populations $\rho_{11}$ of the one-exciton states, the amplitudes $R$ of the off-diagonal density matrix elements, and the field $\Omega = d_0 E$ (in frequency units): \begin{subequations} \label{Eq: Density matrix truncated} \begin{equation} \dot{\rho}_{11} = -\gamma \rho_{11} - \frac{1}{4} \mu \left(\Omega R^* + \Omega^*R \right) \ , \end{equation} \begin{equation} \dot{R} = -\left[i(\Delta - \Delta_0) + \Gamma \right]R + \mu \Omega (\rho_{11} - \rho_{00})\ , \end{equation} \begin{equation} \Omega =\Omega_i + \Gamma_R \> \frac{N_s}{N}\, \int d\Delta d\mu \, \mathcal{G}_s(\Delta,\mu)\, \mu R \ , \end{equation} \end{subequations} where $\Delta - \Delta_0 = \varepsilon - \omega_i$ is the frequency detuning between the exciton transition and the incoming field, which is decomposed into two parts: $\Delta = \varepsilon - \varepsilon_0$ and $\Delta_0 = \omega_i - \varepsilon_0$, indicating, respectively, the frequency detuning of the exciton state and of the incoming field from the exciton band-edge frequency $\varepsilon_0 = \epsilon_0 -
2.404 J$. The constant $\Gamma_R = 2\pi n_0 {d_0}^2 k L$ is an important parameter of the model.~\cite{Malyshev00,Jarque01,Glaeske01} The physical meaning of $\Gamma_R$ can be explored by rewriting it in the form $\Gamma_R = \frac{3}{2\pi}\gamma_0 n_0 L (\lambda/2)^2$, where $\gamma_0 = 4 d_0^2 \omega^3/(3c^3)$ is the monomer spontaneous emission rate and $n_0 L$ is the surface density of monomers. The quantity $n_0 L(\lambda/2)^2$ can be interpreted as the number of monomers in a $(\lambda/2)^2$-square that oscillate in phase. In other words, $\Gamma_R$ can be considered as the radiative rate of a single monomer, $\gamma_0$, enhanced by the number of monomers within a $(\lambda/2)^2$-square.~\cite{Lee74} $\Gamma_R$ governs the Dicke superradiance of a thin film,~\cite{Benedict88,Benedict96} as well as the collective radiative damping in the linear regime (see the next section). Therefore, it is usually referred to as the superradiant constant. The set of equations~(\ref{Eq: Density matrix truncated}), together with the normalization condition~(\ref{norma}) and the definition~(\ref{G}), forms the basis of our analysis. In the remainder of this paper, we will be particularly interested in the dependence of the transmitted field intensity $|\Omega|^2$ on the input field intensity $|\Omega_i|^2$ following from these equations. \section{Linear regime} \label{Sec: Linear regime} In order to get insight into the effect and interplay of the parameters that govern the bistability, we first consider the linear regime of the system. We assume that a weak input field $\Omega_i = \text{const}$ is switched on at $t=0$, weakness implying that the depletion of the ground state population can be neglected.
Thus, we set $\rho_{00}(t) = 1$ and $\rho_{11}(t) = 0$, which linearizes Eqs.~(\ref{Eq: Density matrix truncated}), \begin{subequations} \label{Eq: Density matrix_truncated linear} \begin{equation} \dot{R} = -\left[i(\Delta - \Delta_0) + \Gamma \right]R - \mu \Omega \ , \end{equation} \begin{equation} \Omega =\Omega_i + \Gamma_R \> \frac{N_s}{N}\int d\Delta d\mu \, \mathcal{G}_s(\Delta,\mu)\, \mu R. \end{equation} \end{subequations} These equations can be solved easily in the Laplace domain. The solution for the Laplace transform of the transmitted field $\tilde{\Omega}$ reads \begin{eqnarray} \label{Eq: Linear solution 1} \tilde{\Omega} = \Big[1 + \Gamma_R \> \frac{N_s}{N} \int d\Delta d\mu \, \mathcal{G}_s(\Delta,\mu)\, \mu^2 \nonumber\\ \nonumber \\ \times \frac{1}{p + \left[i(\Delta - \Delta_0) + \Gamma \right]}\Big]^{-1}\> \tilde{\Omega}_i \ , \end{eqnarray} where $p$ denotes the Laplace parameter. To evaluate this expression, we neglect the $\mu$-dependence of $\Gamma$. Then the integral over $\mu$ gives, by definition, the absorption spectrum $\mathcal{A}_s(\Delta)$. The latter is normalized now to $F_s/N_s$, where $F_s = \big \langle \sum_{\nu \in s}\> \mu_{\nu}^2 \big \rangle$ is the average total oscillator strength of the $s$-like states per aggregate. To perform the integration over $\Delta$ explicitly, we replace $\mathcal{A}_s(\Delta)$ by a Lorentzian centered at $\Delta^*$ and with a width $\sigma^*$: \begin{equation} \label{Eq: A Lorentzian} \mathcal{A}_s(\Delta) = \frac{F_s}{N_s} \> \frac{\sigma^*}{\pi} \> \frac{1}{\left[(\Delta - \Delta^*)^2 + {\sigma^*}^2 \right]} \end{equation} (in all our numerical results, we do not invoke this approximation and keep the exact form of the ${\cal G}_s$-distribution, i.e., of the absorption spectrum).
With this substitution, the result of the integration over $\Delta$ reads: \begin{eqnarray} \label{Eq: Linear solution 2} \tilde{\Omega} = \tilde{\Omega}_i - \frac{\tilde{\Gamma}_R} {p + i(\Delta^* - \Delta_0) + \Gamma + \sigma^* + \tilde{\Gamma}_R } \> \tilde{\Omega}_i \ , \end{eqnarray} where we introduced the renormalized superradiant constant $\tilde{\Gamma}_R = (F_s/N)\Gamma_R$. As the total oscillator strength of the $s$-like states $F_s < N$, the ratio $F_s/N < 1$. We also note that $\Gamma + \sigma^*$ denotes the total (homogeneous plus inhomogeneous) dephasing rate. Finally, by assuming $\Omega_i = \text{const}$, which corresponds to $\tilde{\Omega}_i = \Omega_i/p$ in the Laplace domain, we obtain the following time-domain behavior of the transmitted field \begin{eqnarray} \label{Eq: Linear solution 3} \Omega & = & \frac{i(\Delta^* - \Delta_0) + \Gamma + \sigma^*} {i(\Delta^* - \Delta_0) + \Gamma + \sigma^* + \tilde{\Gamma}_R} \> \Omega_i \nonumber\\ \nonumber\\ & + & \frac{\tilde{\Gamma}_R}{i(\Delta^* - \Delta_0) + \Gamma + \sigma^* + \tilde{\Gamma}_R } \> \Omega_i \nonumber\\ \nonumber\\ & \times & \exp\left[ -i(\Delta^* - \Delta_0)t - (\Gamma + \sigma^* + \tilde{\Gamma}_R) t \right] \ . \end{eqnarray} As is seen from this equation, the field in the film, $\Omega$, reaches its steady-state value (given by the first term in the right-hand side) after a time $1/(\Gamma + \sigma^* + \tilde{\Gamma}_R)$. If the dephasing dominates the relaxation of the dipoles, i.e., if $\Gamma + \sigma^* \gg \tilde{\Gamma}_R$, the steady-state limit of the opposing dipole field, given by $- \Omega_i\tilde{\Gamma}_R/[i(\Delta^* - \Delta_0) + \Gamma + \sigma^* + \tilde{\Gamma}_R]$, is small in magnitude compared to the incoming field $\Omega_i$. As a consequence, the field inside the film $\Omega \approx \Omega_i$. In this limit, one finds a \textit{high} film transmittivity. When $\tilde{\Gamma}_R \gg \Gamma + \sigma^*$, the superradiant damping drives the relaxation.
Now the film dipoles, having sufficient time to respond collectively, can produce an opposing field $- \Omega_i\tilde{\Gamma}_R/|i(\Delta^* - \Delta_0) + \Gamma + \sigma^* + \tilde{\Gamma}_R|$ of magnitude $\approx \Omega_i$. This field almost totally compensates the incoming field, and results in a low magnitude of the field inside the film, $|\Omega| \sim \Omega_i|i(\Delta^* - \Delta_0) +\Gamma + \sigma^*|/\tilde{\Gamma}_R \ll \Omega_i$, and, consequently, in a \textit{low} film transmittivity. Switching to a high-transmission state now requires a field amplitude $\Omega_i$ that saturates the system. In this case, optically bistable switching can be observed (see the next section). From the above, it is clear that the interplay of superradiance and dephasing determines the linear transmittivity of the film. Hence, the ratio $\tilde{\Gamma}_R/(\Gamma + \sigma^*)$ is an important parameter of the model. In the theory of bistability it is often referred to as the \textit{cooperative number}.~\cite{Lugiato84,Gibbs85} \section{Steady-state analysis} \label{Sec: Steady-state analysis} \subsection{Bistability equation} \label{Sec: Bistability equation} In this section, we consider the steady-state regime, when we set $\Omega_i = \text{const}$ and $\dot{R} = \dot{\rho}_{11} = 0$. It is a matter of simple algebra to derive the following equation for the output intensity $|\Omega|^2$: \begin{eqnarray} \label{Eq: Steady state output field} |\Omega_i|^2 & = & |\Omega|^2 \Big| 1 + \Gamma_R \> \frac{N_s}{N} \int d\Delta d\mu \, \mu^2 \mathcal{G}_s(\Delta,\mu) \nonumber\\ \nonumber\\ & \times & \frac{\Gamma - i(\Delta - \Delta_0)} {(\Delta - \Delta_0)^2 + \Gamma^2 + |\Omega|^2 \Gamma/\gamma_0} \Big|^2 \ . \end{eqnarray} Formally, Eq.~(\ref{Eq: Steady state output field}) differs from the one found previously~\cite{Malyshev00} by the small factor $N_s/N$.
This smallness, however, is compensated by the $N_s$-scaling of the integral in~(\ref{Eq: Steady state output field}): the latter is proportional to $F_s/N_s \gg 1$ (see the preceding section). Thus, the actual numerical factor in Eq.~(\ref{Eq: Steady state output field}) is of the order of $F_s/N$. Numerically, we found that $F_s/N$ depends only weakly on the disorder strength $\sigma$, lying within the interval $0.75 \le F_s/N \le 0.83$ when the disorder strength $\sigma$ ranges from 0 to $0.5J$. This means that the linear optical response in a system with static disorder is dominated by the $s$-like states, independent of the disorder. We stress that, unlike previous works,~\cite{Malyshev00} Eq.~(\ref{Eq: Steady state output field}) properly accounts for the joint statistics of the transition energy and the transition dipole moment via the $\mathcal{G}_s$-distribution. \subsection{Phase diagram} \label{Sec: Phase diagram} Numerical analysis shows that Eq.~(\ref{Eq: Steady state output field}) can have three real roots in a certain region of the parameter space $(\Gamma_R,\sigma^*)$. In other words, our model can exhibit bistable behavior. In all simulations, we used linear chains of $N = 500$ sites and $\gamma_0 = 2\times 10^{-5}J$ (appropriate for monomers of polymethine dyes). The dephasing constant $\gamma_{\nu 0}$ was considered to be non-fluctuating~\cite{Heijs05} and was set to $\gamma_{\nu 0} = 500 \gamma_0$. \begin{figure} \begin{center} \includegraphics[width = 0.45\textwidth,scale=1.1]{REAL_NEW_Figure_Steady_state.eps} \end{center} \caption{Examples of the input-output characteristics, demonstrating the occurrence of three-valued solutions to Eq.~(\ref{Eq: Steady state output field}). In the simulations, chains of $N = 500$ sites and a disorder strength $\sigma = 0.1 J$ were used, corresponding to a HWHM $\sigma^* = 0.0156 J$.
(a) - The results obtained for different superradiant constants $\Gamma_R$ at the optimal detuning $\Delta_0^{\mathrm{opt}} = -2.42 J$, which corresponds to an incoming field which is resonant with the absorption maximum. The open circles, dotted, and solid curves represent, respectively, the data calculated for $\Gamma_R = 16.61 \sigma^* $ (the bistability threshold for $\sigma = 0.1 J$), $\Gamma_R = 11.52 \sigma^*$ (below the bistability threshold), and $27.12 \sigma^*$ (above the bistability threshold). (b) - The results obtained for $\Gamma_R = 16.61 \sigma^*$ and various detunings $\Delta_0$. The dotted and solid curves represent, respectively, the data calculated for $\Delta_0 = \Delta_0^{\mathrm{opt}} - \sigma^*$, and $\Delta_0^{\mathrm{opt}} + \sigma^*$. The open circles show the same data as in panel (a). } \label{Fig: Steady State} \end{figure} Several examples of the $S$-shaped input-output characteristics calculated for the disorder degree $\sigma = 0.1 J$ are shown in Fig.~\ref{Fig: Steady State} for an input field that is resonant with the absorption maximum. We use the dimensionless intensities $I_{\mathrm{in}} = |\Omega_i|^2/(\gamma_0 \sigma^*)$ and $I_{\mathrm{out}} = |\Omega|^2/(\gamma_0 \sigma^*)$, which is convenient because the HWHM of the absorption spectrum $\sigma^*$ is an experimentally measurable quantity. Panel (a) shows how the input-output characteristics change when $\Gamma_R$ is below, at, or above its critical value. Panel (b) shows the input-output characteristics when the field is tuned through the resonance. \begin{figure}[ht] \begin{center} \includegraphics[width = 0.45\textwidth,scale=1.1]{REAL_NEW_Figure_Phase_Diagram_Delta_I.eps} \end{center} \caption{(a) - Dependence of the critical superradiant constant $\Gamma_R^c$ on the detuning $\Delta_0$ (solid line) calculated for the disorder strength $\sigma = 0.1 J$. The dashed line shows the absorption spectrum (absorption only due to $s$-states). 
The dotted horizontal line indicates $\Gamma_R^c$ calculated for the optimal detuning $\Delta^{\mathrm{opt}}_0 = -2.42 J$. (b) - Dependence of the switching intensity $I_{\mathrm{in}}^c$ on the detuning $\Delta_0$ calculated at the corresponding bistability threshold, i.e., with $\Gamma_R^c$ given in the panel (a). } \label{Fig: Phase diagram} \end{figure} At a given disorder strength $\sigma$, the minimal value of the superradiant constant $\Gamma_R$ needed for optical bistability (the \textit{critical} value $\Gamma_R^c$) depends on the detuning $\Delta_0$. Figure~\ref{Fig: Phase diagram}(a) explicitly demonstrates this effect: $\Gamma_R^c$ is almost constant within the absorption band, whereas it clearly increases outside it. Panel (b) shows the $\Delta_0$-dependence of the critical switching intensity $I_{\mathrm{in}}^c$ of the incoming field at the bistability threshold. This is the lowest intensity at which the film can switch, when the field is tuned at $\Delta_0$, and when the superradiance constant $\Gamma_R = \Gamma_R^c(\Delta_0)$. The data presented here is obtained for the disorder strength $\sigma = 0.1 J$. \begin{figure}[th] \begin{center} \includegraphics[width = 0.48\textwidth,scale=1.1]{REAL_NEW_Figure_Phase_Diagram_sigma.eps} \end{center} \caption{(a) - Phase diagram of the bistable optical response of a thin film in the $(\Gamma_R,\sigma^*)$-space obtained by solving Eq.~(\ref{Eq: Steady state output field}) for $\Delta=\Delta_0^{\mathrm{opt}}$. The open circles represent the numerical data points, whereas the solid line is a guide to the eye. Above (below) the solid line the film behaves in a bistable (stable) fashion. The solid line itself represents the $\sigma^*$-dependence of the critical superradiant constant $\Gamma_R^c$, calculated for the optimal detuning $\Delta_0^{\mathrm{opt}}$, i.e., when the incoming field is tuned to the absorption band maximum. This gives the minimal $\Gamma_R^c$ for each $\sigma^*$. 
(b) - The same data points as in panel (a), only replotted as a function of $\Gamma^*$, where $\Gamma^*$ is the mean value of the relaxation constant $\Gamma$.} \label{Fig: Phase diagram sigma} \end{figure} As is seen from Fig.~\ref{Fig: Phase diagram}(a), there exists a detuning $\Delta_0^{\mathrm{opt}}$, referred to as {\it the optimal one}, at which $\Gamma_R^c$ takes its minimal value. The detuning is optimal if the imaginary term in Eq.~(\ref{Eq: Steady state output field}) vanishes: this term opposes a three-valued solution for the output field. For a symmetric absorption band, the optimal detuning corresponds to the incoming field being resonant with the absorption maximum. In our case, owing to a small asymmetry of the absorption band [see Fig.~\ref{fig: Example G}(b)], $\Delta_0^{\mathrm{opt}} = - 2.42 J$ is shifted slightly to the blue from the position of the absorption peak. We calculated $\Gamma_R^c$ as a function of the HWHM $\sigma^*$ for the optimal detuning. The result is shown in Fig.~\ref{Fig: Phase diagram sigma}. The plot represents, in fact, the phase diagram of the optical response: below the curve, the output-input characteristic of the film is always single-valued (stable), while, depending on the detuning, it can become three-valued (bistable) above it. The nonmonotonic behavior of $\Gamma_R^c$ at small magnitudes of $\sigma^*$, presented in panel (a), is simply explained by the fact that the disorder-induced (inhomogeneous) broadening becomes smaller than the homogeneous one, $\sigma^* < \Gamma^*$, where $\Gamma^*$ is defined as $\Gamma^* = \int d\mu d\varepsilon \, \Gamma {\cal G}_s(\varepsilon,\mu)$. The ratio $\Gamma_R/\Gamma^*$ is now the relevant parameter governing the occurrence of bistability. Panel (b) shows the $\sigma^*$-dependence of $\Gamma_R^c$ replotted in units of $\Gamma^*$, which is monotonic. When $\sigma \to 0$, the ratio $\Gamma_R^c/\Gamma^* \to 9.64$.
This value is deduced from Eq.~(\ref{Eq: Steady state output field}). Indeed, in the limit of $\sigma \to 0$ we can move the Lorentzian outside the integral and use $\int d\Delta d\mu \, \mu^2 {\mathcal G}_s(\Delta,\mu) = F_s/N_s$. The resulting equation is the same as for a thin film of homogeneously broadened two-level systems, only with the renormalized cooperative number ${\tilde \Gamma}_R/\Gamma^* = (\Gamma_R/\Gamma^*)(F_s/N)$, where $F_s/N = 0.83$. Bearing in mind that the critical value of the ratio ${\tilde \Gamma}_R/\Gamma^*$ is equal to 8,~\cite{Zakharov88} we recover $\Gamma_R^c/\Gamma^* = 9.64$. \subsection{Spectral distribution of the exciton population} \label{Sec: Spectral distribution} More insight into what occurs at the switching threshold is obtained by studying the population distribution \begin{equation} \label{Eq: Population distribution} r_{11}(\Delta) = \int d\mu {\cal G}_s(\Delta,\mu) \rho_{11}(\Delta,\mu) \ , \end{equation} with $\rho_{11}$ the steady-state solution of Eqs.~(\ref{Eq: Density matrix truncated}). This distribution enables us to visualize the relevant spectral range around the switching point. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{REAL_NEW_Figure_Populations.eps} \end{center} \caption{Population distributions $r_{11}(\Delta)$ (solid curves), calculated according to Eq.~(\ref{Eq: Population distribution}) for $\sigma = 0.1J$ and $\Gamma_R = 27.12 \sigma^*$, with the optimal detuning $\Delta_0^{\mathrm{opt}} = -2.42 J$ indicated by the vertical dashed line. Open circles show the absorption spectrum $\mathcal{A}_s(\Delta)$. Panel (a) represents $r_{11}(\Delta)$ below the upper switching threshold. The plotted distributions were calculated for the input intensities $I_{\mathrm{in}} = |\Omega_i|^2/(\gamma_0\sigma^*) = 3.33, 64.34$, and $81.78$ (from bottom to top). Panel (b) shows $r_{11}(\Delta)$ above the upper switching threshold.
In the inset, the dependence of the full width at half maximum (FWHM) of $r_{11}(\Delta)$ on $I_{\mathrm{in}}$ is plotted in units of the FWHM of the absorption spectrum.} \label{Fig: Excited state population} \end{figure} In Fig.~\ref{Fig: Excited state population}, we plotted $r_{11}(\Delta)$ calculated for the optimal detuning $\Delta_0^{\mathrm{opt}}$ and $\Gamma_R = 27.12 \sigma^*$ (above the critical value $\Gamma_R^c$). Panels (a) and (b) show the results obtained for incoming field intensities $I_{\mathrm{in}} = |\Omega_i|^2/(\gamma_0 \sigma^*)$ below and above the switching threshold, respectively. Below the switching threshold, only a relatively narrow spectral region around $\Delta^{\mathrm{opt}}_0$ acquires population. This is because, in spite of the intensities of the incoming field $I_{\mathrm{in}} = 3.33, 64.34$, and $81.78$ being far above the saturation value, the intensity of the field inside the film, $I_{\mathrm{out}} = |\Omega|^2/(\gamma_0 \sigma^*) = 0.025$, $0.5$, and $1.5$, is below or on the order of it. For these intensities, the one-exciton approximation, with only one $s$-like excited state considered in each localization segment, is reasonable. Figure~\ref{Fig: Excited state population}(b) represents the population distribution $r_{11}(\Delta)$ after switching, when the field inside the film $I_{\mathrm{out}}$ exceeds the switching threshold and becomes much larger than the saturation magnitude. In this limit, we can replace $\rho_{11}(\Delta,\mu)$ in Eq.~(\ref{Eq: Population distribution}) by 0.5 and get $r_{11}(\Delta) = 0.5 \int d\mu {\cal G}_s(\Delta,\mu) = 0.5 {\cal D}_s(\Delta)$, where ${\cal D}_s(\Delta)$ is the density of $s$-like states. The latter is plotted in Fig.~\ref{Fig: Excited state population}(b) by the solid line and appears to be wider than the absorption band. For such field intensities, it is likely that the two-level model should be corrected by including the one-to-two exciton transitions.
This work is now in progress. \section{Time-domain analysis} \label{Sec: Time-domain analysis} \subsection{Hysteresis loop} \label{Sec: Hysterisis loop} It is well known that the S-shaped output-input dependence and, as a consequence, the existence of two switching thresholds result in optical hysteresis.~\cite{Lugiato84,Gibbs85} To investigate this, we numerically integrated Eqs.~(\ref{Eq: Density matrix truncated}) while slowly sweeping the input intensity $I_{\mathrm{in}}$ up and down above the bistability threshold ($\Gamma_R > \Gamma_R^c$). The result for the transmitted intensity $I_{\mathrm{out}}$ is shown in Fig.~\ref{Fig: Hysteresis} by the solid curve with arrows. The parameters used in the calculations are specified in the figure caption. The input field intensity was swept from zero to 110 and back to zero. The open circles indicate the steady-state solution obtained by solving Eq.~(\ref{Eq: Steady state output field}) for the same set of parameters. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{REAL_NEW_Figure_Hysteresis.eps} \end{center} \caption{An example of the stable optical hysteresis loop of the transmitted intensity $I_{\mathrm{out}} = |\Omega|^2/(\gamma_0\sigma^*)$ (the solid curve with arrows) obtained by numerically solving Eqs.~(\ref{Eq: Density matrix truncated}) for a linear up-and-down sweep of the input field intensity $I_{\mathrm{in}} = |\Omega_i|^2/(\gamma_0\sigma^*)$. The sweeping time is $3000/\sigma^*$. The open circles represent the steady-state solution, Eq.~(\ref{Eq: Steady state output field}). 
The calculations were performed for the following set of parameters: $\Gamma_2 = 500\gamma_0$, $\sigma = 0.1 J$, $\Gamma_R = 27.12 \sigma^*$, and $\Delta_0 = \Delta_0^{\mathrm{opt}} = -2.42 J$.} \label{Fig: Hysteresis} \end{figure} As can be seen from Fig.~\ref{Fig: Hysteresis}, the solid curve almost perfectly follows the lower and upper branches of the steady-state three-valued solution, nicely demonstrating the optical hysteresis. The intermediate branch is not revealed, which is clear evidence of its instability. Note also that switching from the lower branch to the upper one occurs for an input field intensity larger than the critical value. This indicates that when the input field intensity is only slightly above the switching intensity, the response of the film slows down. A much less pronounced but similar effect can be observed at the lower switching threshold, where the field switches from the upper branch to the lower one. This is consistent with our study of the relaxation time presented below. \subsection{Switching time} \label{Sec: Switching time} Of great importance from a practical point of view is the relaxation time $\tau$ required for the output intensity to approach its steady-state value after the input intensity has changed. If this time is much shorter than the characteristic time of changing the input intensity, then the output signal will adiabatically follow it, remaining close to the steady-state level at all times. Only in the limit of short $\tau$ can an abrupt switching from low to high transmittivity be realized. This especially concerns the region in the vicinity of the switching thresholds (see Fig.~\ref{Fig: Hysteresis}). In other words, the relaxation time $\tau$ limits the use of the optical bistable element as an instantaneous switch. 
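The interplay between the S-shaped steady state and the two thresholds can be illustrated with the textbook mean-field model of absorptive bistability, $y = x\,[1 + 2C/(1+x^2)]$, which is only a simplified stand-in for the steady-state equation of the present disordered-aggregate model; here $C$ plays the role of the cooperative number, and bistability requires $C > 4$. A minimal numerical sketch:

```python
import numpy as np

def transmitted_roots(y, C):
    """Real, positive roots x of the mean-field bistability relation
    y = x*(1 + 2*C/(1 + x**2)), rewritten as the cubic
    x**3 - y*x**2 + (1 + 2*C)*x - y = 0."""
    roots = np.roots([1.0, -y, 1.0 + 2.0 * C, -y])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(r for r in real if r > 0)

C = 10.0  # plays the role of the cooperative number; bistability needs C > 4
print(transmitted_roots(10.0, C))   # three branches: inside the bistable range
print(transmitted_roots(20.0, C))   # one branch: above the upper threshold
```

Inputs inside the bistable window return three positive roots (lower branch, unstable intermediate branch, upper branch), while inputs outside it return a single root, mirroring the behavior of the hysteresis sweep.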
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{REAL_NEW_Figure_Example_Switching.eps} \end{center} \caption{Kinetics of the transmitted field intensity $I_{\mathrm{out}} = |\Omega|^2/(\gamma_0\sigma^*)$ approaching its stationary value (dashed line) after the incident field with intensity $I_{\mathrm{in}} = |\Omega_i|^2/(\gamma_0\sigma^*) = 150$ is turned on abruptly at $t = 0$. The value $I_{\mathrm{in}} = 150$ exceeds the upper switching threshold $I_{\mathrm{in}}^c = 82.16$. The other parameters were chosen as in Fig.~\ref{Fig: Hysteresis}.} \label{Fig: Example Switching} \end{figure} Motivated by the above observations, we performed a study of the relaxation time $\tau$. Figure~\ref{Fig: Example Switching} shows an example of how the transmitted field intensity approaches its stationary value when an input field with intensity $I_{\mathrm{in}} = 150$ is instantaneously switched on at $t=0$. This field is above the upper switching threshold $I_{\mathrm{in}}^c = 82.16$. Calculations were carried out for the set of parameters of Fig.~\ref{Fig: Hysteresis}. As is observed, for the set of parameters used, the output intensity stays low during a waiting time of about $20/\sigma^*$, before it rapidly (on a time scale much shorter than $20/\sigma^*$) increases to its steady-state value. This behavior allows one to define $\tau$ as the time the output intensity takes to reach its first peak ($17.3/\sigma^*$ in the current example). \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{REAL_NEW_Figure_Switching_Time.eps} \end{center} \caption{Relaxation time $\tau$ as a function of the excess input intensity $I_{\mathrm{in}} - I_{\mathrm{in}}^c$ at the upper switching threshold (indicated by the vertical dotted line). 
$\tau$ was calculated by abruptly turning on the incoming field at $t=0$, and waiting until the transmitted field intensity $I_{\mathrm{out}}$ approaches its steady-state value (for more details, see the text). The open circles show the numerical results, while the solid line represents a best power-law fit given by Eq.~(\ref{Eq: Relaxation time}). The calculations were performed for the set of parameters of Fig.~\ref{Fig: Hysteresis}.} \label{Fig: Switching time versus I} \end{figure} Using the above definition, we calculated the relaxation time $\tau$ as a function of the excess input intensity $I_{\mathrm{in}} - I_{\mathrm{in}}^c$ at the upper switching threshold. The results are plotted in Fig.~\ref{Fig: Switching time versus I}. As is seen, $\tau$ drastically increases when $I_{\mathrm{in}}$ gets closer to $I_{\mathrm{in}}^c$. The numerical data (open circles) are well fitted by the formula \begin{equation}\label{Eq: Relaxation time} \tau = 870 \left(I_{\mathrm{in}}-I_{\mathrm{in}}^c\right)^{-0.83} \ , \end{equation} shown by the solid curve. \section{Discussion of driving parameters} \label{Sec: Driving Parameters} To get insight into the possibility of realizing optical bistability in a film of J-aggregates, we consider typical parameters for this type of system. First, we estimate the superradiant constant $\Gamma_R = (3/8\pi) \gamma_0 n_0 \lambda^2 L$, considering low-temperature experimental data for J-aggregates of polymethine dyes. For these species, typically, $\gamma_0 \approx (1/3)$ ns$^{-1}$ and $\lambda \approx 600$ nm.~\cite{deBoer90,Fidder90,Minoshima94,Moll95,Kamalov96,Scheblykin00} With this in mind and choosing $L = \lambda/2\pi$ (or $kL = 1$), we obtain the following estimate: $\Gamma_R \approx 10^{-18} n_0$ cm$^3$ ps$^{-1}$. 
This value for $L$ is easily accessible with the spin-coating method~\cite{Misawa93} and guarantees the applicability of the mean-field approach for the description of the thin film optical response.~\cite{Jarque01} The typical width of the J-band of polymethine dyes is on the order of several tens of cm$^{-1}$ or approximately 1 ps$^{-1}$ (in frequency units).~\cite{deBoer90,Fidder90,Minoshima94,Moll95,Kamalov96,Scheblykin00} Thus, for the set of parameters we chose, the number density of molecules $n_0$ must be on the order of $10^{19}$ cm$^{-3}$ to get the ratio $\Gamma_R/\sigma^*$ required for bistability to occur. This concentration is usually achieved in spin-coated films. Another option to adjust the parameters favoring bistability is to consider $J$-aggregates composed of monomers with a higher radiative constant $\gamma_0$ and a larger emission wavelength $\lambda$. From this point of view, J-aggregates of squarylium dyes may be suitable candidates.~\cite{Furuki01,Tatsuura01,Pu02} This type of aggregate, spin-coated on a substrate, shows a sharp absorption peak at $\lambda \approx 800$ nm with HWHM = 20 nm at room temperature and a fast ($\sim 100$ fs) optical response~\cite{Furuki01,Pu02} combined with a giant cubic susceptibility,~\cite{Tatsuura01} both attributed to the excitonic nature of the optical excitations. The monomer decay time has been reported to be $\sim 100$ ps,~\cite{Furuki01} although no information about the quantum yield has been presented. If we assume that this time is of radiative nature, the superradiant constant $\Gamma_R$ can be adjusted to values above the bistability threshold even for smaller concentrations of monomers in the film. On the other hand, for larger $\gamma_0$ the intensity required for switching also increases, which is not desired because of the limited photostability of most J-aggregates. 
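As a consistency check (not new physics), the order-of-magnitude estimate above can be reproduced in a few lines. The values $\gamma_0 = (1/3)$ ns$^{-1}$, $\lambda = 600$ nm and $L = \lambda/2\pi$ are taken from the text; the target ratio $\Gamma_R/\sigma^* \approx 10$ with $\sigma^* \approx 1$ ps$^{-1}$ is an illustrative assumption used to back out the required concentration:

```python
import math

# Parameters quoted in the text for polymethine-dye J-aggregates
gamma0 = (1.0 / 3.0) * 1e-3   # monomer radiative rate in ps^-1  (1/3 ns^-1)
lam = 600e-7                  # emission wavelength in cm (600 nm)
L = lam / (2.0 * math.pi)     # film thickness chosen so that kL = 1, in cm

# Gamma_R = (3 / 8 pi) * gamma0 * n0 * lambda^2 * L  ->  prefactor in cm^3 ps^-1
prefactor = (3.0 / (8.0 * math.pi)) * gamma0 * lam**2 * L
print(f"Gamma_R / n0 = {prefactor:.2e} cm^3 ps^-1")   # ~1e-18, as in the text

# Concentration needed for Gamma_R ~ 10 * sigma*, assuming sigma* ~ 1 ps^-1
n0 = 10.0 / prefactor
print(f"n0 ~ {n0:.1e} cm^-3")                         # ~1e19 cm^-3
```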
\section{Summary and concluding remarks} \label{Sec: Summary} We theoretically studied the optical response of an ultrathin film of oriented J-aggregates, with the goal of examining the possibility of bistable behavior of the system. The standard Frenkel exciton model was used for a single aggregate: an open linear chain of monomers coupled by delocalizing dipole-dipole excitation transfer interactions, in combination with uncorrelated on-site disorder, which tends to localize the exciton states. We considered a single aggregate as a meso-ensemble of two-level systems, each one composed of an $s$-like localized one-exciton state and its own ground state. The one-to-two exciton transitions have been neglected. As a tool to describe the transmission properties of the film, we employed the optical Maxwell-Bloch equations adapted for a thin film. The electric polarization of the film was calculated by making use of a joint probability distribution of exciton energies and transition dipole moments, properly taking into account the correlation properties of these two stochastic variables. The joint distribution function was calculated by numerically diagonalizing the Frenkel Hamiltonian and averaging over many disorder realizations. We derived a novel steady-state equation for the transmitted signal in terms of the joint distribution function, and demonstrated that three-valued solutions to this equation exist in a certain domain of the parameter space $(\Gamma_R,\sigma^*)$, where $\Gamma_R$ is the superradiant constant and $\sigma^*$ is the half-width-at-half-maximum of the absorption band. Our approach allowed us to generalize previous results~\cite{Malyshev00,Jarque01} to correctly account for the stochastic nature of the exciton energy and transition dipole moment. 
Using the new steady-state equation, we have found that the critical value of the so-called "cooperative number" $\Gamma_R/\sigma^*$,~\cite{Lugiato84} which governs the occurrence of bistability of the film, is higher than obtained before.~\cite{Malyshev00} Moreover, in contrast to Refs.~\onlinecite{Malyshev00} and~\onlinecite{Jarque01}, we have analyzed the switching time, which shows a dramatic increase for input intensities close to the switching point. We also found that the critical "cooperative number" $\Gamma_R/\sigma^*$ increases with $\sigma^*$, but only slightly, varying between 12 and approximately 25 within a wide range of $\sigma^*$. Estimating the parameters of our model for aggregates of polymethine dyes shows that these are promising candidates for measuring the effect. Finally, we note that the microcavity arrangement of molecular aggregates~\cite{Lidzey98,Litinskaya04,Beltyugov04,Agranovich05,Zoubi05} is also of interest for applications. During the last decade, organic microcavities have received a great deal of attention because of the strong coupling of the excitons to cavity photons, leading to giant polariton splitting in these devices.~\cite{Lidzey03} The recent observation of optical bistability in planar {\it inorganic} microcavities~\cite{Baas04} in the strong coupling regime suggests that {\it organic} microcavities can exhibit a similar behavior. \acknowledgments This work is part of the research program of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). Support was also received from NanoNed, a national nanotechnology programme coordinated by the Dutch Ministry of Economic Affairs.
{"url":"https:\/\/www.aimsciences.org\/article\/doi\/10.3934\/dcds.2016056","text":"# American Institute of Mathematical Sciences\n\nOctober\u00a0 2016,\u00a036(10):\u00a05817-5835. doi:\u00a010.3934\/dcds.2016056\n\n## On the global well-posedness to the 3-D Navier-Stokes-Maxwell system\n\n 1 Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106 2 Department of Mathematics, Nanjing University, Nanjing 210093\n\nReceived\u00a0 September 2015 Revised\u00a0 March 2016 Published\u00a0 July 2016\n\nThe present paper is devoted to the well-posedness issue of solutions of a full system of the $3$-$D$ incompressible magnetohydrodynamic(MHD) equations. By means of Littlewood-Paley analysis we prove the global well-posedness of solutions in the Besov spaces $\\dot{B}_{2,1}^\\frac1{2}\\times B_{2,1}^\\frac3{2}\\times B_{2,1}^\\frac3{2}$ provided the norm of initial data is small enough in the sense that \\begin{align*} \\big(\\|u_0^h\\|_{\\dot{B}_{2,1}^\\frac1{2}} +\\|E_0\\|_{B_{2,1}^\\frac{3}{2}}+\\|B_0\\|_{B_{2,1}^\\frac{3}{2}}\\big)\\exp \\Big\\{\\frac{C_0}{\\nu^2}\\|u_0^3\\|_{\\dot{B}_{2,1}^\\frac1{2}}^2\\Big\\}\\leq c_0, \\end{align*} for some sufficiently small constant $c_0.$\nCitation: Gaocheng Yue, Chengkui Zhong. On the global well-posedness to the 3-D Navier-Stokes-Maxwell system. Discrete & Continuous Dynamical Systems, 2016, 36 (10) : 5817-5835. doi: 10.3934\/dcds.2016056\n##### References:\n\nshow all references\n\n##### References:\n [1] Jishan Fan, Yueling Jia. Local well-posedness of the full compressible Navier-Stokes-Maxwell system with vacuum. Kinetic & Related Models, 2018, 11 (1) : 97-106. doi: 10.3934\/krm.2018005 [2] Zhichun Zhai. Well-posedness for two types of generalized Keller-Segel system of chemotaxis in critical Besov spaces. Communications on Pure & Applied Analysis, 2011, 10 (1) : 287-308. doi: 10.3934\/cpaa.2011.10.287 [3] Qunyi Bie, Qiru Wang, Zheng-An Yao. 
On the well-posedness of the inviscid Boussinesq equations in the Besov-Morrey spaces. Kinetic & Related Models, 2015, 8 (3) : 395-411. doi: 10.3934\/krm.2015.8.395 [4] Hartmut Pecher. Almost optimal local well-posedness for the Maxwell-Klein-Gordon system with data in Fourier-Lebesgue spaces. Communications on Pure & Applied Analysis, 2020, 19 (6) : 3303-3321. doi: 10.3934\/cpaa.2020146 [5] Radjesvarane Alexandre, Mouhamad Elsafadi. Littlewood-Paley theory and regularity issues in Boltzmann homogeneous equations II. Non cutoff case and non Maxwellian molecules. Discrete & Continuous Dynamical Systems, 2009, 24 (1) : 1-11. doi: 10.3934\/dcds.2009.24.1 [6] Zhong Tan, Leilei Tong. Asymptotic behavior of the compressible non-isentropic Navier-Stokes-Maxwell system in $\\mathbb{R}^3$. Kinetic & Related Models, 2018, 11 (1) : 191-213. doi: 10.3934\/krm.2018010 [7] Jishan Fan, Fucai Li, Gen Nakamura. Convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations in a bounded domain. Kinetic & Related Models, 2016, 9 (3) : 443-453. doi: 10.3934\/krm.2016002 [8] Weike Wang, Xin Xu. Large time behavior of solution for the full compressible navier-stokes-maxwell system. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2283-2313. doi: 10.3934\/cpaa.2015.14.2283 [9] Xiaofeng Hou, Limei Zhu. Serrin-type blowup criterion for full compressible Navier-Stokes-Maxwell system with vacuum. Communications on Pure & Applied Analysis, 2016, 15 (1) : 161-183. doi: 10.3934\/cpaa.2016.15.161 [10] Jingjing Zhang, Ting Zhang. Local well-posedness of perturbed Navier-Stokes system around Landau solutions. Electronic Research Archive, 2021, 29 (4) : 2719-2739. doi: 10.3934\/era.2021010 [11] Daoyuan Fang, Ruizhao Zi. On the well-posedness of inhomogeneous hyperdissipative Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3517-3541. doi: 10.3934\/dcds.2013.33.3517 [12] Reinhard Racke, J\u00fcrgen Saal. 
Hyperbolic Navier-Stokes equations I: Local well-posedness. Evolution Equations & Control Theory, 2012, 1 (1) : 195-215. doi: 10.3934\/eect.2012.1.195 [13] Matthias Hieber, Sylvie Monniaux. Well-posedness results for the Navier-Stokes equations in the rotational framework. Discrete & Continuous Dynamical Systems, 2013, 33 (11&12) : 5143-5151. doi: 10.3934\/dcds.2013.33.5143 [14] Xiaoping Zhai, Yongsheng Li, Wei Yan. Global well-posedness for the 3-D incompressible MHD equations in the critical Besov spaces. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1865-1884. doi: 10.3934\/cpaa.2015.14.1865 [15] Yao Nie, Jia Yuan. The Littlewood-Paley $pth$-order moments in three-dimensional MHD turbulence. Discrete & Continuous Dynamical Systems, 2021, 41 (7) : 3045-3062. doi: 10.3934\/dcds.2020397 [16] Minghua Yang, Jinyi Sun. Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1617-1639. doi: 10.3934\/cpaa.2017078 [17] Jinyi Sun, Zunwei Fu, Yue Yin, Minghua Yang. Global existence and Gevrey regularity to the Navier-Stokes-Nernst-Planck-Poisson system in critical Besov-Morrey spaces. Discrete & Continuous Dynamical Systems - B, 2021, 26 (6) : 3409-3425. doi: 10.3934\/dcdsb.2020237 [18] Quanrong Li, Shijin Ding. Global well-posedness of the Navier-Stokes equations with Navier-slip boundary conditions in a strip domain. Communications on Pure & Applied Analysis, 2021, 20 (10) : 3561-3581. doi: 10.3934\/cpaa.2021121 [19] Hongmei Cao, Hao-Guang Li, Chao-Jiang Xu, Jiang Xu. Well-posedness of Cauchy problem for Landau equation in critical Besov space. Kinetic & Related Models, 2019, 12 (4) : 829-884. doi: 10.3934\/krm.2019032 [20] Jianjun Yuan. On the well-posedness of Maxwell-Chern-Simons-Higgs system in the Lorenz gauge. Discrete & Continuous Dynamical Systems, 2014, 34 (5) : 2389-2403. 
doi: 10.3934\/dcds.2014.34.2389\n\n2020\u00a0Impact Factor:\u00a01.392","date":"2021-12-03 04:05:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6128032803535461, \"perplexity\": 7200.8279045408435}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964362589.37\/warc\/CC-MAIN-20211203030522-20211203060522-00317.warc.gz\"}"}
The Valley21 project speaks to the tradition of living with nature in an ancient landscape and an ancient way of life. A few kilometers from the city of Brașov, it will be the first truly ecologically friendly second-home complex in Romania, with a unique condominium concept. It is a place where people can seek privacy in their homes with their family, or take part in the community of owners to share good times and common governance over the space of their Valley. Valley21, through its executing company, Dalghiaș Development srl., is sponsoring a process that will lead to a new strategic plan for the municipality of Vama Buzâului, to be approved by the mayor and council in 2016. The plan is about marketing the location, and about ensuring that people will actually gain from the development through higher prosperity and grass-roots empowerment. For millennia, Valley21 has been landscaped by nature and by flocks of sheep. This has formed a new ecology and shaped a park-like environment with meadows and patches of forest. The masterplan follows what we found: a landscape formed by the symbiosis of mankind and nature. The design of a Masterplan is the result of this complexity, but the complexity is not the result, just the starting point. Our general approach is to condense all the information we have into a conceptual work, an operation that tries to bring all the aspects to an aesthetic level. The painted, parametric line leads the design and strengthens the decisions into one vision. On one side there are numbers, costs and technical issues to solve; on the other, the characterization, landscape details, lifestyles and qualities that we want to bring. These elements were never treated in separate compartments; they were taken on by the decision team and the design team together. In fact, the Masterplan follows the guidelines of the Joint Venture agreement between Dalghiaș Development srl. 
and the Municipality of Vama Buzâului, which requires splitting the realisation and the use of the land into different expansion phases, the first being foundational and preparatory for the next two. It is no coincidence that the main development areas of the Masterplan are the three meadows embraced by the forest. The same strategy was also chosen at a smaller scale, defining groups of dwellings in clusters with the objective of reducing the number of gravel roads, optimizing the infrastructure and keeping only the advantages of the Neighborhood Unit, while maintaining the same high privacy, with almost 1000 sqm around each dwelling. Terra House is one of the two typologies chosen to be a landmark for the Valley. We did not investigate just a shape or a facade that could fit into a pristine landscape. Every time, step by step, we put the individual and his surroundings at the center of the concept. The design aims to achieve a total and constant perception of the landscape, as if you were outside with the sun, rain and snow, but without sacrificing comfort and well-being. In order to translate this feeling into architecture, the house is partly built into the earth, which also gives protection from extreme temperatures as a climatic strategy. Solis House has the same conceptual foundation as the Terra House, but with this typology we want to offer potential clients a two-level house, with a second floor designed as a recognizable iconic shape, entirely out of the earth, and a ground-floor sleeping area integrated into the natural slopes, with a free facade looking into the landscape. The upper volume is like a telescope, oriented to catch the best view and privacy conditions. The design is a contemporary reinterpretation of the traditional shepherd houses, both in its materials, using wood for structure and cladding, and in its shape. 
The link with tradition coexists with a modern sense of living the domestic space, underlined from the outside by wide glass openings. The Solis House was chosen as the "first prototype house"; it has been built and is operative. It has also obtained the "Green Homes" certification released by the Romania Green Building Council, for its low energy consumption and for the eco-friendly materials used, healthy for the end users. The Hospitality Center is a meeting place, a place that needs to surprise and inspire the visitors who come there for the first time, as well as all the customers of Valley21. In the Hospitality Center you can experience landscape, nature and water, local food and production, sound, music, theater and land art. The HC area is organized as a village, a cluster made of three clear volumes, each watching a different direction, hugged by a common path. The result is a hierarchy of volumes that creates an open common area, an iconic sign on the land, clearly unique. All the paths create a promenade into nature, suggesting particular relations and activities with the surroundings and the buildings. Starting as simple extensions of the internal activities (like the terrace of the café), the signs of the catwalks become independent external functions, like natural pools, pergolas and an open-air theater resting on the slope. The rhomboidal shapes of the main volumes are pure, monolithic geometries in the pristine landscape, which underline the view to the river, optimize solar capture and invite the guests onto the wide terraces and into the internal spaces. The design of the dwelling has a long history of prototypes with different numbers of rooms, footprints, construction systems, facades and materials, but always with a heartfelt feeling of connectedness with the place.
{"url":"https:\/\/www.physicsforums.com\/threads\/solving-x-ax-g-using-undetermined-coefficients.401688\/","text":"# Solving x'=Ax+g using undetermined coefficients\n\n1. May 8, 2010\n\n### Jamin2112\n\n1. The problem statement, all variables and given\/known data\n\nProblem #4 from here: http:\/\/www.math.washington.edu\/~jtittelf\/ExtraCredit.pdf\n\n2. Relevant equations\n\n?\n\n3. The attempt at a solution\n\nThere's only one example of using undetermined coefficients in my textbook. I'm not sure where to start. x = t2a + tb + c ?????? That's just a ball park guess, since then x' will have just have constant vectors and vectors multiplied by t, like in the problem.\n\n2. May 8, 2010\n\n### gabbagabbahey\n\nI'd start by finding the eigenvalues and eigenvectors of $$\\begin{pmatrix}2 & 2 \\\\ 1 & 3\\end{pmatrix}$$....\n\n3. May 8, 2010\n\n### Jamin2112\n\nFine. det(AI)=0 ----> (2-\u00df)*(3-\u00df)-2*1 = 0 ----> \u00df2 - 5\u00df + 4 = 0 ----> (\u00df-4)(\u00df-1)=0 ----> \u00df=4, 1\n\n-----> \u00df(1) = (1 1)T, B(2)=(-2 1)T\n\nAh-ha! So I know part of the solution is e4t(1 1)T + et(-2 1)T.\n\nMaybe I assume the full solution is e4t(1 1)T + et(-2 1)T + at2 + bt + c????\n\n4. May 8, 2010\n\n### gabbagabbahey\n\nYou don't get the zero vector for your first eigenvector...show your calculations for that part\n\n5. May 8, 2010\n\n(see edit)\n\n6. May 8, 2010\n\n### gabbagabbahey\n\nClose...I think you want to assume the solution is of the form\n\n$$\\textbf{x}(t)=c_1e^{4t}\\begin{pmatrix} 1 \\\\ 1\\end{pmatrix}+c_2e^{t}\\begin{pmatrix} -2 \\\\ 1\\end{pmatrix}+\\textbf{a}t+\\textbf{b}$$\n\nThe $c_1$ and $c_2$ are needed since any linear combination of your two independent homogeneous solutions will also be a solution to the homogeneous equation. And I don't think you need a $t^2$ term for your particular solution.\n\n7. May 8, 2010\n\n### Jamin2112\n\nIf you just have at + b then you'll just have a when you take the derivative. 
Ultimately I'm trying up an x whose derivative looks something like it does in the problem. Right? (t t)T = t(1 1)T.\n\n8. May 8, 2010\n\n### gabbagabbahey\n\nRight, but $$\\begin{pmatrix}2 & 2 \\\\ 1 & 3\\end{pmatrix}\\textbf{x}(t)$$ will also have a $t$ in it which should cancel the at for an appropriate choice of a.\n\n9. May 8, 2010\n\n### Jamin2112\n\na and b have constant entries, right?\n\n10. May 8, 2010\n\n### gabbagabbahey\n\nThey'd better!\n\n11. May 8, 2010\n\n### gabbagabbahey\n\nKeep in mind, that if you get an equation like Ma=-a for some matrix M, either -1 is one of its eigenvalues or a=0\n\n12. May 8, 2010\n\n### Jamin2112\n\nThanks, buddy. One more quick question!\n\nOn problem 3, I have a 3x3 matrix eigenvalues -2,-1 and corresponding eigenvectors (1 0 0)T and (0 1 0)T. Now, I know what with a 2x2 matrix, if I have only one eigenvector, giving me a solution x=eat$, then I guess a second solution x=teat$ + eat#. With a 3x3 matrix, do I try that with each of my two eigenvectors? Doing so seems to give me a solution for one but not the other.\n\n13. 
May 8, 2010\n\n### gabbagabbahey\n\nThe easiest way to solve problem 3 is to not do it in matrix form...just let $$\\textbf{x}(t)=\\begin{pmatrix}x_1(t) \\\\ x_2(t) \\\\ x_3(t) \\end{pmatrix}$$ and solve the 3 DEs you get.","date":"2017-08-18 12:14:44","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6268222332000732, \"perplexity\": 1685.7531643324396}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886104634.14\/warc\/CC-MAIN-20170818102246-20170818122246-00002.warc.gz\"}"}
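The eigenvalue bookkeeping worked out in the forum thread above is easy to verify numerically. The sketch below is not part of the original thread; it checks the eigenpairs of the matrix and, assuming (as the thread suggests) an inhomogeneous term g(t) = t(1,1)^T, the undetermined-coefficients ansatz x_p = a*t + b:

```python
import numpy as np

# Coefficient matrix from the thread: x' = A x + g(t)
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

# Homogeneous part: eigenvalues/eigenvectors of A
vals, vecs = np.linalg.eig(A)
order = np.argsort(vals)
vals, vecs = vals[order], vecs[:, order]
# vals -> [1, 4]; columns of vecs are proportional to (-2, 1)^T and (1, 1)^T

# Particular solution by undetermined coefficients, assuming (as the thread
# suggests) g(t) = t * (1, 1)^T and the ansatz x_p = a*t + b.
# Matching powers of t in  a = A(a*t + b) + t*(1, 1)^T  gives
#   A a = -(1, 1)^T   and   A b = a
g1 = np.array([1.0, 1.0])
a = np.linalg.solve(A, -g1)
b = np.linalg.solve(A, a)

# Verify: d/dt x_p equals A x_p + g at an arbitrary time
t = 3.7
assert np.allclose(a, A @ (a * t + b) + t * g1)
```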
<?php

namespace Newscoop\Entity;

use Doctrine\ORM\Mapping AS ORM;
use Newscoop\Utils\Validation;
use Newscoop\Entity\Entity;

/**
 * @ORM\Entity(repositoryClass="Newscoop\Entity\Repository\ThemeRepository")
 * @ORM\Table(name="Theme")
 */
class Theme extends AbstractEntity
{
    /** @var string */
    protected $path;

    /** @var string */
    protected $name;

    /** @var string */
    protected $designer;

    /** @var string */
    protected $version;

    /** @var string */
    protected $minorNewscoopVersion;

    /** @var string */
    protected $description;

    /**
     * Provides the path of the theme.
     *
     * @return string
     *   The path of the theme.
     */
    public function getPath()
    {
        return $this->path;
    }

    /**
     * Set the path of the theme.
     *
     * @param string $path
     *   The path of the theme, must not be null or empty.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setPath($path)
    {
        Validation::notEmpty($path, 'path');
        $this->path = str_replace('\\', '/', $path);

        return $this;
    }

    /**
     * Provides the name of the theme, a user-friendly name used for displaying it in the UI.
     *
     * @return string
     *   The name of the theme.
     */
    public function getName()
    {
        return $this->name;
    }

    /**
     * Set the name of the theme, a user-friendly name used for displaying it in the UI.
     *
     * @param string $name
     *   The name of the theme, must not be null or empty.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setName($name)
    {
        Validation::notEmpty($name, 'name');
        $this->name = $name;

        return $this;
    }

    /**
     * Provides the designer name of the theme.
     *
     * @return string
     *   The designer name of the theme.
     */
    public function getDesigner()
    {
        return $this->designer;
    }

    /**
     * Set the designer name of the theme.
     *
     * @param string $designer
     *   The designer name of the theme, must not be null or empty.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setDesigner($designer)
    {
        Validation::notEmpty($designer, 'designer');
        $this->designer = $designer;

        return $this;
    }

    /**
     * Provides the version of the theme; this has to be a well-formatted version name like '1.3'.
     *
     * @return string
     *   The version of the theme.
     */
    public function getVersion()
    {
        return $this->version;
    }

    /**
     * Set the version of the theme; this has to be a well-formatted version name like '1.3'.
     *
     * @param string $version
     *   The version of the theme, must not be null or empty.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setVersion($version)
    {
        Validation::notEmpty($version, 'version');
        $this->version = $version;

        return $this;
    }

    /**
     * Provides the minimum Newscoop version for this theme; this has to be a well-formatted version name like '3.6'.
     *
     * @return string
     *   The minimum Newscoop version of the theme.
     */
    public function getMinorNewscoopVersion()
    {
        return $this->minorNewscoopVersion;
    }

    /**
     * Set the minimum Newscoop version for this theme; this has to be a well-formatted version name like '3.6'.
     *
     * @param string $minorNewscoopVersion
     *   The minimum Newscoop version of the theme, must not be null or empty.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setMinorNewscoopVersion($minorNewscoopVersion)
    {
        Validation::notEmpty($minorNewscoopVersion, 'minorNewscoopVersion');
        $this->minorNewscoopVersion = $minorNewscoopVersion;

        return $this;
    }

    /**
     * Provides the description of the theme.
     *
     * @return string
     *   The description of the theme.
     */
    public function getDescription()
    {
        return $this->description;
    }

    /**
     * Set the description of the theme.
     *
     * @param string $description
     *   The description of the theme.
     *
     * @return Newscoop\Entity\Theme
     *   This object for chaining purposes.
     */
    public function setDescription($description)
    {
        $this->description = $description;

        return $this;
    }

    public function isInstalled()
    {
        return true;
    }

    public function getInstalledVersion()
    {
        return 1;
    }

    public function toObject()
    {
        return (object) array(
            "id" => (int) $this->getId(),
            "name" => (string) $this->getName(),
            "description" => (string) $this->getDescription(),
            "designer" => (string) $this->getDesigner(),
            "path" => (string) $this->getPath(),
            "version" => (string) $this->getVersion(),
            "minorNewscoopVersion" => (string) $this->getMinorNewscoopVersion(),
            "installedVersion" => (string) $this->getInstalledVersion()
        );
    }
}
# Sous-Solutions de Problèmes aux Limites en Dimension 1

Positivity, Volume 1 (2), 19 pages. Springer Journals, Oct 14, 2004. ISSN 1385-1292, eISSN 1572-9281, DOI 10.1023/A:1009786324575. Subjects: Mathematics; Fourier Analysis; Operator Theory; Potential Theory; Calculus of Variations and Optimal Control; Optimization; Econometrics.

### Abstract

We consider a boundary value problem

$$-v'' = f,\quad v \in \phi(u) \text{ on } ]0,1[, \qquad E\bigl(v(0), v(1)\bigr) \ni \bigl(v'(0), -v'(1)\bigr),$$

where $\phi$ is a maximal monotone operator in $\mathbb{R}$ and $E$ is a multivalued operator in $\mathbb{R}^2$ with non-decreasing resolvent. We introduce a condition on $E$ which ensures that the operator in $L^1(0,1)$ associated to this problem has a non-decreasing resolvent, and characterise the subsolutions of the problem. We give different examples of operators $E$ satisfying this condition.
The Integrity Party of Aotearoa New Zealand (TIPANZ) is an unregistered political party in New Zealand. It is a progressive-centrist party, with an ideology of Hauora (well-being), equality, and integrity. It is led by Helen Cartwright with Troy Mihaka as deputy.

Foundation

The party was founded by former Sustainable New Zealand Party secretary Helen Cartwright and former Wellington local body candidate Troy Mihaka. Mihaka stood for election to Wellington City Council in 2019 for the centre-right Wellington Party.

2020 general election

The party intended to run both list and electorate candidates in New Zealand's 2020 election, but did not register, so it was unable to receive party votes. It ran two electorate candidates: Cartwright in Mana and Mihaka in Rongotai. In July 2020, Mihaka's candidate signs were painted with racist abuse, apparently because the authorisation statement was written in Te Reo. Cartwright said in September 2020 that "If 100 people vote for me, I will be rapt; if 1000 people vote for me, I will do somersaults." Neither candidate was successful; Cartwright received 360 votes, coming 7th, and Mihaka received 162, coming 8th.
This document defines a high level roadmap for Rook development and upcoming releases. The features and themes included in each milestone are optimistic in the sense that some do not have clear owners yet. Community and contributor involvement is vital for successfully implementing all desired items for each release. We hope that the items listed below will inspire further engagement from the community to keep Rook progressing and shipping exciting and valuable features.

Any dates listed below and the specific issues that will ship in a given milestone are subject to change but should give a general idea of what we are planning. See the [GitHub project boards](https://github.com/rook/rook/projects) for the most up-to-date issues and their status.

## Rook Ceph 1.11

The following high level features are targeted for Rook v1.11 (January 2023). For more detailed project tracking see the [v1.11 board](https://github.com/rook/rook/projects/27).

* Use more specific cephx accounts to better differentiate the source of Ceph configuration changes [#10169](https://github.com/rook/rook/issues/10169)
* Enable restoring a cluster from the OSDs after all mons are lost [#7049](https://github.com/rook/rook/issues/7049)
* Automate node fencing for application failover in some scenarios [#1507](https://github.com/rook/rook/issues/1507)
* Support RBD mirroring across clusters with overlapping networks [#11070](https://github.com/rook/rook/issues/11070)
* OSD encryption on partitions [#10984](https://github.com/rook/rook/issues/10984)
* Object Store
  * Service account authentication with the Vault agent [#9872](https://github.com/rook/rook/pull/9872)
  * Add alpha support for COSI (Container object storage interface) with K8s 1.25 [#7843](https://github.com/rook/rook/issues/7843)
* Support the immutable object cache [#11162](https://github.com/rook/rook/issues/11162)
* Ceph-CSI v3.8
  * CephFS encryption support [#3460](https://github.com/ceph/ceph-csi/pull/3460)
* Update the operator sdk
for internal CSV generation [#10141](https://github.com/rook/rook/issues/10141)
* [Krew plugin](https://github.com/rook/kubectl-rook-ceph) features planned in the 1.11 time frame
  * Recover the CephCluster CR after accidental deletion [#68](https://github.com/rook/kubectl-rook-ceph/issues/68)
  * Collect details to help troubleshoot the CSI driver [#69](https://github.com/rook/kubectl-rook-ceph/issues/69)
  * Restore mon quorum from OSDs after all mons are lost [#7049](https://github.com/rook/rook/issues/7049)

## Themes

The general areas for improvements include the following, though they may not be committed to a release.

* OSD encryption key rotation [#7925](https://github.com/rook/rook/issues/7925)
* Strengthen approach for OSDs on PVCs for a more seamless K8s management of underlying storage
* CSI Driver improvements tracked in the [CSI repo](https://github.com/ceph/ceph-csi)
* Support for Windows nodes
Peter Petersen, Linear Algebra. Undergraduate Texts in Mathematics. Springer, 2012. DOI 10.1007/978-1-4614-3612-6.

Undergraduate Texts in Mathematics

Series Editors: S. Axler and K. A. Ribet

For further volumes: http://www.springer.com/series/666

Peter Petersen, Department of Mathematics, University of California, Los Angeles, CA, USA

ISSN 0172-6056; ISBN 978-1-4614-3611-9; e-ISBN 978-1-4614-3612-6

Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2012938237

© Springer Science+Business Media New York 2012

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)

# Preface

This book covers the aspects of linear algebra that are included in most advanced undergraduate texts. All the usual topics from complex vector spaces, complex inner products, the spectral theorem for normal operators, dual spaces, quotient spaces, the minimal polynomial, the Jordan canonical form, and the Frobenius (or rational) canonical form are explained. A chapter on determinants has been included as the last chapter, but they are not used in the text as a whole. A different approach to linear algebra that does not use determinants can be found in [Axler].

The expected prerequisites for this book would be a lower division course in matrix algebra. A good reference for this material is [Bretscher]. In the context of other books on linear algebra, it is my feeling that this text is about on a par in difficulty with books such as [Axler, Curtis, Halmos, Hoffman-Kunze, Lang]. If you want to consider more challenging texts, I would suggest looking at the graduate level books [Greub, Roman, Serre].

Chapter 1 contains all of the basic material on abstract vector spaces and linear maps. The dimension formula for linear maps is the theoretical highlight. To facilitate some more concrete developments, we cover matrix representations, change of basis, and Gauss elimination. Linear independence, which is usually introduced much earlier in linear algebra, only comes towards the end of the chapter. But it is covered in great detail there. We have also included two sections on dual spaces and quotient spaces that can be skipped.
Chapter 2 is concerned with the theory of linear operators. Linear differential equations are used to motivate the introduction of eigenvalues and eigenvectors, but this motivation can be skipped. We then explain how Gauss elimination can be used to compute the eigenvalues as well as the eigenvectors of a matrix. This is used to understand the basics of how and when a linear operator on a finite-dimensional space is diagonalizable. We also introduce the minimal polynomial and use it to give the classic characterization of diagonalizable operators. In the later sections we give a fairly simple proof of the Cayley-Hamilton theorem and the cyclic subspace decomposition. This quickly leads to the Frobenius canonical form. This canonical form is our most general result on how to find a simple matrix representation for a linear map in case it is not diagonalizable. The antepenultimate section explains how the Frobenius canonical form implies the Jordan-Chevalley decomposition and the Jordan-Weierstrass canonical form. In the last section, we present a quick and elementary approach to the Smith normal form. This form allows us to calculate directly all of the similarity invariants of a matrix using basic row and column operations on matrices with polynomial entries.

Chapter 3 includes material on inner product spaces. The Cauchy-Schwarz inequality and its generalization to Bessel's inequality and how they tie in with orthogonal projections form the theoretical centerpiece of this chapter. Along the way, we cover standard facts about orthonormal bases and their existence through the Gram-Schmidt procedure as well as orthogonal complements and orthogonal projections. The chapter also contains the basic elements of adjoints of linear maps and some of its uses to orthogonal projections as this ties in nicely with orthonormal bases. We end the chapter with a treatment of matrix exponentials and systems of differential equations.
Chapter 4 covers quite a bit of ground on the theory of linear maps between inner product spaces. The most important result is of course the spectral theorem for self-adjoint operators. This theorem is used to establish the canonical forms for real and complex normal operators, which then gives the canonical form for unitary, orthogonal, and skew-adjoint operators. It should be pointed out that the proof of the spectral theorem does not depend on whether we use real or complex scalars nor does it rely on the characteristic or minimal polynomials. The reason for ignoring our earlier material on diagonalizability is that it is desirable to have a theory that more easily generalizes to infinite dimensions. The usual proofs that use the characteristic and minimal polynomials are relegated to the exercises. The last sections of the chapter cover the singular value decomposition, the polar decomposition, triangulability of complex linear operators (Schur's theorem), and quadratic forms.

Chapter 5 covers determinants. At this point, it might seem almost useless to introduce the determinant as we have covered the theory without needing it much. While not indispensable, the determinant is rather useful in giving a clean definition for the characteristic polynomial. It is also one of the most important invariants of a finite-dimensional operator. It has several nice properties and gives an excellent criterion for when an operator is invertible. It also comes in handy in giving a formula (Cramer's rule) for solutions to linear systems. Finally, we discuss its uses in the theory of linear differential equations, in particular in connection with the variation of parameters formula for the solution to inhomogeneous equations. We have taken the liberty of defining the determinant of a linear operator through the use of volume forms. Aside from showing that volume forms exist, this gives a rather nice way of proving all the properties of determinants without using permutations.
It also has the added benefit of automatically giving the permutation formula for the determinant and hence showing that the sign of a permutation is well defined.

An * after a section heading means that the section is not necessary for the understanding of other sections without an *.

Let me offer a few suggestions for how to teach a course using this book. My assumption is that most courses are based on 150 min of instruction per week with a problem session or two added. I realize that some courses meet three times while others only two, so I will not suggest how much can be covered in a lecture.

First, let us suppose that you, like me, teach in the pedagogically impoverished quarter system: It should be possible to teach Chap. 1, Sects. 1.2-1.13 in 5 weeks, being a bit careful about what exactly is covered in Sects. 1.12 and 1.13. Then, spend 2 weeks on Chap. 2, Sects. 2.3-2.5, possibly omitting Sect. 2.4 covering the minimal polynomial if timing looks tight. Next spend 2 weeks on Chap. 3, Sects. 3.1-3.5, and finish the course by covering Chap. 4, Sect. 4.1 as well as Exercise 9 in Sect. 4.1. This finishes the course with a proof of the Spectral Theorem for self-adjoint operators, although not the proof I would recommend for a more serious treatment.

Next, let us suppose that you teach in a short semester system, as the ones at various private colleges and universities. You could then add 2 weeks of material by either covering the canonical forms from Chap. 2, Sects. 2.6-2.8 or alternately spend 2 weeks covering some of the theory of linear operators on inner product spaces from Chap. 4, Sects. 4.1-4.5. In case you have 15 weeks at your disposal, it might be possible to cover both of these topics rather than choosing between them.

Finally, should you have two quarters, like we sometimes do here at UCLA, then you can in all likelihood cover virtually the entire text. I would certainly recommend that you cover all of Chap.
4 and the canonical form sections in Chap. 2, Sects. 2.6-2.8, as well as the chapter on determinants. If time permits, it might even be possible to include Sects. 2.2, 3.7, 4.8, and 5.8 that cover differential equations.

This book has been used to teach a bridge course on linear algebra at UCLA as well as a regular quarter length course. The bridge course was funded by a VIGRE NSF grant, and its purpose was to ensure that incoming graduate students had really learned all of the linear algebra that we expect them to know when starting graduate school.

The author would like to thank several UCLA students for suggesting various improvements to the text: Jeremy Brandman, Sam Chamberlain, Timothy Eller, Clark Grubb, Vanessa Idiarte, Yanina Landa, Bryant Mathews, Shervin Mosadeghi, and Danielle O'Donnol. I am also pleased to acknowledge NSF support from grants DMS 0204177 and 1006677. I would also like to thank Springer-Verlag for their interest and involvement in this book as well as their suggestions for improvements. Finally, I am immensely grateful to Joe Borzellino at Cal Poly San Luis Obispo who used the text several times at his institution and supplied me with numerous corrections.
# Contents

1 Basic Theory
  1.1 Induction and Well-Ordering
  1.2 Elementary Linear Algebra
  1.3 Fields
  1.4 Vector Spaces
  1.5 Bases
  1.6 Linear Maps
  1.7 Linear Maps as Matrices
  1.8 Dimension and Isomorphism
  1.9 Matrix Representations Revisited
  1.10 Subspaces
  1.11 Linear Maps and Subspaces
  1.12 Linear Independence
  1.13 Row Reduction
  1.14 Dual Spaces
  1.15 Quotient Spaces
2 Linear Operators
  2.1 Polynomials
  2.2 Linear Differential Equations
  2.3 Eigenvalues
  2.4 The Minimal Polynomial
  2.5 Diagonalizability
  2.6 Cyclic Subspaces
  2.7 The Frobenius Canonical Form
  2.8 The Jordan Canonical Form
  2.9 The Smith Normal Form
3 Inner Product Spaces
  3.1 Examples of Inner Products
    3.1.1 Real Inner Products
    3.1.2 Complex Inner Products
    3.1.3 A Digression on Quaternions
  3.2 Inner Products
  3.3 Orthonormal Bases
  3.4 Orthogonal Complements and Projections
  3.5 Adjoint Maps
  3.6 Orthogonal Projections Revisited
  3.7 Matrix Exponentials
4 Linear Operators on Inner Product Spaces
  4.1 Self-Adjoint Maps
  4.2 Polarization and Isometries
  4.3 The Spectral Theorem
  4.4 Normal Operators
  4.5 Unitary Equivalence
  4.6 Real Forms
  4.7 Orthogonal Transformations
  4.8 Triangulability
  4.9 The Singular Value Decomposition
  4.10 The Polar Decomposition
  4.11 Quadratic Forms
5 Determinants
  5.1 Geometric Approach
  5.2 Algebraic Approach
  5.3 How to Calculate Volumes
  5.4 Existence of the Volume Form
  5.5 Determinants of Linear Operators
  5.6 Linear Equations
  5.7 The Characteristic Polynomial
  5.8 Differential Equations
References
Index

# 1. Basic Theory

Peter Petersen
Department of Mathematics, University of California, Los Angeles, CA, USA

Abstract

In the first chapter, we are going to cover the definitions of vector spaces, linear maps, and subspaces. In addition, we are introducing several important concepts such as basis, dimension, direct sum, matrix representations of linear maps, and kernel and image for linear maps. We shall prove the dimension theorem for linear maps that relates the dimension of the domain to the dimensions of kernel and image. We give an account of Gauss elimination and how it ties in with the more abstract theory. This will be used to define and compute the characteristic polynomial in Chap. 2.

It is important to note that Sects. 1.13 and 1.12 contain alternate proofs of some of the important results in this chapter. As such, some people might want to go right to these sections after the discussion on isomorphism in Sect. 1.8 and then go back to the missed sections. As induction is going to play a big role in many of the proofs, we have chosen to say a few things about that topic in the first section.

## 1.1 Induction and Well-Ordering ∗

A fundamental property of the natural numbers, i.e., the positive integers $\mathbb{N} = \{1, 2, 3, \ldots\}$, that will be used throughout the book is the fact that they are well ordered.
This means that any nonempty subset $S \subset \mathbb{N}$ has a smallest element $s_{\min} \in S$ such that $s_{\min} \leq s$ for all $s \in S$. Using the natural ordering of the integers, rational numbers, or real numbers, we see that this property does not hold for those numbers. For example, the half-open interval $(0, \infty)$ does not have a smallest element.

In order to justify that the positive integers are well ordered, let $S \subset \mathbb{N}$ be nonempty and select $k \in S$. Starting with 1, we can check whether it belongs to $S$. If it does, then $s_{\min} = 1$. Otherwise, check whether 2 belongs to $S$. If $2 \in S$ and $1 \notin S$, then we have $s_{\min} = 2$. Otherwise, we proceed to check whether 3 belongs to $S$. Continuing in this manner, we must eventually find $k_0 \leq k$, such that $k_0 \in S$, but $1, 2, 3, \ldots, k_0 - 1 \notin S$. This is the desired minimum: $s_{\min} = k_0$.

We shall use the well-ordering of the natural numbers in several places in this text. A very interesting application is to the proof of the prime factorization theorem: any integer $\geq 2$ is a product of prime numbers. The proof works the following way. Let $S \subset \mathbb{N}$ be the set of numbers which do not admit a prime factorization. If $S$ is empty, we are finished; otherwise, $S$ contains a smallest element $n = s_{\min} \in S$. If $n$ has no divisors, then it is a prime number and hence has a prime factorization. Thus, $n$ must have a divisor $p > 1$. Now write $n = p \cdot q$. Since $p, q < n$, both numbers must have a prime factorization. But then also $n = p \cdot q$ has a prime factorization. This contradicts that $S$ is nonempty.

The second important idea that is tied to the natural numbers is that of induction. Sometimes, it is also called mathematical induction so as not to confuse it with the inductive method from science. The types of results that one can attempt to prove with induction always have a statement that needs to be verified for each number $n \in \mathbb{N}$. Some good examples are:
1. $1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}$.
2. Every integer $\geq 2$ has a prime factorization.
3. Every polynomial has a root.

The first statement is pretty straightforward to understand. The second is a bit more complicated, and we also note that in fact, there is only a statement for each integer $\geq 2$. This could be finessed by saying that each integer $n + 1$, $n \geq 1$, has a prime factorization. This, however, seems too pedantic and also introduces extra and irrelevant baggage by using addition. The third statement is obviously quite different from the other two. For one thing, it only stands a chance of being true if we also assume that the polynomials have degree $\geq 1$. This gives us the idea of how this can be tied to the positive integers. The statement can be paraphrased as: Every polynomial of degree $\geq 1$ has a root. Even then, we need to be more precise, as $x^2 + 1$ does not have any real roots.

In order to explain how induction works abstractly, suppose that we have a statement $P(n)$ for each $n \in \mathbb{N}$. Each of the above statements can be used as an example of what $P(n)$ can be. The induction process now works by first ensuring that the anchor statement is valid. In other words, we first check that $P(1)$ is true. We then have to establish the induction step. This means that we need to show that if $P(n-1)$ is true, then $P(n)$ is also true. The assumption that $P(n-1)$ is true is called the induction hypothesis. If we can establish the validity of these two facts, then $P(n)$ must be true for all $n$.
This follows from the well-ordering of the natural numbers. Namely, let $S = \{n : P(n) \text{ is false}\}$. If $S$ is empty, we are finished; otherwise, $S$ has a smallest element $k \in S$. Since $1 \notin S$, we know that $k > 1$. But this means that we know that $P(k-1)$ is true. The induction step then implies that $P(k)$ is true as well. This contradicts that $S$ is nonempty.

Let us see if we can use this procedure on the above statements. For 1, we begin by checking that $1 = \frac{1(1+1)}{2}$. This is indeed true. Next, we assume that

$$1 + 2 + 3 + \cdots + (n-1) = \frac{(n-1)n}{2},$$

and we wish to show that

$$1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}.$$

Using the induction hypothesis, we see that

$$\begin{aligned} \left(1 + 2 + 3 + \cdots + (n-1)\right) + n &= \frac{(n-1)n}{2} + n \\ &= \frac{(n-1)n + 2n}{2} \\ &= \frac{(n+1)n}{2}. \end{aligned}$$

Thus, we have shown that $P(n)$ is true provided $P(n-1)$ is true.

For 2, we note that two is a prime number and hence has a prime factorization. Next, we have to prove that $n$ has a prime factorization if $(n-1)$ does. This, however, does not look like a very promising thing to show. In fact, we need a stronger form of induction to get this to work.
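As an aside, statement 1 can be spot-checked numerically. The sketch below (the function names are ours, not the book's) compares the direct sum with the closed form for small values of $n$:

```python
def triangular_sum(n):
    """Direct sum 1 + 2 + ... + n."""
    return sum(range(1, n + 1))

def closed_form(n):
    """The closed form n(n+1)/2 established by induction."""
    return n * (n + 1) // 2

# Induction guarantees equality for every positive integer n;
# here we merely confirm the first hundred cases.
for n in range(1, 101):
    assert triangular_sum(n) == closed_form(n)
```

Such a check is of course no substitute for the proof: only induction covers all $n$ at once.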
The induction step in the stronger version of induction is as follows: If $P(k)$ is true for all $k < n$, then $P(n)$ is also true. Thus, the induction hypothesis is much stronger, as we assume that all statements prior to $P(n)$ are true. The proof that this form of induction works is virtually identical to the above justification.

Let us see how this stronger version can be used to establish the induction step for 2. Let $n \in \mathbb{N}$, and assume that all integers below $n$ have a prime factorization. If $n$ has no divisors other than 1 and $n$, it must be a prime number and we are finished. Otherwise, $n = p \cdot q$ where $p, q < n$. Whence, both $p$ and $q$ have prime factorizations by our induction hypothesis. This shows that also $n$ has a prime factorization.

We already know that there is trouble with statement 3. Nevertheless, it is interesting to see how an induction proof might break down. First, we note that all polynomials of degree 1 look like $ax + b$ and hence have $-\frac{b}{a}$ as a root. This anchors the induction. To show that all polynomials of degree $n$ have a root, we need to first decide which of the two induction hypotheses are needed. There really is not anything wrong by simply assuming that all polynomials of degree $< n$ have a root. In this way, we see that at least any polynomial of degree $n$ that is the product of two polynomials of degree $< n$ must have a root. This leaves us with the so-called prime or irreducible polynomials of degree $n$, namely, those polynomials that are not divisible by polynomials of degree $\geq 1$ and $< n$. Unfortunately, there is not much we can say about these polynomials. So induction does not seem to work well in this case. All is not lost however.
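Before returning to statement 3, note that the strong-induction argument for statement 2 is effectively an algorithm: find a divisor, split, and recurse on the strictly smaller factors. A minimal sketch (our own naming, not the book's):

```python
def prime_factorization(n):
    """Return a list of primes whose product is n, for an integer n >= 2.

    Mirrors the strong-induction proof: if n has a divisor p with
    1 < p < n, write n = p * q and recurse on p and q, which are both
    smaller than n; otherwise n itself is prime.
    """
    for p in range(2, n):
        if n % p == 0:
            q = n // p
            return prime_factorization(p) + prime_factorization(q)
    return [n]  # no proper divisor found, so n is prime
```

The recursion terminates precisely because the factors are strictly smaller than $n$ — the same observation that makes the strong induction hypothesis applicable.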
A careful inspection of the "proof" of 3 can be modified to show that any polynomial has a prime factorization. This is studied further in Sect. 2.1.

The type of statement and induction argument that we will encounter most often in this text is definitely of the third type. That is to say, it certainly will never be of the very basic type seen in statement 1. Nor will it be as easy as in statement 2. In our cases, it will be necessary to first find the integer that is used for the induction, and even then, there will be a whole collection of statements associated with that integer. This is what is happening in the third statement. There, we first need to select the degree as our induction integer. Next, there are still infinitely many polynomials to consider when the degree is fixed. Finally, whether or not induction will work or is the "best" way of approaching the problem might actually be questionable. The following statement is fairly typical of what we shall see: Every subspace of $\mathbb{R}^n$ admits a basis with $\leq n$ elements. The induction integer is the dimension $n$, and for each such integer, there are infinitely many subspaces to be checked. In this case, an induction proof will work, but it is also possible to prove the result without using induction.

## 1.2 Elementary Linear Algebra

Our first picture of what vectors are and what we can do with them comes from viewing them as geometric objects in the plane and space. Simply put, a vector is an arrow of some given length drawn in the plane. Such an arrow is also known as an oriented line segment. We agree that vectors that have the same length and orientation are equivalent no matter where they are based. Therefore, if we base them at the origin, then vectors are determined by their endpoints. Using a parallelogram, we can add such vectors (see Fig. 1.1). We can also multiply them by scalars. If the scalar is negative, we are changing the orientation.
The size of the scalar determines how much we are scaling the vector, i.e., how much we are changing its length (see Fig. 1.2).

Fig. 1.1 Vector addition

Fig. 1.2 Scalar multiplication

This geometric picture can also be taken to higher dimensions. The idea of scaling a vector does not change if it lies in space, nor does the idea of how to add vectors, as two vectors must lie either on a line or more generically in a plane. The problem comes when we wish to investigate these algebraic properties further. As an example, think about the associative law $$\left(x + y\right) + z = x + \left(y + z\right).$$ Clearly, the proof of this identity changes geometrically from the plane to space. In fact, if the three vectors do not lie in a plane and therefore span a parallelepiped, then the sum of these three vectors, regardless of the order in which they are added, is the diagonal of this parallelepiped. The picture of what happens when the vectors lie in a plane is simply a projection of the three-dimensional picture onto the plane (see Fig. 1.3).

Fig. 1.3 Associativity

The purpose of linear algebra is to clarify these algebraic issues by looking at vectors in a less geometric fashion. This has the added benefit of also allowing other spaces that do not have geometric origins to be included in our discussion. The end result is a somewhat more abstract and less geometric theory, but it has turned out to be truly useful and foundational in almost all areas of mathematics, including geometry, not to mention the physical, natural, and social sciences. Something quite different and interesting happens when we allow for complex scalars. This is seen in the plane itself, which we can interpret as the set of complex numbers. Vectors still have the same geometric meaning, but we can also "scale" them by a number like $i = \sqrt{-1}$.
The geometric picture of what happens when multiplying by $i$ is that the vector's length is unchanged, as $\left\vert i\right\vert = 1$, but it is rotated $90^{\circ}$ (see Fig. 1.2). Thus it is not scaled in the usual sense of the word. However, when we define these notions below, one will not really see any algebraic difference in what is happening. It is worth pointing out that using complex scalars is not just something one does for the fun of it; it has turned out to be quite convenient and important to allow for this extra level of abstraction. This is true not just within mathematics itself. When looking at books on quantum mechanics, it quickly becomes clear that complex vector spaces are the "sine qua non" (without which nothing) of the subject.

## 1.3 Fields

The "scalars" or numbers used in linear algebra all lie in a field. A field is a set $\mathbb{F}$ of numbers, where one has both addition $$\mathbb{F} \times \mathbb{F} \rightarrow \mathbb{F}, \quad \left(\alpha,\beta\right) \mapsto \alpha + \beta$$ and multiplication $$\mathbb{F} \times \mathbb{F} \rightarrow \mathbb{F}, \quad \left(\alpha,\beta\right) \mapsto \alpha\beta.$$ Both operations are assumed associative, commutative, etc. We shall mainly be concerned with the real numbers $\mathbb{R}$ and complex numbers $\mathbb{C}$; some examples will use the rational numbers $\mathbb{Q}$ as well. These three fields satisfy the axioms we list below.

Definition 1.3.1. A field $\mathbb{F}$ is a set whose elements are called numbers or, when used in linear algebra, scalars. The field contains two different elements 0 and 1, and we can add and multiply numbers. These operations satisfy:

1.
The associative law: $\alpha + \left(\beta + \gamma\right) = \left(\alpha + \beta\right) + \gamma$.

2. The commutative law: $\alpha + \beta = \beta + \alpha$.

3. Addition by 0: $\alpha + 0 = \alpha$.

4. Existence of negative numbers: For each $\alpha$, we can find $-\alpha$ so that $\alpha + \left(-\alpha\right) = 0$.

5. The associative law: $\alpha\left(\beta\gamma\right) = \left(\alpha\beta\right)\gamma$.

6. The commutative law: $\alpha\beta = \beta\alpha$.

7. Multiplication by 1: $\alpha 1 = \alpha$.

8. Existence of inverses: For each $\alpha \neq 0$, we can find $\alpha^{-1}$ so that $\alpha\alpha^{-1} = 1$.

9. The distributive law: $\alpha\left(\beta + \gamma\right) = \alpha\beta + \alpha\gamma$.

One can show that both 0 and 1 are uniquely defined and that the additive inverse $-\alpha$ as well as the multiplicative inverse $\alpha^{-1}$ is unique. Occasionally, we shall also use that the field has characteristic zero; this means that $$n = \underbrace{1 + \cdots + 1}_{n\text{ times}} \neq 0$$ for all positive integers $n$. Fields such as $\mathbb{F}_2 = \left\{0,1\right\}$, where $1 + 1 = 0$, clearly do not have characteristic zero. We make the assumption throughout the text that all fields have characteristic zero. In fact, there is little loss of generality in assuming that the fields we work with are the usual number fields $\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$.
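The field $\mathbb{F}_2$ mentioned above is small enough to compute in directly. The following sketch (not from the text) represents its elements as the integers 0 and 1 with operations taken mod 2:

```python
# Illustrative sketch (not from the text): arithmetic in the two-element
# field F_2 = {0, 1}, with addition and multiplication taken mod 2.
def add(a, b):
    return (a + b) % 2

def mul(a, b):
    return (a * b) % 2

assert add(1, 1) == 0   # 1 + 1 = 0, so F_2 has characteristic 2, not 0
assert mul(1, 1) == 1   # the only nonzero element is its own inverse
print("F_2 arithmetic checks pass")
```

One can verify all nine field axioms here by brute force, since there are only two elements to check.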
There are several important collections of numbers that are not fields: $$\mathbb{N} = \left\{1,2,3,\ldots\right\} \subset \mathbb{N}_0 = \left\{0,1,2,3,\ldots\right\} \subset \mathbb{Z} = \left\{0,\pm 1,\pm 2,\pm 3,\ldots\right\} = \left\{0,1,-1,2,-2,3,-3,\ldots\right\}.$$

## 1.4 Vector Spaces

Definition 1.4.1. A vector space consists of a set of vectors $V$ and a field $\mathbb{F}$. The vectors can be added to yield another vector: if $x, y \in V$, then $x + y \in V$, or $$V \times V \rightarrow V, \quad \left(x,y\right) \mapsto x + y.$$ The scalars can be multiplied with the vectors to yield a new vector: if $\alpha \in \mathbb{F}$ and $x \in V$, then $\alpha x \in V$; in other words, $$\mathbb{F} \times V \rightarrow V, \quad \left(\alpha,x\right) \mapsto \alpha x.$$ The vector space contains a zero vector 0, also known as the origin of $V$. It is a bit confusing that we use the same symbol for $0 \in V$ and $0 \in \mathbb{F}$. It should always be obvious from the context which zero is used. We shall generally use the notation that scalars, i.e., elements of $\mathbb{F}$, are denoted by small Greek letters such as $\alpha, \beta, \gamma, \ldots$, while vectors are denoted by small roman letters such as $x, y, z, \ldots$. Addition and scalar multiplication must satisfy the following axioms:

1. The associative law: $\left(x + y\right) + z = x + \left(y + z\right)$.

2. The commutative law: $x + y = y + x$.

3.
Addition by 0: $x + 0 = x$.

4. Existence of negative vectors: For each $x$, we can find $-x$ such that $x + \left(-x\right) = 0$.

5. The associative law for multiplication by scalars: $\alpha\left(\beta x\right) = \left(\alpha\beta\right)x$.

6. Multiplication by the unit scalar: $1x = x$.

7. The distributive law when vectors are added: $\alpha\left(x + y\right) = \alpha x + \alpha y$.

8. The distributive law when scalars are added: $\left(\alpha + \beta\right)x = \alpha x + \beta x$.

Remark 1.4.2. We shall also allow scalars to be multiplied on the right of the vector: $$x\alpha = \alpha x.$$ The only slight issue with this definition is that we must ensure that associativity still holds. The key to that is that the field of scalars has the property that multiplication is commutative: $$x\left(\alpha\beta\right) = \left(\alpha\beta\right)x = \left(\beta\alpha\right)x = \beta\left(\alpha x\right) = \left(x\alpha\right)\beta.$$

These axioms lead to several "obvious" facts.

Proposition 1.4.3. Let $V$ be a vector space over a field $\mathbb{F}$. If $x \in V$ and $\alpha \in \mathbb{F}$, then:

1. $0x = 0$.

2. $\alpha 0 = 0$.

3. $\left(-1\right)x = -x$.

4. If $\alpha x = 0$, then either $\alpha = 0$ or $x = 0$.

Proof.
By the distributive law, $$0x + 0x = \left(0 + 0\right)x = 0x.$$ This together with the associative law gives us $$0x = 0x + \left(0x - 0x\right) = \left(0x + 0x\right) - 0x = 0x - 0x = 0.$$ The second identity is proved in the same manner. For the third, consider $$0 = 0x = \left(1 - 1\right)x = 1x + \left(-1\right)x = x + \left(-1\right)x;$$ adding $-x$ on both sides then yields $$-x = \left(-1\right)x.$$ Finally, if $\alpha x = 0$ and $\alpha \neq 0$, then we have $$x = \left(\alpha^{-1}\alpha\right)x = \alpha^{-1}\left(\alpha x\right) = \alpha^{-1}0 = 0.$$

With these matters behind us, we can relax a bit and start adding, subtracting, and multiplying along the lines we are used to from matrix algebra and vector calculus.

Example 1.4.4. The simplest example of a vector space is the trivial vector space $V = \left\{0\right\}$ that contains only one point, the origin. The vector space operations and axioms are completely trivial as well in this case.

Here are some important examples of vector spaces.

Example 1.4.5.
The most important basic example is undoubtedly the Cartesian $n$-fold product of the field $\mathbb{F}$: $$\mathbb{F}^n = \left\{\left[\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array}\right] : \alpha_1,\ldots,\alpha_n \in \mathbb{F}\right\} = \left\{\left(\alpha_1,\ldots,\alpha_n\right) : \alpha_1,\ldots,\alpha_n \in \mathbb{F}\right\}.$$ Note that the $n \times 1$ and the $n$-tuple ways of writing these vectors are equivalent. When writing vectors in a line of text, the $n$-tuple version is obviously more convenient. The column matrix version, however, conforms to various other natural choices, as we shall see, and carries some extra meaning for that reason. The $i$th entry $\alpha_i$ in the vector $x = \left(\alpha_1,\ldots,\alpha_n\right)$ is called the $i$th coordinate of $x$. Vector addition is defined by adding the entries: $$\left[\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array}\right] + \left[\begin{array}{c} \beta_1 \\ \vdots \\ \beta_n \end{array}\right] = \left[\begin{array}{c} \alpha_1 + \beta_1 \\ \vdots \\ \alpha_n + \beta_n \end{array}\right]$$ and likewise with scalar multiplication $$\alpha\left[\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array}\right] = \left[\begin{array}{c} \alpha\alpha_1 \\ \vdots \\ \alpha\alpha_n \end{array}\right].$$ The axioms are verified by using the axioms for the field $\mathbb{F}$.

Example 1.4.6.
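The entrywise definitions of addition and scalar multiplication in $\mathbb{F}^n$ can be sketched in a few lines. This illustration (not from the text) takes $\mathbb{F} = \mathbb{Q}$ via Python's exact `Fraction` type:

```python
# Illustrative sketch (not from the text): entrywise vector addition and
# scalar multiplication in F^n, here with F = Q using exact rationals.
from fractions import Fraction

def vec_add(x, y):
    # (a1, ..., an) + (b1, ..., bn) = (a1 + b1, ..., an + bn)
    return [a + b for a, b in zip(x, y)]

def scal_mul(alpha, x):
    # alpha * (a1, ..., an) = (alpha*a1, ..., alpha*an)
    return [alpha * a for a in x]

x = [Fraction(1), Fraction(2), Fraction(3)]
y = [Fraction(4), Fraction(5), Fraction(6)]
print(vec_add(x, y))                # [5, 7, 9]
print(scal_mul(Fraction(1, 2), x))  # [1/2, 1, 3/2]
```

Each vector space axiom for $\mathbb{F}^n$ reduces, coordinate by coordinate, to the corresponding field axiom, exactly as the text states.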
The space of functions whose domain is some fixed set $S$ and whose values all lie in the field $\mathbb{F}$ is denoted by $\mathrm{Func}\left(S,\mathbb{F}\right) = \left\{f : S \rightarrow \mathbb{F}\right\}$. Addition and scalar multiplication are defined by $$\left(\alpha f\right)\left(x\right) = \alpha f\left(x\right), \qquad \left(f_1 + f_2\right)\left(x\right) = f_1\left(x\right) + f_2\left(x\right),$$ and the axioms again follow from using the field axioms for $\mathbb{F}$. In the special case where $S = \left\{1,\ldots,n\right\}$, it is worthwhile noting that $$\mathrm{Func}\left(\left\{1,\ldots,n\right\},\mathbb{F}\right) = \mathbb{F}^n.$$ Thus, vectors in $\mathbb{F}^n$ can also be thought of as functions and can be graphed as either an arrow in space or as a histogram type function. The former is of course more geometric, but the latter certainly also has its advantages, as collections of numbers in the form of $n \times 1$ matrices do not always look like vectors. In statistics, the histogram picture is obviously far more useful. The point here is that the way in which vectors are pictured might be psychologically important, but from an abstract mathematical perspective, there is no difference.

Example 1.4.7.
The space of $n \times m$ matrices is $$\mathrm{Mat}_{n\times m}\left(\mathbb{F}\right) = \left\{\left[\begin{array}{ccc} \alpha_{11} & \cdots & \alpha_{1m} \\ \vdots & \ddots & \vdots \\ \alpha_{n1} & \cdots & \alpha_{nm} \end{array}\right] : \alpha_{ij} \in \mathbb{F}\right\} = \left\{\left(\alpha_{ij}\right) : \alpha_{ij} \in \mathbb{F}\right\}.$$ $n \times m$ matrices are evidently just a different way of arranging vectors in $\mathbb{F}^{n\cdot m}$. This arrangement, as with the column version of vectors in $\mathbb{F}^n$, imbues these vectors with some extra meaning that will become evident as we proceed.

Example 1.4.8. There is a slightly more abstract vector space that we can construct out of a general set $S$ and a vector space $V$. This is the set $\mathrm{Map}\left(S,V\right)$ of all maps from $S$ to $V$. Scalar multiplication and addition are defined as follows: $$\left(\alpha f\right)\left(x\right) = \alpha f\left(x\right), \qquad \left(f_1 + f_2\right)\left(x\right) = f_1\left(x\right) + f_2\left(x\right).$$ The axioms now follow from $V$ being a vector space. The space of maps is in some sense the most general type of vector space, as all other vector spaces are either of this type or subspaces of such function spaces.

Definition 1.4.9.
A nonempty subset $M \subset V$ of a vector space $V$ is said to be a subspace if it is closed under addition and scalar multiplication: $$x, y \in M \Rightarrow x + y \in M, \qquad \alpha \in \mathbb{F} \text{ and } x \in M \Rightarrow \alpha x \in M.$$ We also say that $M$ is closed under vector addition and multiplication by scalars. Note that since $M \neq \emptyset$, we can find $x \in M$; this means that $0 = 0 \cdot x \in M$. Thus, subspaces become vector spaces in their own right, and this without any further checking of the axioms.

Example 1.4.10. The set of polynomials whose coefficients lie in the field $\mathbb{F}$, $$\mathbb{F}\left[t\right] = \left\{p\left(t\right) = a_0 + a_1 t + \cdots + a_k t^k : k \in \mathbb{N}_0,\ a_0, a_1, \ldots, a_k \in \mathbb{F}\right\},$$ is also a vector space. If we think of polynomials as functions, then we imagine them as a subspace of $\mathrm{Func}\left(\mathbb{F},\mathbb{F}\right)$. However, the fact that a polynomial is determined by its representation as a function depends on the fact that we have a field of characteristic zero! If, for instance, $\mathbb{F} = \left\{0,1\right\}$, then the polynomial $t^2 + t$ vanishes when evaluated at both 0 and 1. Thus, this nontrivial polynomial is, when viewed as a function, the same as $p\left(t\right) = 0$. We could also just record the coefficients. In that case, $\mathbb{F}\left[t\right]$ is a subspace of $\mathrm{Func}\left(\mathbb{N}_0,\mathbb{F}\right)$ and consists of those infinite tuples that are zero at all but a finite number of places.
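The claim that $t^2 + t$ is the zero function over $\mathbb{F} = \left\{0,1\right\}$ is easy to verify by hand, or with this small check (illustrative only, not from the text):

```python
# Illustrative check (not from the text): over F_2, the nonzero polynomial
# t^2 + t evaluates to 0 at every point of the field, so as a *function*
# it coincides with the zero polynomial, even though its coefficients differ.
def p(t):
    return (t * t + t) % 2  # evaluate t^2 + t in F_2

assert all(p(t) == 0 for t in (0, 1))
print("t^2 + t vanishes on all of F_2")
```

This is precisely why the text insists on characteristic zero when identifying polynomials with functions.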
If $$p\left(t\right) = a_0 + a_1 t + \cdots + a_n t^n \in \mathbb{F}\left[t\right],$$ then the largest integer $k \leq n$ such that $a_k \neq 0$ is called the degree of $p$. In other words, $$p\left(t\right) = a_0 + a_1 t + \cdots + a_k t^k$$ and $a_k \neq 0$. We use the notation $\deg\left(p\right) = k$.

Example 1.4.11. The collection of formal power series $$\mathbb{F}\left[\left[t\right]\right] = \left\{a_0 + a_1 t + \cdots + a_k t^k + \cdots : a_0, a_1, \ldots, a_k, \ldots \in \mathbb{F}\right\} = \left\{\sum\nolimits_{i=0}^{\infty} a_i t^i : a_i \in \mathbb{F},\ i \in \mathbb{N}_0\right\}$$ bears some resemblance to polynomials, but without further discussions on convergence or even whether this makes sense, we cannot interpret power series as lying in $\mathrm{Func}\left(\mathbb{F},\mathbb{F}\right)$. If, however, we only think about recording the coefficients, then we see that $\mathbb{F}\left[\left[t\right]\right] = \mathrm{Func}\left(\mathbb{N}_0,\mathbb{F}\right)$. The extra piece of information that both $\mathbb{F}\left[t\right]$ and $\mathbb{F}\left[\left[t\right]\right]$ carry with them, aside from being vector spaces, is that the elements can also be multiplied. This extra structure will be used in the case of $\mathbb{F}\left[t\right]$. Power series will not play an important role in the sequel.
Finally, note that $\mathbb{F}\left[t\right]$ is a subspace of $\mathbb{F}\left[\left[t\right]\right]$.

Example 1.4.12. For two (or more) vector spaces $V, W$ over the same field $\mathbb{F}$, we can form the (Cartesian) product $$V \times W = \left\{\left(v,w\right) : v \in V \text{ and } w \in W\right\}.$$ Scalar multiplication and addition are defined by $$\alpha\left(v,w\right) = \left(\alpha v,\alpha w\right), \qquad \left(v_1,w_1\right) + \left(v_2,w_2\right) = \left(v_1 + v_2, w_1 + w_2\right).$$ Note that $V \times W$ is not in a natural way a subspace in a space of functions or maps.

### 1.4.1 Exercises

1. Find a subset $C \subset \mathbb{F}^2$ that is closed under scalar multiplication but not under addition of vectors.

2. Find a subset $A \subset \mathbb{C}^2$ that is closed under vector addition but not under multiplication by complex numbers.

3. Find a subset $Q \subset \mathbb{R}$ that is closed under addition but not under scalar multiplication by real scalars.

4. Let $V = \mathbb{Z}$ be the set of integers with the usual addition as "vector addition." Show that it is not possible to define scalar multiplication by $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$ so as to make it into a vector space.

5. Let $V$ be a real vector space, i.e., a vector space where the scalars are $\mathbb{R}$. The complexification of $V$ is defined as $V_{\mathbb{C}} = V \times V$. As in the construction of complex numbers, we agree to write $\left(v,w\right) \in V_{\mathbb{C}}$ as $v + iw$. Moreover, if $v \in V$, then it is convenient to use the shorthand notations $v = v + i0$ and $iv = 0 + iv$.
Define complex scalar multiplication on $V_{\mathbb{C}}$ and show that it becomes a complex vector space.

6. Let $V$ be a complex vector space, i.e., a vector space where the scalars are $\mathbb{C}$. Define $V^{*}$ as the complex vector space whose additive structure is that of $V$ but where complex scalar multiplication is given by $\lambda \ast x = \bar{\lambda}x$. Show that $V^{*}$ is a complex vector space.

7. Let $P_n$ be the set of polynomials in $\mathbb{F}\left[t\right]$ of degree $\leq n$.

(a) Show that $P_n$ is a vector space.

(b) Show that the space of polynomials of degree $n \geq 1$ is $P_n - P_{n-1}$ and does not form a subspace.

(c) If $f\left(t\right) : \mathbb{F} \rightarrow \mathbb{F}$, show that $V = \left\{p\left(t\right)f\left(t\right) : p \in P_n\right\}$ is a subspace of $\mathrm{Func}\left(\mathbb{F},\mathbb{F}\right)$.

8. Let $V = \mathbb{C}^{\times} = \mathbb{C} - \left\{0\right\}$. Define addition on $V$ by $x \boxplus y = xy$. Define scalar multiplication by $\alpha \boxdot x = e^{\alpha}x$.

(a) Show that if we use $0_V = 1$ and $-x = x^{-1}$, then the first four axioms for a vector space are satisfied.

(b) Which of the scalar multiplication properties do not hold?

## 1.5 Bases

We are now going to introduce one of the most important concepts in linear algebra. Let $V$ be a vector space over $\mathbb{F}$.

Definition 1.5.1. Our first construction is to form linear combinations of vectors.
If $\alpha_1,\ldots,\alpha_m \in \mathbb{F}$ and $x_1,\ldots,x_m \in V$, then we can multiply each $x_i$ by the scalar $\alpha_i$ and then add up the resulting vectors to form the linear combination $$x = \alpha_1 x_1 + \cdots + \alpha_m x_m.$$ We also say that $x$ is a linear combination of the $x_i$s. If we arrange the vectors in a $1 \times m$ row matrix $$\left[\begin{array}{ccc} x_1 & \cdots & x_m \end{array}\right]$$ and the scalars in an $m \times 1$ column matrix, we see that the linear combination can be thought of as a matrix product $$\sum\nolimits_{i=1}^{m} \alpha_i x_i = \alpha_1 x_1 + \cdots + \alpha_m x_m = \left[\begin{array}{ccc} x_1 & \cdots & x_m \end{array}\right]\left[\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_m \end{array}\right].$$ To be completely rigorous, we should write the linear combination as a $1 \times 1$ matrix $\left[\alpha_1 x_1 + \cdots + \alpha_m x_m\right]$, but it seems too pedantic to insist on this. Another curiosity here is that matrix multiplication almost forces us to write $$x_1\alpha_1 + \cdots + x_m\alpha_m = \left[\begin{array}{ccc} x_1 & \cdots & x_m \end{array}\right]\left[\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_m \end{array}\right].$$ This is one reason why we want to be able to multiply by scalars on both the left and right.

Definition 1.5.2. A finite basis for $V$ is a finite collection of vectors $x_1,\ldots,x_n \in V$ such that each element $x \in V$ can be written as a linear combination $$x = \alpha_1 x_1 + \cdots + \alpha_n x_n$$ in precisely one way.
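The identification of a linear combination with a matrix product can be checked numerically. In this sketch (illustrative only, not from the text), the vectors $x_1, x_2, x_3$ are the columns of a matrix and the scalars form a column vector:

```python
# Illustrative sketch (not from the text): a linear combination
# a1*x1 + a2*x2 + a3*x3 computed as the matrix product [x1 x2 x3] @ a.
import numpy as np

X = np.array([[1, 0, 2],
              [0, 1, 3]])      # columns are x1, x2, x3 in F^2
a = np.array([2, -1, 1])       # the scalars a1, a2, a3

by_sum = 2 * X[:, 0] + (-1) * X[:, 1] + 1 * X[:, 2]
by_product = X @ a
assert np.array_equal(by_sum, by_product)
print(by_product)  # [4 2]
```

Both computations produce the same vector, which is exactly the point of the row-times-column notation above.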
This means that for each $x \in V$, we can find $\alpha_1,\ldots,\alpha_n \in \mathbb{F}$ such that $$x = \alpha_1 x_1 + \cdots + \alpha_n x_n.$$ Moreover, if we have two linear combinations both yielding $x$, $$\alpha_1 x_1 + \cdots + \alpha_n x_n = x = \beta_1 x_1 + \cdots + \beta_n x_n,$$ then $$\alpha_1 = \beta_1, \ldots, \alpha_n = \beta_n.$$ Since each $x$ has a unique linear combination, we also refer to it as the expansion of $x$ with respect to the basis. In this way, we get a well-defined correspondence $V \longleftrightarrow \mathbb{F}^n$ by identifying $$x = \alpha_1 x_1 + \cdots + \alpha_n x_n$$ with the $n$-tuple $\left(\alpha_1,\ldots,\alpha_n\right)$. We note that this correspondence preserves scalar multiplication and vector addition since $$\begin{aligned} \alpha x &= \alpha\left(\alpha_1 x_1 + \cdots + \alpha_n x_n\right) = \left(\alpha\alpha_1\right)x_1 + \cdots + \left(\alpha\alpha_n\right)x_n, \\ x + y &= \left(\alpha_1 x_1 + \cdots + \alpha_n x_n\right) + \left(\beta_1 x_1 + \cdots + \beta_n x_n\right) = \left(\alpha_1 + \beta_1\right)x_1 + \cdots + \left(\alpha_n + \beta_n\right)x_n. \end{aligned}$$ This means that the choice of basis makes $V$ equivalent to the more concrete vector space $\mathbb{F}^n$.
This idea of making abstract vector spaces more concrete by the use of a basis is developed further in Sects. 1.7 and 1.8. Note that if $x_1,\ldots,x_n \in V$ form a basis, then any reordering of the basis vectors, such as $x_2, x_1, \ldots, x_n \in V$, also forms a basis. We will think of these two choices as being different bases. We shall prove in Sect. 1.8 and again in Sect. 1.12 that the number of vectors in such a basis for $V$ is always the same.

Definition 1.5.3. This allows us to define the dimension of $V$ over $\mathbb{F}$ to be the number of elements in a basis. Note that the uniqueness condition for the linear combinations guarantees that none of the vectors in a basis can be the zero vector.

Example 1.5.4. The simplest example of a vector space, $V = \left\{0\right\}$, is a bit special. Its only basis is the empty collection due to the requirement that vectors must have unique expansions with respect to a basis. Since such a choice of basis contains 0 elements, we say that the dimension of the trivial vector space is 0. Note also that in this case, the "choice of basis" is not an ordered collection of vectors. So if one insists on ordered collections of vectors for bases, there will be one logical inconsistency in the theory when one talks about selecting a basis for the trivial vector space.

Another slightly more interesting case that we can cover now is that of one-dimensional spaces.

Lemma 1.5.5. Let $V$ be a vector space over $\mathbb{F}$. If $V$ has a basis with one element, then any other finite basis also has one element.

Proof. Let $x_1$ be a basis for $V$. If $x \in V$, then $x = \alpha x_1$ for some $\alpha$. Now, suppose that we have $z_1,\ldots,z_n \in V$; then $z_i = \alpha_i x_1$. If $z_1,\ldots,z_n$ forms a basis, then none of the vectors are zero and consequently $\alpha_i \neq 0$. Thus, for each $i$, we have $x_1 = \alpha_i^{-1} z_i$.
Therefore, if $n > 1$, then $x_1$ can be written in more than one way as a linear combination of $z_1,\ldots,z_n$. This contradicts the definition of a basis. Whence, $n = 1$ as desired.

Let us consider some basic examples.

Example 1.5.6. In $\mathbb{F}^n$ define the vectors $$e_1 = \left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right],\ e_2 = \left[\begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array}\right],\ \ldots,\ e_n = \left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array}\right].$$ Thus, $e_i$ is the vector that is zero in every entry except the $i$th, where it is 1. These vectors evidently form a basis for $\mathbb{F}^n$ since any vector in $\mathbb{F}^n$ has the unique expansion $$\mathbb{F}^n \ni x = \left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{array}\right] = \alpha_1\left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right] + \alpha_2\left[\begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array}\right] + \cdots + \alpha_n\left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array}\right] = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n = \left[\begin{array}{cccc} e_1 & e_2 & \cdots & e_n \end{array}\right]\left[\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{array}\right].$$

Example 1.5.7.
In ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_IEq83.gif) consider ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equap.gif) These two vectors also form a basis for ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_IEq84.gif) since we can write ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{c} \\alpha \\\\ \\beta\\end{array} \\right \] = \\left \(\\alpha- \\beta \\right \)\\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \] + \\beta \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} 1&1\\\\ 0 &1 \\end{array} \\right \]\\left \[\\begin{array}{c} \\left \(\\alpha- \\beta \\right \)\\\\ \\beta\\end{array} \\right \].& & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ20.gif) To see that these choices are unique, observe that the coefficient on x 2 must be β and this then uniquely determines the coefficient in front of x 1. Example 1.5.8. 
In ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_IEq85.gif) consider the slightly more complicated set of vectors ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ -1 \\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equaq.gif) This time, we see ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{c} \\alpha \\\\ \\beta\\end{array} \\right \]& =& \\frac{\\alpha- \\beta } {2} \\left \[\\begin{array}{c} 1\\\\ -1 \\end{array} \\right \] + \\frac{\\alpha+ \\beta } {2} \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1 &1\\\\ - 1 &1 \\end{array} \\right \]\\left \[\\begin{array}{c} \\frac{\\alpha -\\beta } {2} \\\\ \\frac{\\alpha +\\beta } {2} \\end{array} \\right \]\\end{array}$$ ](A81414_1_En_1_Chapter_Equ21.gif) Again, we can see that the coefficients are unique by observing that the system ![ $$\\begin{array}{rcl} \\gamma+ \\delta & =& \\alpha , \\\\ -\\gamma+ \\delta & =& \\beta\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ22.gif) has a unique solution. This is because γ, respectively δ, can be found by subtracting, respectively adding, these two equations. Example 1.5.9. Likewise, the space of matrices ![ $$\\mathrm{{Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq86.gif) has a natural basis E ij of nm elements, where E ij is the matrix that is zero in every entry except the ![ $$\\left \(i,j\\right \)$$ ](A81414_1_En_1_Chapter_IEq87.gif)th where it is 1. The concept of a basis depends quite a lot on the scalars we use. The field of complex numbers ℂ is clearly a one-dimensional vector space when we use ℂ as the scalar field. To be specific, we have that x 1 = 1 is a basis for ℂ. If, however, we view ℂ as a vector space over the real numbers ℝ, then only real numbers in ℂ are linear combinations of x 1. Therefore, x 1 is no longer a basis when we restrict to real scalars. 
Evidently, we need to use x 1 = 1 and x 2 = i to obtain a basis over ℝ. It is also possible to have infinite bases. However, some care must be taken in defining this concept as we are not allowed to form infinite linear combinations. We say that a vector space V over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq88.gif) has a collection x i ∈ V, where i ∈ A is some possibly infinite index set, as a basis, if each x ∈ V is a linear combination of a finite number of the vectors x i in a unique way. There is, surprisingly, only one important vector space that comes endowed with a natural infinite basis. This is the space ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq89.gif) of polynomials. The collection x i = t i , i = 0, 1, 2,... evidently gives us a basis. The other spaces ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq90.gif) and ![ $$\\mathrm{Func}\\left \(S, \\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq91.gif) where S is infinite, do not come with any natural bases. There is a rather subtle theorem which asserts that every vector space must have a basis. It is somewhat beyond the scope of this text to prove this theorem as it depends on Zorn's lemma or equivalently the axiom of choice. It should also be mentioned that it is a mere existence theorem as it does not give a procedure for constructing infinite bases. In order to get around these nasty points, we resort to the trick of saying that a vector space is infinite-dimensional if it does not admit a finite basis. Note that in the above lemma, we can also show that if V admits a basis with one element, then it cannot have an infinite basis. Finally, we need to mention some subtleties in the definition of a basis. In most texts, a distinction is made between an ordered basis x 1,..., x n and a basis as a subset ![ $$\\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\} \\subset V.$$ ](A81414_1_En_1_Chapter_Equar.gif) There is a fine difference between these two concepts. 
The collection x 1, x 2 where x 1 = x 2 = x ∈ V can never be a basis as x can be written as a linear combination of x 1 and x 2 in at least two different ways. As a set, however, we see that ![ $$\\left \\{x\\right \\} = \\left \\{{x}_{1},{x}_{2}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq92.gif) consists of only one vector, and therefore, this redundancy has disappeared. Throughout this text, we assume that bases are ordered. This is entirely reasonable as most people tend to write down a collection of elements of a set in some, perhaps arbitrary, order. It is also important and convenient to work with ordered bases when time comes to discuss matrix representations. On the few occasions where we shall be working with infinite bases, as with ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq93.gif), they will also be ordered in a natural way using either the natural numbers or the integers. ### 1.5.1 Exercises 1. Show that 1, t,..., t n form a basis for P n . 2. Show that if ![ $${p}_{0},\\ldots ,{p}_{n} \\in{P}_{n} -\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq94.gif) satisfy ![ $$\\deg \\left \({p}_{k}\\right \) = k,$$ ](A81414_1_En_1_Chapter_IEq95.gif) then they form a basis for P n . 3. Find a basis p 1,..., p 4 ∈ P 3 such that ![ $$\\deg \\left \({p}_{i}\\right \) = 3$$ ](A81414_1_En_1_Chapter_IEq96.gif) for i = 1, 2, 3, 4. 4. 
For α ∈ ℂ consider the subset ![ $$\\mathbb{Q}\\left \[\\alpha \\right \] = \\left \\{p\\left \(\\alpha \\right \) : p \\in\\mathbb{Q}\\left \[t\\right \]\\right \\} \\subset\\mathbb{C}.$$ ](A81414_1_En_1_Chapter_Equas.gif) Show that: (a) If α ∈ ℚ, then ![ $$\\mathbb{Q}\\left \[\\alpha \\right \] = \\mathbb{Q}$$ ](A81414_1_En_1_Chapter_IEq97.gif) (b) If α is algebraic, i.e., it solves an equation ![ $$p\\left \(\\alpha \\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq98.gif) for some ![ $$p \\in\\mathbb{Q}\\left \[t\\right \],$$ ](A81414_1_En_1_Chapter_IEq99.gif) then ![ $$\\mathbb{Q}\\left \[\\alpha \\right \]$$ ](A81414_1_En_1_Chapter_IEq100.gif) is a finite-dimensional vector space over ℚ with a basis 1, α, α2,..., α n − 1 for some n ∈ ℕ. Hint: Let n be the smallest number so that α n is a linear combination of 1, α, α2,..., α n − 1. You must explain why we can find such n. (c) If α is algebraic, then ![ $$\\mathbb{Q}\\left \[\\alpha \\right \]$$ ](A81414_1_En_1_Chapter_IEq101.gif) is a field that contains ℚ. Hint: Show that α must be the root of a polynomial with a nonzero constant term. Use this to find a formula for α − 1 that depends only on positive powers of α. (d) Show that α is algebraic if and only if ![ $$\\mathbb{Q}\\left \[\\alpha \\right \]$$ ](A81414_1_En_1_Chapter_IEq102.gif) is finite-dimensional over ℚ. (e) We say that α is transcendental if it is not algebraic. Show that if α is transcendental, then 1, α, α2,..., α n ,... form an infinite basis for ![ $$\\mathbb{Q}\\left \[\\alpha \\right \]$$ ](A81414_1_En_1_Chapter_IEq103.gif). Thus, ![ $$\\mathbb{Q}\\left \[\\alpha \\right \]$$ ](A81414_1_En_1_Chapter_IEq104.gif) and ![ $$\\mathbb{Q}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq105.gif) represent the same vector space via the substitution t↔α. 5. 
Show that ![ $$\\left \[\\begin{array}{l} 1\\\\ 1 \\\\ 0\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{l} 1\\\\ 0 \\\\ 1\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{l} 1\\\\ 0 \\\\ 0\\\\ 1\\end{array} \\right \],\\left \[\\begin{array}{l} 0\\\\ 1 \\\\ 1\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{l} 0\\\\ 1 \\\\ 0\\\\ 1\\end{array} \\right \],\\left \[\\begin{array}{l} 0\\\\ 0 \\\\ 1\\\\ 1\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equat.gif) span ℂ 4, i.e., every vector in ℂ 4 can be written as a linear combination of these vectors. Which collections of those six vectors form a basis for ℂ 4 ? 6. Is it possible to find a basis x 1,..., x n for ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq106.gif) so that the ith entry for all of the vectors x 1,..., x n is zero? 7. If e 1,..., e n is the standard basis for ℂ n , show that both ![ $${e}_{1},\\ldots ,{e}_{n},i{e}_{1},\\ldots ,i{e}_{n}$$ ](A81414_1_En_1_Chapter_Equau.gif) and ![ $${e}_{1},i{e}_{1},\\ldots ,{e}_{n},i{e}_{n}$$ ](A81414_1_En_1_Chapter_Equav.gif) form bases for ℂ n when viewed as a real vector space. 8. If x 1,..., x n is a basis for the real vector space V , then it is also a basis for the complexification ![ $${V }_{\\mathbb{C}}$$ ](A81414_1_En_1_Chapter_IEq107.gif) (see Exercise 5 in Sect. 1.4 for the definition of ![ $${V }_{\\mathbb{C}}$$ ](A81414_1_En_1_Chapter_IEq108.gif)). 9. Find a basis for ℝ 3 where all coordinate entries are ± 1. 10. A subspace ![ $$M \\subset \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq109.gif) is called a two-sided ideal if for all ![ $$X \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq110.gif) and A ∈ M also XA, AX ∈ M. Show that if ![ $$M\\neq \\left \\{0\\right \\},$$ ](A81414_1_En_1_Chapter_IEq111.gif) then ![ $$M =\\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq112.gif). 
Hint: Assume A ∈ M is such that some entry is nonzero. Make it 1 by multiplying A by an appropriate scalar on the left. Then, show that we can construct the standard basis for ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq113.gif) by multiplying A by the standard basis matrices for ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq114.gif) on the left and right. 11. Let V be a vector space. (a) Show that x, y ∈ V form a basis if and only if x + y, x − y form a basis. (b) Show that x, y, z ∈ V form a basis if and only if x + y, y + z, z + x form a basis. ## 1.6 Linear Maps Definition 1.6.1. A map L : V -> W between vector spaces over the same field ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq115.gif) is said to be linear if it preserves scalar multiplication and addition in the following way: ![ $$\\begin{array}{rcl} L\\left \(\\alpha x\\right \)& =& \\alpha L\\left \(x\\right \), \\\\ L\\left \(x + y\\right \)& =& L\\left \(x\\right \) + L\\left \(y\\right \), \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ23.gif) where ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq116.gif) and x, y ∈ V. It is possible to collect these two properties into one condition as follows: ![ $$L\\left \({\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\right \) = {\\alpha }_{1}L\\left \({x}_{1}\\right \) + {\\alpha }_{2}L\\left \({x}_{2}\\right \),$$ ](A81414_1_En_1_Chapter_Equaw.gif) where ![ $${\\alpha }_{1},{\\alpha }_{2} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq117.gif) and x 1, x 2 ∈ V. 
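The combined linearity condition can also be tested numerically. Below is a minimal sketch in Python with NumPy (an illustration added here, not part of the original text); the matrix, vectors, and scalars are arbitrary choices standing in for a linear map given by matrix multiplication, a construction developed later in this section.

```python
import numpy as np

# Illustrative check (assumed example): the map L(x) = Bx satisfies
# L(a1*x1 + a2*x2) = a1*L(x1) + a2*L(x2) for arbitrary scalars and vectors.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 2))          # arbitrary 3 x 2 real matrix
x1, x2 = rng.standard_normal(2), rng.standard_normal(2)
a1, a2 = 2.0, -3.0                       # arbitrary scalars

lhs = B @ (a1 * x1 + a2 * x2)            # L(a1*x1 + a2*x2)
rhs = a1 * (B @ x1) + a2 * (B @ x2)      # a1*L(x1) + a2*L(x2)
assert np.allclose(lhs, rhs)
```

Repeating the check with more vectors illustrates the general statement about preservation of linear combinations.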
More generally, we have that L preserves linear combin Ations in the following way: ![ $$\\begin{array}{rcl} L\\left \(\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \)& =& L\\left \({x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m}\\right \) \\\\ & =& L\\left \({x}_{1}\\right \){\\alpha }_{1} + \\cdots+ L\\left \({x}_{m}\\right \){\\alpha }_{m} \\\\ & =& \\left \[\\begin{array}{ccc} L\\left \({x}_{1}\\right \)&\\cdots &L\\left \({x}_{m}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ24.gif) To prove this simple fact, we use induction on m. When m = 1, this is simply the fact that L preserves scalar multiplication ![ $$L\\left \(\\alpha x\\right \) = \\alpha L\\left \(x\\right \).$$ ](A81414_1_En_1_Chapter_Equax.gif) Assuming the induction hypothesis, that the statement holds for m − 1, we see that ![ $$\\begin{array}{rcl} L\\left \({x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m}\\right \)& =& L\\left \(\\left \({x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m-1}{\\alpha }_{m-1}\\right \) + {x}_{m}{\\alpha }_{m}\\right \) \\\\ & =& L\\left \({x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m-1}{\\alpha }_{m-1}\\right \) + L\\left \({x}_{m}{\\alpha }_{m}\\right \) \\\\ & =& \\left \(L\\left \({x}_{1}\\right \){\\alpha }_{1} + \\cdots+ L\\left \({x}_{m-1}\\right \){\\alpha }_{m-1}\\right \) + L\\left \({x}_{m}\\right \){\\alpha }_{m} \\\\ & =& L\\left \({x}_{1}\\right \){\\alpha }_{1} + \\cdots+ L\\left \({x}_{m}\\right \){\\alpha }_{m}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ25.gif) The important feature of linear maps is that they preserve the operations that are allowed on the spaces we work with.Some extra terminology is often used for linear maps. Definition 1.6.2. 
If the values are the field itself, i.e., ![ $$W = \\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq118.gif), then we also call ![ $$L : V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq119.gif) a linear function or linear functional. If V = W, then we call L : V -> V a linear operator. Before giving examples, we introduce some further notation. Definition 1.6.3. The set of all linear maps L : V -> W is often denoted ![ $$\\mathrm{Hom}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq120.gif). In case we need to specify the scalars, we add the field as a subscript ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq121.gif). The abbreviation Hom stands for homomorphism. Homomorphisms are in general maps that preserve whatever algebraic structure that is available. Note that ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \) \\subset \\mathrm{ Map}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_Equay.gif) and is a subspace of the latter. Thus, ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq122.gif) is a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq123.gif). It is easy to see that the composition of linear maps always yields a linear map. Thus, if L 1 : V 1 -> V 2 and L 2 : V 2 -> V 3 are linear maps, then the composition L 2 ∘ L 1 : V 1 -> V 3 defined by ![ $${L}_{2} \\circ{L}_{1}\\left \(x\\right \) = {L}_{2}\\left \({L}_{1}\\left \(x\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_IEq124.gif) is again a linear map. We often ignore the composition sign ∘ and simply write L 2 L 1. An important special situation is that one can "multiply" linear operators L 1, L 2 : V -> V via composition. This multiplication is in general not commutative or abelian as it rarely happens that L 1 L 2 and L 2 L 1 represent the same map. We shall see many examples of this throughout the text. 
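A concrete instance of this noncommutativity (an example chosen here, not taken from the text): on ℝ 2, let L 1 be rotation by 90 ∘ and L 2 reflection in the first axis, both given by 2 ×2 matrices.

```python
import numpy as np

# Two linear operators on R^2 (assumed example): their compositions differ.
L1 = np.array([[0., -1.],
               [1.,  0.]])   # rotation by 90 degrees
L2 = np.array([[1.,  0.],
               [0., -1.]])   # reflection in the first axis

# L1 L2 = [[0, 1], [1, 0]] while L2 L1 = [[0, -1], [-1, 0]].
assert not np.allclose(L1 @ L2, L2 @ L1)
```

So "multiplying" these operators in the two possible orders gives two genuinely different maps.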
Finally, still staying with the abstract properties, we note that, if L : V -> W is a linear map and M ⊂ V is a subspace, then the restriction L | M : M -> W defined trivially by ![ $$L{\\vert }_{M}\\left \(x\\right \) = L\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq125.gif) is also a linear map. We shall often even use the same symbol L for both maps, but beware, many properties for a linear map can quickly change when we restrict it to different subspaces. The restriction leads to another important construction that will become very important in subsequent chapters. Definition 1.6.4. Let L : V -> V be a linear operator. A subspace M ⊂ V is said to be L-invariant or simply invariant if ![ $$L\\left \(M\\right \) \\subset M$$ ](A81414_1_En_1_Chapter_IEq126.gif). Thus, the restriction of L to M defines a new linear operator L | M : M -> M. Example 1.6.5. Define a map ![ $$L : \\mathbb{F} \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq127.gif) by scalar multiplication on ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq128.gif) via ![ $$L\\left \(x\\right \) = \\lambda x$$ ](A81414_1_En_1_Chapter_IEq129.gif) for some ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq130.gif). The distributive law says that the map is additive, and the associative law together with the commutative law say that it preserves scalar multiplication. This example can now easily be generalized to scalar multiplication on a vector space V, where we can also define L : V -> V by ![ $$L\\left \(x\\right \) = \\lambda x$$ ](A81414_1_En_1_Chapter_IEq131.gif). Two special cases are of particular interest. First, the identity transformation 1 V : V -> V defined by ![ $${1}_{V }\\left \(x\\right \) = x$$ ](A81414_1_En_1_Chapter_IEq132.gif). This is evidently scalar multiplication by 1. Second, we have the zero transformation 0 = 0 V : V -> V that maps everything to 0 ∈ V and is simply multiplication by 0. 
The latter map can also be generalized to a zero map 0 : V -> W between different vector spaces. With this in mind, we can always write multiplication by λ as the map λ1 V thus keeping track of what it does, where it does it, and finally keeping track of the fact that we think of the procedure as a map. Expanding on this theme a bit we can, starting with a linear operator L : V -> V , use powers of L as well as linear combinations to create new operators on V. For instance, L 2 − 3 ⋅L + 2 ⋅1 V is defined by ![ $$\\left \({L}^{2} - 3 \\cdot L + 2 \\cdot{1}_{ V }\\right \)\\left \(x\\right \) = L\\left \(L\\left \(x\\right \)\\right \) - 3L\\left \(x\\right \) + 2x.$$ ](A81414_1_En_1_Chapter_Equaz.gif) We shall often do this in quite general situations. The most general construction comes about by selecting a polynomial ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq133.gif) and considering ![ $$p\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq134.gif). If p = α k t k + ⋯ + α1 t + α0, then ![ $$p\\left \(L\\right \) = {\\alpha }_{k}{L}^{k} + \\cdots+ {\\alpha }_{ 1}L + {\\alpha }_{0}{1}_{V }.$$ ](A81414_1_En_1_Chapter_Equba.gif) If we think of t 0 = 1 as the degree 0 term in the polynomial, then by substituting L, we apparently define L 0 = 1 V . So it is still the identity, but the identity in the appropriate set where L lives. Evaluation on x ∈ V is given by ![ $$p\\left \(L\\right \)\\left \(x\\right \) = {\\alpha }_{k}{L}^{k}\\left \(x\\right \) + \\cdots+ {\\alpha }_{ 1}L\\left \(x\\right \) + {\\alpha }_{0}x.$$ ](A81414_1_En_1_Chapter_Equbb.gif) Apparently, p simply defines a linear combination of the linear operators L k ,..., L, 1 V , and ![ $$p\\left \(L\\right \)\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq135.gif) is a linear combination of the vectors ![ $${L}^{k}\\left \(x\\right \),\\ldots ,L\\left \(x\\right \),$$ ](A81414_1_En_1_Chapter_IEq136.gif) x. Example 1.6.6. Fix x ∈ V. 
Note that the axioms of scalar multiplication also imply that ![ $$L : \\mathbb{F} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq137.gif) defined by ![ $$L\\left \(\\alpha \\right \) = x\\alpha $$ ](A81414_1_En_1_Chapter_IEq138.gif) is linear. Matrix multiplication is the next level of abstraction. Here we let ![ $$V = {\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq139.gif) and ![ $$W = {\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq140.gif) and L is represented by an n ×m matrix ![ $$B = \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equbc.gif) The map is defined using matrix multiplication as follows: ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& Bx \\\\ & =& \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\beta }_{11}{\\xi }_{1} + \\cdots+ {\\beta }_{1m}{\\xi }_{m}\\\\ \\vdots \\\\ {\\beta }_{n1}{\\xi }_{1} + \\cdots+ {\\beta }_{nm}{\\xi }_{m} \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ26.gif) Thus, the ith coordinate of ![ $$L\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq141.gif) is given by ![ $${\\sum\\nolimits }_{j=1}^{m}{\\beta }_{ ij}{\\xi }_{j} = {\\beta }_{i1}{\\xi }_{1} + \\cdots+ {\\beta }_{im}{\\xi }_{m}.$$ ](A81414_1_En_1_Chapter_Equbd.gif) A similar and very important way of representing this map comes by noting that it creates linear combinations. 
Write B as a row matrix of its column vectors ![ $$B = \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{m} \\end{array} \\right \],\\text{ where }{b}_{i} = \\left \[\\begin{array}{c} {\\beta }_{1i}\\\\ \\vdots \\\\ {\\beta }_{ni} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Eqube.gif) and then observe ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& Bx \\\\ & =& \\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{m} \\end{array} \\right \] \\\\ & =& {b}_{1}{\\xi }_{1} + \\cdots+ {b}_{m}{\\xi }_{m}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ27.gif) Note that if m = n and the matrix we use is a diagonal matrix with λs down the diagonal and zeros elsewhere, then we obtain the scalar multiplication map ![ $$\\lambda {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq142.gif). The matrix looks like this ![ $$\\left \[\\begin{array}{cccc} \\lambda &0 &\\cdots &0\\\\ 0 &\\lambda& & 0\\\\ \\vdots & & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &\\lambda\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equbf.gif) A very important observation in connection with linear maps defined by matrix multiplication is that composition of linear maps L : ![ $${\\mathbb{F}}^{l} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq143.gif) and ![ $$K : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq144.gif) is given by the matrix product. 
The maps are defined by matrix multiplication ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& Bx, \\\\ B& =& \\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{l} \\end{array} \\right \]\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ28.gif) and ![ $$K\\left \(y\\right \) = Cy.$$ ](A81414_1_En_1_Chapter_Equbg.gif) The composition can now be computed as follows using that K is linear: ![ $$\\begin{array}{rcl} \\left \(K \\circ L\\right \)\\left \(x\\right \)& =& K\\left \(L\\left \(x\\right \)\\right \) \\\\ & =& K\\left \(Bx\\right \) \\\\ & =& K\\left \(\\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{l} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{l} \\end{array} \\right \]\\right \) \\\\ & =& \\left \[\\begin{array}{ccc} K\\left \({b}_{1}\\right \)&\\cdots &K\\left \({b}_{l}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{l} \\end{array} \\right \] \\\\ & =& \\left \(\\left \[\\begin{array}{ccc} C{b}_{1} & \\cdots &C{b}_{l} \\end{array} \\right \]\\right \)\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{l} \\end{array} \\right \] \\\\ & =& \\left \(C\\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{l} \\end{array} \\right \]\\right \)\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{l} \\end{array} \\right \] \\\\ & =& \\left \(CB\\right \)x.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ29.gif) Evidently, this all hinges on the fact that the matrix product CB can be defined by ![ $$\\begin{array}{rcl} CB& =& C\\left \[\\begin{array}{ccc} {b}_{1} & \\cdots &{b}_{l} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} C{b}_{1} & \\cdots &C{b}_{l} \\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ30.gif) a definition that is completely natural if we think of C as a linear map. 
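The computation above can be mirrored numerically. The sketch below (an added illustration; the sizes l = 4, m = 3, n = 2 are arbitrary choices) checks both that composition agrees with the matrix product and that the jth column of CB is C applied to b j.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 4))   # L : F^4 -> F^3, L(x) = Bx
C = rng.standard_normal((2, 3))   # K : F^3 -> F^2, K(y) = Cy
x = rng.standard_normal(4)

# Composition is given by the matrix product: K(L(x)) = (CB)x.
assert np.allclose(C @ (B @ x), (C @ B) @ x)

# Column-wise description of the product: CB = [C b_1  ...  C b_l].
for j in range(B.shape[1]):
    assert np.allclose((C @ B)[:, j], C @ B[:, j])
```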
It should also be noted that we did not use associativity of matrix multiplication in the form ![ $$C\\left \(Bx\\right \) = \\left \(CB\\right \)x$$ ](A81414_1_En_1_Chapter_IEq145.gif). In fact, associativity is a consequence of our calculation. We can also check things a bit more directly using summation notation. Observe that the ith entry in the composition ![ $$K\\left \(L\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{l}\\end{array} \\right \]\\right \)\\right \) = \\left \[\\begin{array}{ccc} {\\gamma }_{11} & \\cdots & {\\gamma }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\gamma }_{n1} & \\cdots &{\\gamma }_{nm} \\end{array} \\right \]\\left \(\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1l}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{m1} & \\cdots &{\\beta }_{ml} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{l} \\end{array} \\right \]\\right \)$$ ](A81414_1_En_1_Chapter_Equbh.gif) satisfies ![ $$\\begin{array}{rcl} {\\sum\\nolimits }_{j=1}^{m}{\\gamma }_{ ij}\\left \({\\sum\\nolimits }_{s=1}^{l}{\\beta }_{ js}{\\xi }_{s}\\right \)& =& {\\sum\\nolimits }_{j=1}^{m}{ \\sum\\nolimits }_{s=1}^{l}{\\gamma }_{ ij}{\\beta }_{js}{\\xi }_{s} \\\\ & =& {\\sum\\nolimits }_{s=1}^{l}{ \\sum\\nolimits }_{j=1}^{m}{\\gamma }_{ ij}{\\beta }_{js}{\\xi }_{s} \\\\ & =& {\\sum\\nolimits }_{s=1}^{l}\\left \({\\sum\\nolimits }_{j=1}^{m}{\\gamma }_{ ij}{\\beta }_{js}\\right \){\\xi }_{s} \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ31.gif) where ![ $$\\left \({\\sum\\nolimits }_{j=1}^{m}{\\gamma }_{ij}{\\beta }_{js}\\right \)$$ ](A81414_1_En_1_Chapter_IEq146.gif) represents the ![ $$\\left \(i,s\\right \)$$ ](A81414_1_En_1_Chapter_IEq147.gif) entry in the matrix product ![ $$\\left \[{\\gamma }_{ij}\\right \]\\left \[{\\beta }_{js}\\right \]$$ ](A81414_1_En_1_Chapter_IEq148.gif). Example 1.6.7. 
Note that while scalar multiplication on even the simplest vector space ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq149.gif) is the simplest linear map we can have, there are still several levels of complexity depending on the field we use. Let us consider the map ![ $$L : \\mathbb{C} \\rightarrow\\mathbb{C}$$ ](A81414_1_En_1_Chapter_IEq150.gif) that is multiplication by i, i.e., ![ $$L\\left \(x\\right \) = ix$$ ](A81414_1_En_1_Chapter_IEq151.gif). If we write x = α + iβ, we see that ![ $$L\\left \(x\\right \) = -\\beta+ i\\alpha $$ ](A81414_1_En_1_Chapter_IEq152.gif). Geometrically, what we are doing is rotating x 90 ∘ . If we think of ℂ as the plane ℝ 2, the map is instead given by the matrix ![ $$\\left \[\\begin{array}{cc} 0 & - 1\\\\ 1 & 0 \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equbi.gif) which is not at all scalar multiplication if we only think in terms of real scalars. Thus, a supposedly simple operation with complex numbers is somewhat less simple when we forget complex numbers. What we need to keep in mind is that scalar multiplication with real numbers is simply a form of dilation where vectors are made longer or shorter depending on the scalar. Scalar multiplication with complex numbers is from an abstract algebraic viewpoint equally simple to write down, but geometrically, such an operation can involve a rotation from the perspective of a world where only real scalars exist. Example 1.6.8. 
The ith coordinate map ![ $${\\mathbb{F}}^{n} \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq153.gif) defined by ![ $$\\begin{array}{rcl} \\mathrm{d}{x}_{i}\\left \(x\\right \)& =& \\mathrm{d}{x}_{i}\\left \(\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{i}\\\\ \\vdots \\\\ {\\xi }_{n}\\end{array} \\right \]\\right \) \\\\ & =& \\left \[0\\cdots 1\\cdots 0\\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{i}\\\\ \\vdots \\\\ {\\xi }_{n}\\end{array} \\right \] \\\\ & =& {\\xi }_{i}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ32.gif) is a linear map. Here the 1 ×n matrix ![ $$\\left \[0\\cdots 1\\cdots 0\\right \]$$ ](A81414_1_En_1_Chapter_IEq154.gif) is zero everywhere except in the ith entry where it is 1. The notation dx i is not a mistake, but an incursion from multivariable calculus. While some mystifying words involving infinitesimals are often invoked in connection with such symbols, they have in more advanced and modern treatments of the subject simply been redefined as done here. 
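As a small sketch (added here; note the 0-based indices, unlike the 1-based convention of the text), the coordinate map dx i can be realized as multiplication by the row matrix just described:

```python
import numpy as np

def dx(i, n):
    """i-th coordinate map on F^n as x -> [0 ... 1 ... 0] x (0-based i)."""
    row = np.zeros(n)
    row[i] = 1.0          # the single nonzero entry of the 1 x n matrix
    return lambda x: row @ x

x = np.array([5.0, 7.0, 11.0])
assert dx(0, 3)(x) == 5.0
assert dx(1, 3)(x) == 7.0
```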
A special piece of notation comes in handy here. The Kronecker δ symbol is defined as ![ $${\\delta }_{ij} = \\left \\{\\begin{array}{cc} 0 & \\mathrm{if}\\mathrm{ }i\\neq j\\\\ 1 & \\mathrm{if }\\mathrm{ } i = j \\end{array} \\right.$$ ](A81414_1_En_1_Chapter_Equbj.gif) Thus, the matrix ![ $$\\left \[0\\cdots 1\\cdots 0\\right \]$$ ](A81414_1_En_1_Chapter_IEq155.gif) can also be written as ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccccc} 0&\\cdots &1&\\cdots &0 \\end{array} \\right \]& =& \\left \[\\begin{array}{ccccc} {\\delta }_{i1} & \\cdots &{\\delta }_{ii}&\\cdots &{\\delta }_{in} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\delta }_{i1} & \\cdots &{\\delta }_{in} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ33.gif) The matrix representing the identity map ![ $${1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq156.gif) can then be written as ![ $$\\left \[\\begin{array}{ccc} 1&\\cdots &0\\\\ \\vdots & \\ddots & \\vdots\\\\ 0 &\\cdots&1 \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {\\delta }_{11} & \\cdots & {\\delta }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\delta }_{n1} & \\cdots &{\\delta }_{nn} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equbk.gif) Example 1.6.9. Let us consider the vector space of functions ![ $${C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_IEq157.gif) that have derivatives of all orders. 
There are several interesting linear operators ![ $${C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \) \\rightarrow{C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_IEq158.gif) ![ $$\\begin{array}{rcl} D\\left \(f\\right \)\\left \(t\\right \)& =& \\frac{\\mathrm{d}f} {\\mathrm{d}t} \\left \(t\\right \), \\\\ S\\left \(f\\right \)\\left \(t\\right \)& =& {\\int\\nolimits \\nolimits }_{{t}_{0}}^{t}f\\left \(s\\right \)\\mathrm{d}s, \\\\ T\\left \(f\\right \)\\left \(t\\right \)& =& t \\cdot f\\left \(t\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ34.gif) In a more shorthand fashion, we have the differentiation operator ![ $$D\\left \(f\\right \) = {f}^{{\\prime}},$$ ](A81414_1_En_1_Chapter_IEq159.gif) the integration operator ![ $$S\\left \(f\\right \) = \\int\\nolimits \\nolimits f,$$ ](A81414_1_En_1_Chapter_IEq160.gif) and the multiplication operator ![ $$T\\left \(f\\right \) = tf$$ ](A81414_1_En_1_Chapter_IEq161.gif). Note that the integration operator is not well defined unless we use the definite integral, and even in that case, it depends on the value t 0. Note that the space of polynomials ![ $$\\mathbb{R}\\left \[t\\right \] \\subset{C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_Equbl.gif) is an invariant subspace for all three operators. In this case, we usually let t 0 = 0 for S. These operators have some interesting relationships. We point out an intriguing one ![ $$DT - TD = 1.$$ ](A81414_1_En_1_Chapter_Equbm.gif) To see this, simply use Leibniz' rule for differentiating a product to obtain ![ $$\\begin{array}{rcl} D\\left \(T\\left \(f\\right \)\\right \)& =& D\\left \(tf\\right \) \\\\ & =& f + tDf \\\\ & =& f + T\\left \(D\\left \(f\\right \)\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ35.gif) With some slight changes, the identity DT − TD = 1 is the Heisenberg commutation law. This law is important in the verification of Heisenberg's uncertainty principle. 
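The commutation law DT − TD = 1 can be tested on the invariant subspace ℝ [t]. The sketch below stores a polynomial as its coefficient list [a 0, a 1,...] (an implementation choice made here, not in the text) and checks the identity on one example.

```python
def D(p):                       # differentiation d/dt on coefficient lists
    return [k * p[k] for k in range(1, len(p))] or [0.0]

def T(p):                       # multiplication by t
    return [0.0] + list(p)

def sub(p, q):                  # coefficient-wise difference p - q
    n = max(len(p), len(q))
    p = list(p) + [0.0] * (n - len(p))
    q = list(q) + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

f = [2.0, 0.0, 3.0]             # f(t) = 2 + 3t^2
lhs = sub(D(T(f)), T(D(f)))     # (DT - TD)(f)
assert lhs == f                 # DT - TD acts as the identity on f
```

Here D(T(f)) differentiates t ⋅f(t) = 2t + 3t 3 to get 2 + 9t 2, while T(D(f)) is t ⋅6t = 6t 2, and their difference is f again, in line with Leibniz' rule.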
Definition 1.6.10. The trace is a linear map on square matrices that adds the diagonal entries. ![ $$\\begin{array}{rcl} \\mathrm{tr} :\\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)& \\rightarrow & \\mathbb{F}, \\\\ \\mathrm{tr}\\left \(A\\right \)& =& {\\alpha }_{11} + {\\alpha }_{22} + \\cdots+ {\\alpha }_{nn}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ36.gif) The trace satisfies the following important commutation relationship. Lemma 1.6.11. (Invariance of Trace) If ![ $$A \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq162.gif) and ![ $$B \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq163.gif) then ![ $$AB \\in \\mathrm{{ Mat}}_{m\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq164.gif) , ![ $$BA \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq165.gif) , and ![ $$\\mathrm{tr}\\left \(AB\\right \) =\\mathrm{ tr}\\left \(BA\\right \).$$ ](A81414_1_En_1_Chapter_Equbn.gif) Proof. 
We write out the matrices ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{m1} & \\cdots &{\\alpha }_{mn} \\end{array} \\right \] \\\\ B& =& \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ37.gif) Thus, ![ $$\\begin{array}{rcl} AB& =& \\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{m1} & \\cdots &{\\alpha }_{mn} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\alpha }_{11}{\\beta }_{11} + \\cdots+ {\\alpha }_{1n}{\\beta }_{n1} & \\cdots & {\\alpha }_{11}{\\beta }_{1m} + \\cdots+ {\\alpha }_{1n}{\\beta }_{nm}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{m1}{\\beta }_{11} + \\cdots+ {\\alpha }_{mn}{\\beta }_{n1} & \\cdots &{\\alpha }_{m1}{\\beta }_{1m} + \\cdots+ {\\alpha }_{mn}{\\beta }_{nm} \\end{array} \\right \], \\\\ BA& =& \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{m1} & \\cdots &{\\alpha }_{mn} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\beta }_{11}{\\alpha }_{11} + \\cdots+ {\\beta }_{1m}{\\alpha }_{m1} & \\cdots & {\\beta }_{11}{\\alpha }_{1n} + \\cdots+ {\\beta }_{1m}{\\alpha }_{mn}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1}{\\alpha }_{11} + \\cdots+ {\\beta }_{nm}{\\alpha }_{m1} & \\cdots &{\\beta }_{n1}{\\alpha }_{1n} + \\cdots+ {\\beta }_{nm}{\\alpha 
}_{mn} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ38.gif) This tells us that ![ $$AB \\in \\mathrm{{ Mat}}_{m\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq166.gif) and ![ $$BA \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq167.gif). To show the identity, note that the ![ $$\\left \(i,i\\right \)$$ ](A81414_1_En_1_Chapter_IEq168.gif) entry in AB is ∑ j = 1 n α ij β ji , while the ![ $$\\left \(j,j\\right \)$$ ](A81414_1_En_1_Chapter_IEq169.gif) entry in BA is ∑ i = 1 m β ji α ij . Thus, ![ $$\\begin{array}{rcl} \\mathrm{tr}\\left \(AB\\right \)& =& {\\sum\\nolimits }_{i=1}^{m}{ \\sum\\nolimits }_{j=1}^{n}{\\alpha }_{ ij}{\\beta }_{ji}, \\\\ \\mathrm{tr}\\left \(BA\\right \)& =& {\\sum\\nolimits }_{j=1}^{n}{ \\sum\\nolimits }_{i=1}^{m}{\\beta }_{ ji}{\\alpha }_{ij}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ39.gif) By using α ij β ji = β ji α ij and ![ $${\\sum\\nolimits }_{i=1}^{m}{ \\sum\\nolimits }_{j=1}^{n} ={ \\sum\\nolimits }_{j=1}^{n}{ \\sum\\nolimits }_{i=1}^{m},$$ ](A81414_1_En_1_Chapter_Equbo.gif) we see that the two traces are equal. This allows us to show that the Heisenberg commutation law cannot hold for matrices. Corollary 1.6.12. There are no matrices ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq170.gif) such that ![ $$AB - BA = {1}_{{\\mathbb{F}}^{n}}.$$ ](A81414_1_En_1_Chapter_Equbp.gif) Proof. By the above lemma and linearity, we have that ![ $$\\mathrm{tr}\\left \(AB - BA\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq171.gif). On the other hand, ![ $$\\mathrm{tr}\\left \({1}_{{\\mathbb{F}}^{n}}\\right \) = n,$$ ](A81414_1_En_1_Chapter_IEq172.gif) since the identity matrix has n diagonal entries each of which is 1. Remark 1.6.13.
Observe that we just used the fact that n≠0 in ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq173.gif) or, in other words, that ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq174.gif) has characteristic zero. If we allowed ourselves to use the field ![ $${\\mathbb{F}}_{2} = \\left \\{0,1\\right \\}$$ ](A81414_1_En_1_Chapter_IEq175.gif) where 1 + 1 = 0, then we have that 1 = − 1. Thus, we can use the matrices ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{cc} 0&1\\\\ 0 &0 \\end{array} \\right \], \\\\ B& =& \\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \],\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ40.gif) to get the Heisenberg commutation law satisfied: ![ $$\\begin{array}{rcl} AB - BA& =& \\left \[\\begin{array}{cc} 0&1\\\\ 0 &0 \\end{array} \\right \]\\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \] -\\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \]\\left \[\\begin{array}{cc} 0&1\\\\ 0 &0 \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1&0\\\\ 0 &0 \\end{array} \\right \] -\\left \[\\begin{array}{cc} 0&0\\\\ 0 &1 \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1& 0\\\\ 0 & -1 \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1&0\\\\ 0 &1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ41.gif) We have two further linear maps. Consider ![ $$V =\\mathrm{ Func}\\left \(S, \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq176.gif) and select s 0 ∈ S; then, the evaluation map ![ $$\\mathrm{{ev}}_{{s}_{0}} :\\mathrm{ Func}\\left \(S, \\mathbb{F}\\right \) \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq177.gif) defined by ![ $$\\mathrm{{ev}}_{{s}_{0}}\\left \(f\\right \) = f\\left \({s}_{0}\\right \)$$ ](A81414_1_En_1_Chapter_IEq178.gif) is linear. 
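Returning briefly to the trace: both the invariance tr(AB) = tr(BA) for rectangular matrices and the characteristic-2 phenomenon from the remark above can be checked numerically. A minimal NumPy sketch, where the matrix sizes and random seed are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# tr(AB) = tr(BA) even when A, B are rectangular
A = rng.integers(-5, 5, size=(3, 4))
B = rng.integers(-5, 5, size=(4, 3))
assert np.trace(A @ B) == np.trace(B @ A)

# Hence tr(AB - BA) = 0 for square A, B, so AB - BA = I is impossible
# in characteristic zero
C = rng.integers(-5, 5, size=(3, 3))
E = rng.integers(-5, 5, size=(3, 3))
assert np.trace(C @ E - E @ C) == 0

# Over F_2 the obstruction vanishes: with the matrices from the remark,
# AB - BA = diag(1, -1), which reduces mod 2 to the identity
A2 = np.array([[0, 1], [0, 0]])
B2 = np.array([[0, 1], [1, 0]])
assert np.array_equal((A2 @ B2 - B2 @ A2) % 2, np.eye(2, dtype=int))
```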
More generally, we have the restriction map for T ⊂ S defined as a linear map ![ $$\\mathrm{Func}\\left \(S, \\mathbb{F}\\right \) \\rightarrow \\mathrm{ Func}\\left \(T, \\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq179.gif) by mapping f to f | T . The notation f | T means that we only consider f as mapping from T into ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq180.gif). In other words, we have forgotten that f maps all of S into ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq181.gif) and only remembered what it did on T. Linear maps play a big role in multivariable calculus and are used in a number of ways to clarify and understand certain constructions. The fact that linear algebra is the basis for multivariable calculus should not be surprising, as linear algebra is merely a generalization of vector algebra. Let F : Ω -> ℝ n be a differentiable function defined on some open domain Ω ⊂ ℝ m , i.e., for each x 0 ∈ Ω, we can find a linear map L : ℝ m -> ℝ n satisfying ![ $${\\lim }_{\\left \\vert h\\right \\vert \\rightarrow 0}\\frac{\\left \\vert F\\left \({x}_{0} + h\\right \) - F\\left \({x}_{0}\\right \) - L\\left \(h\\right \)\\right \\vert } {\\left \\vert h\\right \\vert } = 0.$$ ](A81414_1_En_1_Chapter_Equbq.gif) It is easy to see that such a linear map must be unique. It is also called the differential of F at x 0 ∈ Ω and denoted by ![ $$L = D{F}_{{x}_{0}} : {\\mathbb{R}}^{m} \\rightarrow{\\mathbb{R}}^{n}$$ ](A81414_1_En_1_Chapter_IEq182.gif).
The differential ![ $$D{F}_{{x}_{0}}$$ ](A81414_1_En_1_Chapter_IEq183.gif) is also represented by the n ×m matrix of partial derivatives ![ $$\\begin{array}{rcl} D{F}_{{x}_{0}}\\left \(h\\right \)& =& D{F}_{{x}_{0}}\\left \(\\left \[\\begin{array}{c} {h}_{1}\\\\ \\vdots \\\\ {h}_{m} \\end{array} \\right \]\\right \) \\\\ & =& \\left \[\\begin{array}{ccc} \\frac{\\partial {F}_{1}} {\\partial {x}_{1}} & \\cdots &\\frac{\\partial {F}_{1}} {\\partial {x}_{m}}\\\\ \\vdots & \\ddots & \\vdots \\\\ \\frac{\\partial {F}_{n}} {\\partial {x}_{1}} & \\cdots &\\frac{\\partial {F}_{n}} {\\partial {x}_{m}} \\end{array} \\right \]\\left \[\\begin{array}{c} {h}_{1}\\\\ \\vdots \\\\ {h}_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} \\frac{\\partial {F}_{1}} {\\partial {x}_{1}} {h}_{1} + \\cdots+ \\frac{\\partial {F}_{1}} {\\partial {x}_{m}}{h}_{m}\\\\ \\vdots \\\\ \\frac{\\partial {F}_{n}} {\\partial {x}_{1}} {h}_{1} + \\cdots+ \\frac{\\partial {F}_{n}} {\\partial {x}_{m}}{h}_{m} \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ42.gif) One of the main ideas in differential calculus (of several variables) is that linear maps are simpler to work with and that they give good local approximations to differentiable maps. This can be made more precise by observing that we have the first-order approximation ![ $$\\begin{array}{rcl} F\\left \({x}_{0} + h\\right \)& =& F\\left \({x}_{0}\\right \) + D{F}_{{x}_{0}}\\left \(h\\right \) + o\\left \(h\\right \), \\\\ {\\text{ }\\lim }_{\\left \\vert h\\right \\vert \\rightarrow 0}\\frac{\\left \\vert o\\left \(h\\right \)\\right \\vert } {\\left \\vert h\\right \\vert } & =& 0\\end{array}$$ ](A81414_1_En_1_Chapter_Equ43.gif) One of the goals of differential calculus is to exploit knowledge of the linear map ![ $$D{F}_{{x}_{0}}$$ ](A81414_1_En_1_Chapter_IEq184.gif) and then use this first-order approximation to get a better understanding of the map F itself. 
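The matrix of partial derivatives can be approximated column by column with finite differences, which also makes concrete that the columns of DF at x 0 are the derivatives of F along the coordinate directions. A sketch, where the sample map F and the step size are arbitrary assumptions for illustration:

```python
import numpy as np

def F(x):
    # A sample differentiable map F : R^2 -> R^2 (chosen for illustration)
    return np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])

def numerical_jacobian(F, x0, eps=1e-6):
    """Approximate the differential DF at x0 via central differences.
    Column i approximates the partial derivative of F in direction e_i."""
    m = len(x0)
    cols = []
    for i in range(m):
        h = np.zeros(m)
        h[i] = eps
        cols.append((F(x0 + h) - F(x0 - h)) / (2 * eps))
    return np.column_stack(cols)

x0 = np.array([1.0, 2.0])
J = numerical_jacobian(F, x0)

# Analytic matrix of partials for comparison:
# F1 = x^2 + y, F2 = sin(x) * y
analytic = np.array([[2 * x0[0], 1.0],
                     [np.cos(x0[0]) * x0[1], np.sin(x0[0])]])
assert np.allclose(J, analytic, atol=1e-4)
```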
In case f : Ω -> ℝ is a function, one often sees the differential of f defined as the expression ![ $$\\mathrm{d}f = \\frac{\\partial f} {\\partial {x}_{1}}\\mathrm{d}{x}_{1} + \\cdots+ \\frac{\\partial f} {\\partial {x}_{m}}\\mathrm{d}{x}_{m}.$$ ](A81414_1_En_1_Chapter_Equbr.gif) Having now interpreted dx i as a linear function, we then observe that df itself is a linear function whose matrix description is given by ![ $$\\begin{array}{rcl} \\mathrm{d}f\\left \(h\\right \)& =& \\frac{\\partial f} {\\partial {x}_{1}}\\mathrm{d}{x}_{1}\\left \(h\\right \) + \\cdots+ \\frac{\\partial f} {\\partial {x}_{m}}\\mathrm{d}{x}_{m}\\left \(h\\right \) \\\\ & =& \\frac{\\partial f} {\\partial {x}_{1}}{h}_{1} + \\cdots+ \\frac{\\partial f} {\\partial {x}_{m}}{h}_{m} \\\\ & =& \\left \[\\begin{array}{ccc} \\frac{\\partial f} {\\partial {x}_{1}} & \\cdots & \\frac{\\partial f} {\\partial {x}_{m}} \\end{array} \\right \]\\left \[\\begin{array}{c} {h}_{1}\\\\ \\vdots \\\\ {h}_{m} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ44.gif) More generally, if we write ![ $$F = \\left \[\\begin{array}{c} {F}_{1}\\\\ \\vdots \\\\ {F}_{n} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equbs.gif) then ![ $$D{F}_{{x}_{0}} = \\left \[\\begin{array}{c} \\mathrm{d}{F}_{1}\\\\ \\vdots \\\\ \\mathrm{d}{F}_{n} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equbt.gif) with the understanding that ![ $$D{F}_{{x}_{0}}\\left \(h\\right \) = \\left \[\\begin{array}{c} \\mathrm{d}{F}_{1}\\left \(h\\right \)\\\\ \\vdots \\\\ \\mathrm{d}{F}_{n}\\left \(h\\right \) \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equbu.gif) Note how this conforms nicely with the above matrix representation of the differential. ### 1.6.1 Exercises 1. Let V, W be vector spaces over ℚ. Show that any additive map L : V -> W, i.e., ![ $$L\\left \({x}_{1} + {x}_{2}\\right \) = L\\left \({x}_{1}\\right \) + L\\left \({x}_{2}\\right \),$$ ](A81414_1_En_1_Chapter_Equbv.gif) is linear. 2. 
Let ![ $$D : \\mathbb{F}\\left \[t\\right \] \\rightarrow\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq185.gif) be defined by ![ $$D\\left \({\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n}\\right \) = {\\alpha }_{ 1} + 2{\\alpha }_{2}t + \\cdots+ n{\\alpha }_{n}{t}^{n-1}.$$ ](A81414_1_En_1_Chapter_Equbw.gif) (a) Show that this defines a linear operator. (b) Show directly, i.e., without using differential calculus, that this operator satisfies Leibniz' rule ![ $$D\\left \(pq\\right \) = pD\\left \(q\\right \) + \\left \(D\\left \(p\\right \)\\right \)q.$$ ](A81414_1_En_1_Chapter_Equbx.gif) (c) Show that the subspace ![ $${P}_{n} \\subset\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq186.gif) of polynomials of degree ≤ n is invariant. 3. Let L : V -> V be a linear operator and V a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq187.gif). Show that the map ![ $$K : \\mathbb{F}\\left \[t\\right \] \\rightarrow \\mathrm{{ Hom}}_{\\mathbb{F}}\\left \(V,V \\right \)$$ ](A81414_1_En_1_Chapter_IEq188.gif) defined by ![ $$K\\left \(p\\right \) = p\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq189.gif) is a linear map. 4. Let L : V -> V be a linear operator and V a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq190.gif). Show that if M ⊂ V is L-invariant and ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq191.gif), then M is also invariant under ![ $$p\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq192.gif). 5. Let T : V -> W be a linear map, and ![ $$\\tilde{V }$$ ](A81414_1_En_1_Chapter_IEq193.gif) is a vector space, all over the same field. 
Show that right composition ![ $${R}_{T} :\\mathrm{ Hom}\\left \(W,\\tilde{V }\\right \) \\rightarrow \\mathrm{ Hom}\\left \(V,\\tilde{V }\\right \)$$ ](A81414_1_En_1_Chapter_Equby.gif) defined by R T K = K ∘ T and left composition ![ $${L}_{T} :\\mathrm{ Hom}\\left \(\\tilde{V },V \\right \) \\rightarrow \\mathrm{ Hom}\\left \(\\tilde{V },W\\right \)$$ ](A81414_1_En_1_Chapter_Equbz.gif) defined by ![ $${L}_{T}\\left \(K\\right \) = T \\circ K$$ ](A81414_1_En_1_Chapter_IEq194.gif) are linear maps. 6. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq195.gif) has a block decomposition ![ $$A = \\left \[\\begin{array}{cc} {A}_{11} & {A}_{12} \\\\ {A}_{21} & {A}_{22}\\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equca.gif) where ![ $${A}_{11} \\in \\mathrm{{ Mat}}_{k\\times k}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq196.gif). (a) Show that the subspace ![ $${\\mathbb{F}}^{k} = \\left \\{\\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{k},0,\\ldots ,0\\right \)\\right \\} \\subset{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq197.gif) is invariant if and only if A 21 = 0. (b) Show that the subspace ![ $${M}^{n-k} = \\left \\{\\left \(0,\\ldots ,0,{\\alpha }_{k+1},\\ldots ,{\\alpha }_{n}\\right \)\\right \\} \\subset{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq198.gif) is invariant if and only if A 12 = 0. 7. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq199.gif) be upper triangular, i.e., α ij = 0 for i > j or ![ $$A = \\left \[\\begin{array}{cccc} {\\alpha }_{11} & {\\alpha }_{12} & \\cdots & {\\alpha }_{1n} \\\\ 0 &{\\alpha }_{22} & \\cdots & {\\alpha }_{2n}\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{\\alpha }_{nn}\\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equcb.gif) and ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq200.gif). 
Show that ![ $$p\\left \(A\\right \)$$ ](A81414_1_En_1_Chapter_IEq201.gif) is also upper triangular and the diagonal entries are ![ $$p\\left \({\\alpha }_{ii}\\right \),$$ ](A81414_1_En_1_Chapter_IEq202.gif) i.e., ![ $$p\\left \(A\\right \) = \\left \[\\begin{array}{cccc} p\\left \({\\alpha }_{11}\\right \)& {_\\ast} &\\cdots & {_\\ast} \\\\ 0 &p\\left \({\\alpha }_{22}\\right \)&\\cdots & {_\\ast}\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &p\\left \({\\alpha }_{nn}\\right \)\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equcc.gif) 8. Let t 1,..., t n ∈ ℝ and define ![ $$\\begin{array}{rcl} L : {C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \)& \\rightarrow{\\mathbb{R}}^{n} & \\\\ L\\left \(f\\right \)& = \\left \(f\\left \({t}_{1}\\right \),\\ldots ,f\\left \({t}_{n}\\right \)\\right \).& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ45.gif) Show that L is linear. 9. Let t 0 ∈ ℝ and define ![ $$\\begin{array}{rcl} L : {C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{R}\\right \)& \\rightarrow{\\mathbb{R}}^{n} & \\\\ L\\left \(f\\right \)& = \\left \(f\\left \({t}_{0}\\right \),\\left \(Df\\right \)\\left \({t}_{0}\\right \),\\ldots ,\\left \({D}^{n-1}f\\right \)\\left \({t}_{0}\\right \)\\right \).& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ46.gif) Show that L is linear. 10. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_IEq203.gif) be symmetric, i.e., the ![ $$\\left \(i,j\\right \)$$ ](A81414_1_En_1_Chapter_IEq204.gif) entry is the same as the ![ $$\\left \(j,i\\right \)$$ ](A81414_1_En_1_Chapter_IEq205.gif) entry. Show that A = 0 if and only if ![ $$\\mathrm{tr}\\left \({A}^{2}\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq206.gif). 11. 
For each n ≥ 2, find ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq207.gif) such that A≠0, but ![ $$\\mathrm{tr}\\left \({A}^{k}\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq208.gif) for all k = 1, 2,.... 12. Find A ∈ Mat2 ×2 (ℝ) such that tr A 2 < 0. ## 1.7 Linear Maps as Matrices We saw above that quite a lot of linear maps can be defined using matrices. In this section, we shall generalize this construction and show that all abstractly defined linear maps between finite-dimensional vector spaces come from some basic matrix constructions. To warm up, we start with the simplest situation. Lemma 1.7.1. Assume V is one-dimensional over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq209.gif). Then any L : V -> V is of the form L = λ1 V . Proof. Assume x 1 is a basis. Then, ![ $$L\\left \({x}_{1}\\right \) = \\lambda {x}_{1}$$ ](A81414_1_En_1_Chapter_IEq210.gif) for some ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq211.gif). Now, any x = αx 1 , so ![ $$L\\left \(x\\right \) = L\\left \(\\alpha {x}_{1}\\right \) = \\alpha L\\left \({x}_{1}\\right \) = \\alpha \\lambda {x}_{1} = \\lambda x$$ ](A81414_1_En_1_Chapter_IEq212.gif) as desired. This gives us a very simple canonical form for linear maps in this elementary situation. The rest of the section tries to explain how one can generalize this to vector spaces with finite bases. Possibly the most important abstractly defined linear map comes from considering linear combinations. We fix a vector space V over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq213.gif) and select x 1,..., x m ∈ V.
Then, we have a linear map ![ $$\\begin{array}{rcl} & L : {\\mathbb{F}}^{m} \\rightarrow V & \\\\ & L\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \) = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \] = {x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m}.& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ47.gif) The fact that it is linear follows from knowing that ![ $$L : \\mathbb{F} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq214.gif) defined by ![ $$L\\left \(\\alpha \\right \) = \\alpha x$$ ](A81414_1_En_1_Chapter_IEq215.gif) is linear together with the fact that sums of linear maps are linear. We shall denote this map by its row matrix ![ $$L = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equcd.gif) where the entries are vectors. Using the standard basis e 1,..., e m for ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq216.gif) we observe that the entries x i (think of them as column vectors) satisfy ![ $$L\\left \({e}_{i}\\right \) = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]{e}_{i} = {x}_{i}.$$ ](A81414_1_En_1_Chapter_Equce.gif) Thus, the vectors that form the columns for the matrix for L are the images of the basis vectors for ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq217.gif). With this in mind, we can show Lemma 1.7.2. Any linear map ![ $$L : {\\mathbb{F}}^{m} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq218.gif) is of the form ![ $$L = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equcf.gif) where ![ $${x}_{i} = L\\left \({e}_{i}\\right \)$$ ](A81414_1_En_1_Chapter_IEq219.gif) . Proof. 
Define ![ $$L\\left \({e}_{i}\\right \) = {x}_{i}$$ ](A81414_1_En_1_Chapter_IEq220.gif) and use linearity of L to see that ![ $$\\begin{array}{rcl} L\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \)& =& L\\left \(\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \) \\\\ & =& L\\left \({e}_{1}{\\alpha }_{1} + \\cdots+ {e}_{m}{\\alpha }_{m}\\right \) \\\\ & =& L\\left \({e}_{1}\\right \){\\alpha }_{1} + \\cdots+ L\\left \({e}_{m}\\right \){\\alpha }_{m} \\\\ & =& \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{m}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]. \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ48.gif) If we specialize to the situation where ![ $$V = {\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq221.gif), then vectors x 1,..., x m really are n ×1 column matrices. 
More explicitly, ![ $${x}_{i} = \\left \[\\begin{array}{c} {\\beta }_{1i}\\\\ \\vdots \\\\ {\\beta }_{ni} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equcg.gif) and ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]& =& {x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m} \\\\ & =& \\left \[\\begin{array}{c} {\\beta }_{11}\\\\ \\vdots \\\\ {\\beta }_{n1} \\end{array} \\right \]{\\alpha }_{1} + \\cdots+ \\left \[\\begin{array}{c} {\\beta }_{1m}\\\\ \\vdots \\\\ {\\beta }_{nm} \\end{array} \\right \]{\\alpha }_{m} \\\\ & =& \\left \[\\begin{array}{c} {\\beta }_{11}{\\alpha }_{1}\\\\ \\vdots \\\\ {\\beta }_{n1}{\\alpha }_{1} \\end{array} \\right \] + \\cdots+ \\left \[\\begin{array}{c} {\\beta }_{1m}{\\alpha }_{m}\\\\ \\vdots \\\\ {\\beta }_{nm}{\\alpha }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\beta }_{11}{\\alpha }_{1} + \\cdots+ {\\beta }_{1m}{\\alpha }_{m}\\\\ \\vdots \\\\ {\\beta }_{n1}{\\alpha }_{1} + \\cdots+ {\\beta }_{nm}{\\alpha }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ49.gif) Hence, any linear map ![ $${\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq222.gif) is given by matrix multiplication, and the columns of the matrix are the images of the basis vectors of ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq223.gif). We can also use this to study maps V -> W as long as we have bases e 1,..., e m for V and f 1,..., f n for W. 
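The observation that the columns of the matrix are the images of the standard basis vectors gives a direct recipe for extracting the matrix of a linear map ℝ m -> ℝ n that is given only as a function. A hypothetical sketch, where the helper `matrix_of` and the sample map `L` are names introduced here for illustration:

```python
import numpy as np

def matrix_of(L, m):
    """Columns of the representing matrix are the images L(e_i)
    of the standard basis vectors of R^m."""
    basis = np.eye(m)
    return np.column_stack([L(basis[:, i]) for i in range(m)])

# A sample linear map R^3 -> R^2 given only as a function
def L(x):
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[0]])

M = matrix_of(L, 3)
assert np.array_equal(M, np.array([[1, 2, 0],
                                   [-1, 0, 3]]))

# Matrix multiplication now reproduces the map on any vector
v = np.array([1.0, 1.0, 1.0])
assert np.allclose(M @ v, L(v))
```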
Each x ∈ V has a unique expansion ![ $$x = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equch.gif) So if L : V -> W is linear, then ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& L\\left \(\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \) \\\\ & =& \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{m}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ50.gif) where ![ $${x}_{i} = L\\left \({e}_{i}\\right \)$$ ](A81414_1_En_1_Chapter_IEq224.gif). In effect, we have proven that ![ $$L\\circ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{m}\\right \) \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equci.gif) if we interpret ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]& :& {\\mathbb{F}}^{m} \\rightarrow V, \\\\ \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{m}\\right \) \\end{array} \\right \]& :& {\\mathbb{F}}^{m} \\rightarrow W\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ51.gif) as linear maps. 
Expanding ![ $$L\\left \({e}_{i}\\right \) = {x}_{i}$$ ](A81414_1_En_1_Chapter_IEq225.gif) with respect to the basis for W gives us ![ $${x}_{i} = \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\beta }_{1i}\\\\ \\vdots \\\\ {\\beta }_{ni} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equcj.gif) and ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equck.gif) This gives us the matrix representation for a linear map V -> W with respect to the specified bases. ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ52.gif) We will often use the notation ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nm} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equcl.gif) for the matrix representing L. 
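When the image-space basis f 1 ,..., f n is not the standard one, the representing matrix can be computed numerically by solving the defining relation [f 1 ⋯ f n ][L] = [L(e 1 ) ⋯ L(e m )] as a linear system. A small sketch in ℝ 2 , where the particular map and basis are arbitrary assumptions for illustration:

```python
import numpy as np

# A linear map L : R^2 -> R^2 given by a matrix in the standard basis
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Image-space basis f1 = (1, -1), f2 = (1, 1), stored as columns
F = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

# The columns of the right-hand side are the images L(e_1), L(e_2)
images = A @ np.eye(2)

# Solve [f1 f2] [L] = [L(e1) L(e2)] for the representing matrix [L]
L_repr = np.linalg.solve(F, images)

# Check: expanding the images in the basis f1, f2 recovers them
assert np.allclose(F @ L_repr, images)
```

Solving the system is equivalent to multiplying by the inverse of [f 1 f 2 ], but `np.linalg.solve` is the standard numerically stable way to do it.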
The way to remember the formula for ![ $$\\left \[L\\right \]$$ ](A81414_1_En_1_Chapter_IEq226.gif) is to use ![ $$\\begin{array}{rcl} L \\circ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{m}\\right \) \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[L\\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ53.gif) In the special case where L : V -> V is a linear operator, one usually only selects one basis e 1,..., e n . In this case, we get the relationship ![ $$\\begin{array}{rcl} L \\circ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{n}\\right \) \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[L\\right \]\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ54.gif) for the matrix representation. Example 1.7.3. 
Let ![ $${P}_{n} = \\left \\{{\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n} : {\\alpha }_{ 0},{\\alpha }_{1},\\ldots ,{\\alpha }_{n} \\in\\mathbb{F}\\right \\}$$ ](A81414_1_En_1_Chapter_Equcm.gif) be the space of polynomials of degree ≤ n and D : P n -> P n the differentiation operator ![ $$D\\left \({\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n}\\right \) = {\\alpha }_{ 1} + \\cdots+ n{\\alpha }_{n}{t}^{n-1}.$$ ](A81414_1_En_1_Chapter_Equcn.gif) If we use the basis 1, t,..., t n for P n , then ![ $$D\\left \({t}^{k}\\right \) = k{t}^{k-1},$$ ](A81414_1_En_1_Chapter_Equco.gif) and thus, the ![ $$\\left \(n + 1\\right \) \\times \\left \(n + 1\\right \)$$ ](A81414_1_En_1_Chapter_IEq227.gif) matrix representation is computed via ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccccc} D\\left \(1\\right \)&D\\left \(t\\right \)&D\\left \({t}^{2}\\right \)&\\cdots &D\\left \({t}^{n}\\right \) \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{ccccc} 0&1&2t&\\cdots &n{t}^{n-1} \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{ccccc} 1&t&{t}^{2} & \\cdots &{t}^{n} \\end{array} \\right \]\\left \[\\begin{array}{ccccc} 0&1&0&\\cdots & 0\\\\ 0 &0 &2 &\\cdots& 0 \\\\ 0&0&0& \\ddots & 0\\\\ \\vdots & \\vdots & \\vdots & \\ddots &n \\\\ 0&0&0&\\cdots & 0 \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ55.gif) Next, consider the maps T, S : P n -> P n + 1 defined by ![ $$\\begin{array}{rcl} T\\left \({\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n}\\right \)& =& {\\alpha }_{ 0}t + {\\alpha }_{1}{t}^{2} + \\cdots+ {\\alpha }_{ n}{t}^{n+1}, \\\\ S\\left \({\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n}\\right \)& =& {\\alpha }_{ 0}t + \\frac{{\\alpha }_{1}} {2} {t}^{2} + \\cdots+ \\frac{{\\alpha }_{n}} {n + 1}{t}^{n+1}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ56.gif) This time, the image space and domain are not the same but the choices for basis are 
at least similar. We get the ![ $$\\left \(n + 2\\right \) \\times \\left \(n + 1\\right \)$$ ](A81414_1_En_1_Chapter_IEq228.gif) matrix representations ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccccc} T\\left \(1\\right \)&T\\left \(t\\right \)&T\\left \({t}^{2}\\right \)&\\cdots &T\\left \({t}^{n}\\right \) \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{ccccc} t&{t}^{2} & {t}^{3} & \\cdots &{t}^{n+1} \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{cccccc} 1&t&{t}^{2} & {t}^{3} & \\cdots &{t}^{n+1} \\end{array} \\right \]\\left \[\\begin{array}{ccccc} 0&0&0&\\cdots &0\\\\ 1 &0 &0 &\\cdots&0 \\\\ 0&1&0&\\cdots &0\\\\ 0 &0 &1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\vdots & \\ddots &0\\\\ 0 &0 &0 &\\cdots&1 \\end{array} \\right \]& \\\\ & \\quad \\left \[\\begin{array}{ccccc} S\\left \(1\\right \)&S\\left \(t\\right \)&S\\left \({t}^{2}\\right \)&\\cdots &S\\left \({t}^{n}\\right \) \\end{array} \\right \] & \\\\ & \\qquad = \\left \[\\begin{array}{ccccc} t&\\frac{1} {2}{t}^{2} & \\frac{1} {3}{t}^{3} & \\cdots & \\frac{1} {n+1}{t}^{n+1} \\end{array} \\right \] & \\\\ & \\qquad = \\left \[\\begin{array}{cccccc} 1&t&{t}^{2} & {t}^{3} & \\cdots &{t}^{n+1} \\end{array} \\right \]\\left \[\\begin{array}{ccccc} 0& 0 & 0 &\\cdots & 0\\\\ 1 & 0 & 0 &\\cdots& 0 \\\\ 0&\\frac{1} {2} & 0 &\\cdots & 0 \\\\ 0& 0 &\\frac{1} {3} & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\vdots & \\ddots & 0 \\\\ 0& 0 & 0 &\\cdots &\\frac{1} {n} \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ57.gif) Doing a matrix representation of a linear map that is already given as a matrix can get a little confusing, but the procedure is obviously the same. Example 1.7.4. 
Let ![ $$L = \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \] : {\\mathbb{F}}^{2} \\rightarrow{\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_Equcp.gif) and consider the basis ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equcq.gif) Then, ![ $$\\begin{array}{rcl} L\\left \({x}_{1}\\right \)& =& {x}_{1}, \\\\ L\\left \({x}_{2}\\right \)& =& \\left \[\\begin{array}{c} 2\\\\ 2 \\end{array} \\right \] = 2{x}_{2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ58.gif) So ![ $$\\left \[\\begin{array}{cc} L\\left \({x}_{1}\\right \)&L\\left \({x}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&0\\\\ 0 &2 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equcr.gif) Example 1.7.5. Again, let ![ $$L = \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \] : {\\mathbb{F}}^{2} \\rightarrow{\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_Equcs.gif) but consider instead the basis ![ $${y}_{1} = \\left \[\\begin{array}{c} 1\\\\ -1 \\end{array} \\right \],{y}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equct.gif) Then, ![ $$\\begin{array}{rcl} L\\left \({y}_{1}\\right \)& =& \\left \[\\begin{array}{c} 0\\\\ -2 \\end{array} \\right \] = {y}_{1} - {y}_{2}, \\\\ L\\left \({y}_{2}\\right \)& =& \\left \[\\begin{array}{c} 2\\\\ 2 \\end{array} \\right \] = 2{y}_{2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ59.gif) So ![ $$\\left \[\\begin{array}{cc} L\\left \({y}_{1}\\right \)&L\\left \({y}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {y}_{1} & {y}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 &0\\\\ - 1 &2 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equcu.gif) Example 1.7.6. 
Let ![ $$A = \\left \[\\begin{array}{cc} a&c\\\\ b &d \\end{array} \\right \] \\in \\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_Equcv.gif) and consider ![ $$\\begin{array}{rcl}{ L}_{A} :\\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \)& \\rightarrow \\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \)& \\\\ {L}_{A}\\left \(X\\right \)& = AX. & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ60.gif) We use the basis E ij for ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq229.gif) where the ij entry in E ij is 1 and all other entries are zero. Next, order the basis E 11, E 21, E 12, E 22. This means that we think of ![ $$\\mathrm{{Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \) \\approx{\\mathbb{F}}^{4}$$ ](A81414_1_En_1_Chapter_IEq230.gif) where the columns are stacked on top of each other with the first column being the top most. With this choice of basis, we note that ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cccc} {L}_{A}\\left \({E}_{11}\\right \)&{L}_{A}\\left \({E}_{21}\\right \)&{L}_{A}\\left \({E}_{12}\\right \)&{L}_{A}\\left \({E}_{22}\\right \) \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{cccc} A{E}_{11} & A{E}_{21} & A{E}_{12} & A{E}_{22} \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\left \[\\begin{array}{cc} a&0\\\\ b &0 \\end{array} \\right \]\\left \[\\begin{array}{cc} c&0\\\\ d &0 \\end{array} \\right \]\\left \[\\begin{array}{cc} 0&a\\\\ 0 & b \\end{array} \\right \]\\left \[\\begin{array}{cc} 0& c\\\\ 0 &d \\end{array} \\right \]\\right \]& \\\\ & \\quad = \\left \[\\begin{array}{cccc} {E}_{11} & {E}_{21} & {E}_{12} & {E}_{22} \\end{array} \\right \]\\left \[\\begin{array}{cccc} a&c&0&0\\\\ b &d & 0 & 0 \\\\ 0&0&a&c\\\\ 0 & 0 & b &d \\end{array} \\right \] & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ61.gif) Thus, L A has the block diagonal form ![ $$\\left \[\\begin{array}{cc} A& 0\\\\ 0 &A \\end{array} 
\\right \]$$ ](A81414_1_En_1_Chapter_Equcw.gif) This problem easily generalizes to the case of n ×n matrices, where L A will have a block diagonal form that looks like ![ $$\\left \[\\begin{array}{cccc} A& 0 &\\cdots & 0\\\\ 0 &A & & 0\\\\ \\vdots & & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &A \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equcx.gif) Example 1.7.7. Let ![ $$L : {\\mathbb{F}}^{n} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq231.gif) be a linear map that maps basis vectors to basis vectors. Thus, ![ $$L\\left \({e}_{j}\\right \) = {e}_{\\sigma \\left \(j\\right \)},$$ ](A81414_1_En_1_Chapter_IEq232.gif) where ![ $$\\sigma: \\left \\{1,\\ldots ,n\\right \\} \\rightarrow \\left \\{1,\\ldots ,n\\right \\}.$$ ](A81414_1_En_1_Chapter_Equcy.gif) If σ is one-to-one and onto, then it is called a permutation. Apparently, it permutes the elements of ![ $$\\left \\{1,\\ldots ,n\\right \\}$$ ](A81414_1_En_1_Chapter_IEq233.gif). The corresponding linear map is denoted L σ. The matrix representation of L σ can be computed from the simple relationship ![ $${L}_{\\sigma }\\left \({e}_{j}\\right \) = {e}_{\\sigma \\left \(j\\right \)}$$ ](A81414_1_En_1_Chapter_IEq234.gif). Thus, the jth column has zeros everywhere except for a 1 in the ![ $$\\sigma \\left \(j\\right \)$$ ](A81414_1_En_1_Chapter_IEq235.gif) entry. This means that ![ $$\\left \[{L}_{\\sigma }\\right \] = \\left \[{\\delta }_{i,\\sigma \\left \(j\\right \)}\\right \]$$ ](A81414_1_En_1_Chapter_IEq236.gif). The matrix ![ $$\\left \[{L}_{\\sigma }\\right \]$$ ](A81414_1_En_1_Chapter_IEq237.gif) is also known as a permutation matrix. Example 1.7.8. Let L : V -> V be a linear map whose matrix representation with respect to the basis x 1, x 2 is given by ![ $$\\left \[\\begin{array}{cc} 1&2\\\\ 0 &1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equcz.gif) We wish to compute the matrix representation of K = 2L 2 + 3L − 1 V . 
We know that ![ $$\\left \[\\begin{array}{cc} L\\left \({x}_{1}\\right \)&L\\left \({x}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&2\\\\ 0 &1 \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equda.gif) or equivalently ![ $$\\begin{array}{rcl} L\\left \({x}_{1}\\right \)& =& {x}_{1}, \\\\ L\\left \({x}_{2}\\right \)& =& 2{x}_{1} + {x}_{2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ62.gif) Thus, ![ $$\\begin{array}{rcl} K\\left \({x}_{1}\\right \)& =& 2L\\left \(L\\left \({x}_{1}\\right \)\\right \) + 3L\\left \({x}_{1}\\right \) - {1}_{V }\\left \({x}_{1}\\right \) \\\\ & =& 2L\\left \({x}_{1}\\right \) + 3{x}_{1} - {x}_{1} \\\\ & =& 2{x}_{1} + 3{x}_{1} - {x}_{1} \\\\ & =& 4{x}_{1}, \\\\ K\\left \({x}_{2}\\right \)& =& 2L\\left \(L\\left \({x}_{2}\\right \)\\right \) + 3L\\left \({x}_{2}\\right \) - {1}_{V }\\left \({x}_{2}\\right \) \\\\ & =& 2L\\left \(2{x}_{1} + {x}_{2}\\right \) + 3\\left \(2{x}_{1} + {x}_{2}\\right \) - {x}_{2} \\\\ & =& 2\\left \(2{x}_{1} + \\left \(2{x}_{1} + {x}_{2}\\right \)\\right \) + 3\\left \(2{x}_{1} + {x}_{2}\\right \) - {x}_{2} \\\\ & =& 14{x}_{1} + 4{x}_{2}, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ63.gif) and ![ $$\\left \[\\begin{array}{cc} K\\left \({x}_{1}\\right \)&K\\left \({x}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 4&14\\\\ 0 & 4 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equdb.gif) ### 1.7.1 Exercises 1. (a) Show that t 3, t 3 + t 2, t 3 + t 2 + t, t 3 + t 2 + t + 1 form a basis for P 3. 
(b) Compute the image of ![ $$\\left \(1,2,3,4\\right \)$$ ](A81414_1_En_1_Chapter_IEq238.gif) under the coordinate map ![ $$\\left \[\\begin{array}{llll} {t}^{3} & {t}^{3} + {t}^{2} & {t}^{3} + {t}^{2} + t&{t}^{3} + {t}^{2} + t + 1\\end{array} \\right \] : {\\mathbb{F}}^{4} \\rightarrow{P}_{ 3}$$ ](A81414_1_En_1_Chapter_Equdc.gif) (c) Find the vector in ![ $${\\mathbb{F}}^{4}$$ ](A81414_1_En_1_Chapter_IEq239.gif) whose image is 4t 3 + 3t 2 + 2t + 1. 2. Find the matrix representation for D : P 3 -> P 3 with respect to the basis t 3, t 3 + t 2, t 3 + t 2 + t, t 3 + t 2 + t + 1. 3. Find the matrix representation for ![ $${D}^{2} + 2D + {1}_{{ P}_{3}} : {P}_{3} \\rightarrow{P}_{3}$$ ](A81414_1_En_1_Chapter_Equdd.gif) with respect to the standard basis 1, t, t 2, t 3. 4. Show that, if L : V -> V is a linear operator on a finite-dimensional vector space and ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_1_Chapter_IEq240.gif) then the matrix representations for L and ![ $$p\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq241.gif) with respect to some fixed basis are related by ![ $$\\left \[p\\left \(L\\right \)\\right \] = p\\left \(\\left \[L\\right \]\\right \)$$ ](A81414_1_En_1_Chapter_IEq242.gif). 5. Consider the two linear maps K, L : P n -> ℂ n + 1 defined by ![ $$\\begin{array}{rcl} K\\left \(f\\right \)& =& \\left \(f\\left \({t}_{0}\\right \),\\left \(Df\\right \)\\left \({t}_{0}\\right \),\\ldots ,\\left \({D}^{n}f\\right \)\\left \({t}_{ 0}\\right \)\\right \), \\\\ L\\left \(f\\right \)& =& \\left \(f\\left \({t}_{0}\\right \),\\ldots ,f\\left \({t}_{n}\\right \)\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ64.gif) (a) Find a basis p 0,..., p n for P n such that ![ $$K\\left \({p}_{i}\\right \) = {e}_{i+1},$$ ](A81414_1_En_1_Chapter_IEq243.gif) where e 1,..., e n + 1 is the standard (aka canonical) basis for ℂ n + 1. 
(b) Provided t 0,..., t n are distinct, find a basis q 0,..., q n for P n such that ![ $$L\\left \({q}_{i}\\right \) = {e}_{i+1}$$ ](A81414_1_En_1_Chapter_IEq244.gif). 6. Let ![ $$A = \\left \[\\begin{array}{cc} a&c\\\\ b &d\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equde.gif) and consider the linear map ![ $${R}_{A} :\\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq245.gif) defined by ![ $${R}_{A}\\left \(X\\right \) = XA$$ ](A81414_1_En_1_Chapter_IEq246.gif). Compute the matrix representation of this linear map with respect to the basis ![ $$\\begin{array}{rcl}{ E}_{11}& =& \\left \[\\begin{array}{cc} 1&0\\\\ 0 &0\\end{array} \\right \], \\\\ {E}_{21}& =& \\left \[\\begin{array}{cc} 0&0\\\\ 1 &0\\end{array} \\right \], \\\\ {E}_{12}& =& \\left \[\\begin{array}{cc} 0&1\\\\ 0 &0\\end{array} \\right \], \\\\ {E}_{22}& =& \\left \[\\begin{array}{cc} 0&0\\\\ 0 &1\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ65.gif) 7. Compute a matrix representation for ![ $$L :\\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{1\\times 2}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq247.gif) defined by ![ $$L\\left \(X\\right \) = \\left \[\\begin{array}{cc} 1& - 1\\end{array} \\right \]X$$ ](A81414_1_En_1_Chapter_IEq248.gif) using the standard bases. 8. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq249.gif) and E ij the matrix that has 1 in the ij entry and is zero elsewhere. (a) If ![ $${E}_{ij} \\in \\mathrm{{ Mat}}_{k\\times n}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq250.gif) then ![ $${E}_{ij}A \\in \\mathrm{{ Mat}}_{k\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq251.gif) is the matrix whose ith row is the jth row of A and all other entries are zero. 
(b) If ![ $${E}_{ij} \\in \\mathrm{{ Mat}}_{m\\times k}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq252.gif) then ![ $$A{E}_{ij} \\in \\mathrm{{ Mat}}_{n\\times k}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq253.gif) is the matrix whose jth column is the ith column of A and all other entries are zero. 9. Let e 1, e 2 be the standard basis for ℂ 2 and consider the two real bases e 1, e 2, ie 1, ie 2 and e 1, ie 1, e 2, ie 2. If λ = α + iβ is a complex number, then compute the real matrix representations for λ1 ℂ 2 with respect to both bases. 10. Show that if L : V -> V has a lower triangular representation with respect to the basis x 1,..., x n , then it has an upper triangular representation with respect to x n ,..., x 1. 11. Let V and W be vector spaces with bases e 1,..., e m and f 1,..., f n , respectively. Define ![ $${E}_{ij} \\in \\mathrm{ Hom}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq254.gif) as the linear map that sends e j to f i and all other e k s go to zero, i.e., ![ $${E}_{ij}\\left \({e}_{k}\\right \) = {\\delta }_{jk}{f}_{i}$$ ](A81414_1_En_1_Chapter_IEq255.gif). (a) Show that the matrix representation for E ij is 1 in the ij entry and 0 otherwise. (b) Show that E ij form a basis for ![ $$\\mathrm{Hom}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq256.gif). (c) Let ![ $$L \\in \\mathrm{ Hom}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq257.gif) and expand L = ∑ i, j α ij E ij . Show that ![ $$\\left \[L\\right \] = \\left \[{\\alpha }_{ij}\\right \]$$ ](A81414_1_En_1_Chapter_IEq258.gif) with respect to these bases. ## 1.8 Dimension and Isomorphism We are now almost ready to prove that the number of elements in a basis for a fixed vector space is always the same. Definition 1.8.1. We say that a linear map L : V -> W is an isomorphism if we can find K : W -> V such that LK = 1 W and KL = 1 V . 
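In the finite-dimensional matrix picture, Definition 1.8.1 can be checked numerically. The sketch below (a NumPy illustration, not from the text; the matrix L is an arbitrary invertible choice) uses K = L⁻¹ in the role of the map K:

```python
import numpy as np

# An arbitrary invertible matrix, viewed as a map L : F^2 -> F^2.
L = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Candidate for the map K : W -> V in Definition 1.8.1.
K = np.linalg.inv(L)

# LK = 1_W and KL = 1_V, so L is an isomorphism.
assert np.allclose(L @ K, np.eye(2))
assert np.allclose(K @ L, np.eye(2))
```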
One can also describe the equations LK = 1 W and KL = 1 V in an interesting little diagram of maps ![ $$\\begin{array}{rll} V & \\rightarrow ^{L} &W\\\\ {1}_{ V } \\uparrow & &\\uparrow{1}_{W} \\\\ V & \\leftarrow ^{K}&W, \\end{array}$$ ](A81414_1_En_1_Chapter_Equdf.gif) where the vertical arrows are the identity maps. Definition 1.8.2. Two vector spaces V and W over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq259.gif) are said to be isomorphic if we can find an isomorphism L : V -> W. Note that if V 1 and V 2 are isomorphic and V 2 and V 3 are isomorphic, then V 1 and V 3 are also isomorphic. The isomorphism is the composition of the given isomorphisms. Recall that a map f : S -> T between sets is one-to-one or injective if ![ $$f\\left \({x}_{1}\\right \) = f\\left \({x}_{2}\\right \)$$ ](A81414_1_En_1_Chapter_IEq260.gif) implies that x 1 = x 2. A better name for this concept is two-to-two, as pointed out by Arens, since injective maps evidently take two distinct points to two distinct points. We say that f : S -> T is onto or surjective if every y ∈ T is of the form ![ $$y = f\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq261.gif) for some x ∈ S. In other words, ![ $$f\\left \(S\\right \) = T$$ ](A81414_1_En_1_Chapter_IEq262.gif). A map that is both one-to-one and onto is said to be bijective. Such a map always has an inverse f − 1 defined via ![ $${f}^{-1}\\left \(y\\right \) = x$$ ](A81414_1_En_1_Chapter_IEq263.gif) if ![ $$f\\left \(x\\right \) = y$$ ](A81414_1_En_1_Chapter_IEq264.gif). Note that for each y ∈ T, such an x exists since f is onto and that this x is unique since f is one-to-one. The relationship between f and f − 1 is ![ $$f \\circ{f}^{-1}\\left \(y\\right \) = y$$ ](A81414_1_En_1_Chapter_IEq265.gif) and ![ $${f}^{-1} \\circ f\\left \(x\\right \) = x$$ ](A81414_1_En_1_Chapter_IEq266.gif). Observe that f − 1 : T -> S is also a bijection and has inverse ![ $${\\left \({f}^{-1}\\right \)}^{-1} = f$$ ](A81414_1_En_1_Chapter_IEq267.gif). 
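On finite sets, these notions can be tested mechanically. A small Python sketch (the sets and the map are made-up toy data, not from the text):

```python
# A map f : S -> T between finite sets, stored as a dict (toy data).
f = {1: "a", 2: "b", 3: "c"}
S = set(f)
T = {"a", "b", "c"}

injective = len(set(f.values())) == len(f)   # "two-to-two"
surjective = set(f.values()) == T            # f(S) = T

# A bijection is inverted by reversing each pair (x, f(x)).
assert injective and surjective
f_inv = {y: x for x, y in f.items()}
assert all(f_inv[f[x]] == x for x in S)      # f^{-1} o f is the identity on S
assert all(f[f_inv[y]] == y for y in T)      # f o f^{-1} is the identity on T
```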
Lemma 1.8.3. V and W are isomorphic if and only if there is a bijective linear map L : V -> W. The "if and only if" part asserts that the two statements: * V and W are isomorphic. * There is a bijective linear map L : V -> W. are equivalent. In other words, if one statement is true, then so is the other. To establish the proposition, it is therefore necessary to prove two things, namely, that the first statement implies the second and that the second implies the first. Proof. If V and W are isomorphic, then we can find linear maps L : V -> W and K : W -> V so that LK = 1 W and KL = 1 V . Then, for any y ∈ W ![ $$y = {1}_{W}\\left \(y\\right \) = L\\left \(K\\left \(y\\right \)\\right \).$$ ](A81414_1_En_1_Chapter_Equdg.gif) Thus, ![ $$y = L\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq268.gif) if ![ $$x = K\\left \(y\\right \)$$ ](A81414_1_En_1_Chapter_IEq269.gif). This means L is onto. If ![ $$L\\left \({x}_{1}\\right \) = L\\left \({x}_{2}\\right \)$$ ](A81414_1_En_1_Chapter_IEq270.gif), then ![ $${x}_{1} = {1}_{V }\\left \({x}_{1}\\right \) = KL\\left \({x}_{1}\\right \) = KL\\left \({x}_{2}\\right \) = {1}_{V }\\left \({x}_{2}\\right \) = {x}_{2}.$$ ](A81414_1_En_1_Chapter_Equdh.gif) This shows that L is one-to-one. Conversely, assume L : V -> W is linear and a bijection. Then, we have an inverse map L − 1 that satisfies L ∘ L − 1 = 1 W and L − 1 ∘ L = 1 V . In order for this inverse to be allowable as K, we need to check that it is linear. Thus, select ![ $${\\alpha }_{1},{\\alpha }_{2} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq271.gif) and y 1, y 2 ∈ W. Let ![ $${x}_{i} = {L}^{-1}\\left \({y}_{i}\\right \)$$ ](A81414_1_En_1_Chapter_IEq272.gif) so that ![ $$L\\left \({x}_{i}\\right \) = {y}_{i}$$ ](A81414_1_En_1_Chapter_IEq273.gif). 
Then we have ![ $$\\begin{array}{rcl}{ L}^{-1}\\left \({\\alpha }_{ 1}{y}_{1} + {\\alpha }_{2}{y}_{2}\\right \)& =& {L}^{-1}\\left \({\\alpha }_{ 1}L\\left \({x}_{1}\\right \) + {\\alpha }_{2}L\\left \({x}_{2}\\right \)\\right \) \\\\ & =& {L}^{-1}\\left \(L\\left \({\\alpha }_{ 1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\right \)\\right \) \\\\ & =& {1}_{V }\\left \({\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\right \) \\\\ & =& {\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2} \\\\ & =& {\\alpha }_{1}{L}^{-1}\\left \({y}_{ 1}\\right \) + {\\alpha }_{2}{L}^{-1}\\left \({y}_{ 2}\\right \). \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ66.gif) Recall that a finite basis for V over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq274.gif) consists of a collection of vectors x 1,..., x n ∈ V so that each x ∈ V has a unique expansion x = x 1α1 + ⋯ + x n α n , α1,..., ![ $${\\alpha }_{n} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq275.gif). This means that the linear map ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_1_Chapter_Equdi.gif) is a bijection and hence by the above lemma an isomorphism. We saw in Lemma 1.7.2 that any linear map ![ $${\\mathbb{F}}^{m} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq276.gif) must be of this form. In particular, any isomorphism ![ $${\\mathbb{F}}^{m} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq277.gif) gives rise to a basis for V. Since ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq278.gif) is our prototype for an n-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq279.gif), it is natural to say that a vector space has dimension n if it is isomorphic to ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq280.gif). As we have just seen, this is equivalent to saying that V has a basis consisting of n vectors. 
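Concretely, once the basis vectors are stacked as columns, the basis isomorphism [x₁ ⋯ xₙ] : Fⁿ → V is just matrix multiplication, and inverting it (finding the coordinates of a vector) is a linear solve. A NumPy sketch using the basis of Example 1.7.4 (the test vector is an arbitrary choice):

```python
import numpy as np

# Basis x1 = (1,0), x2 = (1,1) of F^2, stacked as columns:
# this matrix *is* the isomorphism [x1 x2] : F^2 -> V.
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])           # an arbitrary vector to expand
coords = np.linalg.solve(X, v)     # its unique coordinates in this basis

assert np.allclose(X @ coords, v)  # the expansion reassembles v
assert np.allclose(coords, [1.0, 2.0])   # v = 1*x1 + 2*x2
```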
The only problem is that we do not know if two spaces ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq281.gif) and ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq282.gif) can be isomorphic when m≠n. This is taken care of next. Theorem 1.8.4. (Uniqueness of Dimension) If ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq283.gif) and ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq284.gif) are isomorphic over ![ $$\\mathbb{F},$$ ](A81414_1_En_1_Chapter_IEq285.gif) then n = m. Proof. Suppose we have ![ $$L : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq286.gif) and ![ $$K : {\\mathbb{F}}^{n} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq287.gif) such that ![ $$LK = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq288.gif) and ![ $$KL = {1}_{{\\mathbb{F}}^{m}}$$ ](A81414_1_En_1_Chapter_IEq289.gif). In Sect. 1.7, we showed that the linear maps L and K are represented by matrices, i.e., ![ $$\\left \[L\\right \] \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq290.gif) and ![ $$\\left \[K\\right \] \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq291.gif). Using invariance of trace (Lemma 1.6.11), we then see that ![ $$\\begin{array}{rcl} n& =& \\mathrm{tr}\\left \(\\left \[{1}_{{\\mathbb{F}}^{n}}\\right \]\\right \) \\\\ & =& \\mathrm{tr}\\left \(\\left \[L\\right \]\\left \[K\\right \]\\right \) \\\\ & =& \\mathrm{tr}\\left \(\\left \[K\\right \]\\left \[L\\right \]\\right \) \\\\ & =& \\mathrm{tr}\\left \(\\left \[{1}_{{\\mathbb{F}}^{m}}\\right \]\\right \) \\\\ & =& m. \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ67.gif) This proof has the defect of only working when the field has characteristic 0. The result still holds in the more general situation where the characteristic is nonzero. Other more standard proofs that work in these more general situations can be found in Sects. 1.12 and 1.13. 
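The trace identity driving this proof is easy to confirm numerically. A NumPy sketch with random rectangular matrices (the sizes are arbitrary choices): tr([L][K]) = tr([K][L]) even though the two products have different sizes, which is exactly what forces n = m when both products are identity matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
L = rng.standard_normal((n, m))   # [L] in Mat_{n x m}
K = rng.standard_normal((m, n))   # [K] in Mat_{m x n}

# L @ K is n x n while K @ L is m x m, yet the traces agree
# (invariance of trace, Lemma 1.6.11).
assert np.isclose(np.trace(L @ K), np.trace(K @ L))
```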
Definition 1.8.5. We can now unequivocally denote and define the dimension of a vector space V over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq292.gif) as ![ $${\\dim }_{\\mathbb{F}}V = n$$ ](A81414_1_En_1_Chapter_IEq293.gif) if V is isomorphic to ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq294.gif). In case V is not isomorphic to any ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq295.gif), we say that V is infinite-dimensional and write ![ $${\\dim }_{\\mathbb{F}}V = \\infty $$ ](A81414_1_En_1_Chapter_IEq296.gif). Note that for some vector spaces, it is possible to change the choice of scalars. Such a change can have a rather drastic effect on what the dimension is. For example, ![ $${\\dim }_{\\mathbb{C}}\\mathbb{C} = 1$$ ](A81414_1_En_1_Chapter_IEq297.gif), while ![ $${\\dim }_{\\mathbb{R}}\\mathbb{C} = 2$$ ](A81414_1_En_1_Chapter_IEq298.gif). If we consider ℝ as a vector space over ℚ, something even worse happens: ![ $${\\dim }_{\\mathbb{Q}}\\mathbb{R} = \\infty $$ ](A81414_1_En_1_Chapter_IEq299.gif). This is because ℝ is not countably infinite, while all of the vector spaces ℚ n are countably infinite. More precisely, it is possible to find a bijective map ![ $$f : \\mathbb{N} \\rightarrow {\\mathbb{Q}}^{n},$$ ](A81414_1_En_1_Chapter_IEq300.gif) but, as first observed by Cantor using his famous diagonal argument, there is no bijective map ![ $$f : \\mathbb{N} \\rightarrow\\mathbb{R}$$ ](A81414_1_En_1_Chapter_IEq301.gif). Thus, ![ $${\\dim }_{\\mathbb{Q}}\\mathbb{R} = \\infty $$ ](A81414_1_En_1_Chapter_IEq302.gif) for set-theoretic reasons related to the (non)existence of bijective maps between sets. Corollary 1.8.6. 
If V and W are finite-dimensional vector spaces over ![ $$\\mathbb{F},$$ ](A81414_1_En_1_Chapter_IEq303.gif) then ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq304.gif) is also finite-dimensional and ![ $${\\dim }_{\\mathbb{F}}\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \) = \\left \({\\dim }_{\\mathbb{F}}W\\right \) \\cdot \\left \({\\dim }_{\\mathbb{F}}V \\right \)$$ ](A81414_1_En_1_Chapter_Equdj.gif) Proof. By choosing bases for V and W, we showed in Sect. 1.7 that there is a natural map: ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \) \\rightarrow \\mathrm{{ Mat}}_{\\left \({\\dim }_{\\mathbb{F}}W\\right \)\\times \\left \({\\dim }_{\\mathbb{F}}V \\right \)}\\left \(\\mathbb{F}\\right \) \\simeq{\\mathbb{F}}^{\\left \({\\dim }_{\\mathbb{F}}W\\right \)\\cdot \\left \({\\dim }_{\\mathbb{F}}V \\right \)}.$$ ](A81414_1_En_1_Chapter_Equdk.gif) This map is both one-to-one and onto as the matrix representation is uniquely determined by the linear map and every matrix yields a linear map. Finally, one easily checks that the map is linear. In the special case where V = W and we have a basis for the n-dimensional space V , the linear isomorphism ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,V \\right \)\\longleftrightarrow \\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_Equdl.gif) also preserves composition and products. Thus, for L, K : V -> V , we have ![ $$\\left \[LK\\right \] = \\left \[L\\right \]\\left \[K\\right \].$$ ](A81414_1_En_1_Chapter_Equdm.gif) The composition in ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,V \\right \)$$ ](A81414_1_En_1_Chapter_IEq305.gif) and matrix product in ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq306.gif) give an extra product structure on the vector spaces that make them into so-called algebras. Algebras are vector spaces that also have a product structure. 
This product structure must satisfy the associative law, the distributive law, and also commute with scalar multiplication. Unlike a field, it is not required that all nonzero elements have inverses. The above isomorphism is what we call an algebra isomorphism. ### 1.8.1 Exercises 1. Let L, K : V -> V be linear maps between finite-dimensional vector spaces that satisfy L ∘ K = 0. Is it true that K ∘ L = 0 ? 2. Let L : V -> W be a linear map between finite-dimensional vector spaces. Show that L is an isomorphism if and only if it maps a basis for V to a basis for W. 3. If V is finite-dimensional, show that V and ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V, \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq307.gif) have the same dimension. 4. Show that a linear map L : V -> W is one-to-one if and only if ![ $$L\\left \(x\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq308.gif) implies that x = 0. 5. Let V be a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq309.gif). Consider the map ![ $$K : V \\rightarrow \\mathrm{{ Hom}}_{\\mathbb{F}}\\left \(\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V, \\mathbb{F}\\right \), \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_Equdn.gif) defined by the condition that ![ $$K\\left \(x\\right \) \\in \\mathrm{{ Hom}}_{\\mathbb{F}}\\left \(\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V, \\mathbb{F}\\right \), \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_Equdo.gif) is the linear functional on ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V, \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq310.gif) such that ![ $$K\\left \(x\\right \)\\left \(L\\right \) = L\\left \(x\\right \),\\text{ for }L \\in \\mathrm{{ Hom}}_{\\mathbb{F}}\\left \(V, \\mathbb{F}\\right \).$$ ](A81414_1_En_1_Chapter_Equdp.gif) Show that this map is one-to-one. Show that it is also onto when V is finite-dimensional. 6. 
Let V ≠ 0 be finite-dimensional and assume that ![ $${L}_{1},\\ldots ,{L}_{n} : V \\rightarrow V$$ ](A81414_1_En_1_Chapter_Equdq.gif) are linear operators. Show that if L 1 ∘ ⋯ ∘ L n = 0, then at least one of the maps L i is not one-to-one. 7. Let t 0,..., t n ∈ ℝ be distinct and consider P n ⊂ ℂ[t]. Define L : P n -> ℂ n + 1 by ![ $$L\\left \(p\\right \) = \\left \(p\\left \({t}_{0}\\right \),\\ldots ,p\\left \({t}_{n}\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_IEq311.gif). Show that L is an isomorphism. (This problem will be easier to solve later in the text.) 8. Let ![ $${t}_{0} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq312.gif) and consider ![ $${P}_{n} \\subset\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq313.gif). Show that ![ $$L : {P}_{n} \\rightarrow{\\mathbb{F}}^{n+1}$$ ](A81414_1_En_1_Chapter_IEq314.gif) defined by ![ $$L\\left \(p\\right \) = \\left \(p\\left \({t}_{0}\\right \),\\left \(Dp\\right \)\\left \({t}_{0}\\right \),\\ldots ,\\left \({D}^{n}p\\right \)\\left \({t}_{ 0}\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_Equdr.gif) is an isomorphism. Hint: Think of a Taylor expansion at t 0. 9. (a) Let V be finite-dimensional. Show that if ![ $${L}_{1},{L}_{2} : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq315.gif) are isomorphisms, then for any linear operator L : V -> V ![ $$\\mathrm{tr}\\left \({L}_{1}^{-1} \\circ L \\circ{L}_{ 1}\\right \) =\\mathrm{ tr}\\left \({L}_{2}^{-1} \\circ L \\circ{L}_{ 2}\\right \).$$ ](A81414_1_En_1_Chapter_Equds.gif) This means we can define trL. Hint: Try not to use explicit matrix representations. (b) Let V and W be finite-dimensional and L 1 : V -> W and L 2 : W -> V linear maps. Show that ![ $$\\mathrm{tr}\\left \({L}_{1} \\circ{L}_{2}\\right \) =\\mathrm{ tr}\\left \({L}_{2} \\circ{L}_{1}\\right \)$$ ](A81414_1_En_1_Chapter_Equdt.gif) 10. 
Construct an isomorphism V -> ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(\\mathbb{F},V \\right \)$$ ](A81414_1_En_1_Chapter_IEq316.gif) without selecting bases for the spaces. 11. Let V be a complex vector space. Is the identity map V -> V ∗ an isomorphism? (see Exercise 6 in Sect. 1.4 for the definition of V ∗ .) 12. Assume that V and W are finite-dimensional. Define ![ $$\\begin{array}{rcl} \\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)& \\rightarrow &\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(W,V \\right \), \\mathbb{F}\\right \), \\\\ L& \\rightarrow &\\left \[A \\rightarrow \\mathrm{ tr}\\left \(A \\circ L\\right \)\\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ68.gif) Thus, the linear map L : V -> W is mapped to a linear map ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(W,V \\right \) \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq317.gif) that simply takes ![ $$A \\in \\mathrm{{ Hom}}_{\\mathbb{F}}\\left \(W,V \\right \)$$ ](A81414_1_En_1_Chapter_IEq318.gif) to trA ∘ L. Show that this map is an isomorphism. 13. Consider the map ![ $$\\Psi: \\mathbb{C} \\rightarrow \\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_Equdu.gif) defined by ![ $$\\Psi \\left \(\\alpha+ i\\beta \\right \) = \\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equdv.gif) (a) Show that this is ℝ-linear and one-to-one but not onto. Find an example of a matrix in Mat 2×2 (ℝ) that does not come from ℂ. (b) Extend this map to a linear map ![ $$\\Psi:\\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \) \\rightarrow \\mathrm{{ Mat}}_{2n\\times 2n}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_1_Chapter_Equdw.gif) and show that this map is also ℝ-linear and one-to-one but not onto. Conclude that there must be matrices in Mat 2n×2n (ℝ) that do not come from complex matrices in Mat n×n (ℂ). 
(c) Show that ![ $${\\dim }_{\\mathbb{R}}\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \) = 2{n}^{2},$$ ](A81414_1_En_1_Chapter_IEq319.gif) while ![ $${\\dim }_{\\mathbb{R}}\\mathrm{{Mat}}_{2n\\times 2n}\\left \(\\mathbb{R}\\right \) = 4{n}^{2}$$ ](A81414_1_En_1_Chapter_IEq320.gif). 14. For ![ $$A = \\left \[{\\alpha }_{ij}\\right \] \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq321.gif), define the transpose ![ $${A}^{t} = \\left \[{\\beta }_{ij}\\right \] \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq322.gif) by β ij = α ji . Thus, A t is gotten from A by reflecting in the diagonal entries. (a) Show that A -> A t is a linear map which is also an isomorphism whose inverse is given by B -> B t . (b) If ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq323.gif) and ![ $$B \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq324.gif), show that ![ $${\\left \(AB\\right \)}^{t} = {B}^{t}{A}^{t}$$ ](A81414_1_En_1_Chapter_IEq325.gif). (c) Show that if ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq326.gif) is invertible, i.e., there exists ![ $${A}^{-1} \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq327.gif) such that ![ $$A{A}^{-1} = {A}^{-1}A = {1}_{{ \\mathbb{F}}^{n}},$$ ](A81414_1_En_1_Chapter_Equdx.gif) then A t is also invertible and ![ $${\({A}^{t}\)}^{-1} = {\({A}^{-1}\)}^{t}$$ ](A81414_1_En_1_Chapter_IEq328.gif). ## 1.9 Matrix Representations Revisited While the number of elements in a basis is always the same, there is unfortunately not a clear choice of a basis for many abstract vector spaces. This necessitates a discussion on the relationship between expansions of vectors in different bases. 
Using the idea of isomorphism in connection with a choice of basis, we can streamline the procedure for expanding vectors and constructing the matrix representation of a linear map. Fix a linear map L : V -> W and bases e 1,..., e m for V and f 1,..., f n for W. One can then encode all of the necessary information in a diagram of maps ![ $$\\begin{array}{ccc} V & \\rightarrow ^{L} &W\\\\ \\uparrow &&\\uparrow\\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{\\left \[L\\right \]}&{\\mathbb{F}}^{n}\\end{array}$$ ](A81414_1_En_1_Chapter_Equdy.gif) In this diagram, the top horizontal arrow represents L and the bottom horizontal arrow represents the matrix for L interpreted as a linear map ![ $$\\left \[L\\right \] : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq329.gif). The two vertical arrows are the basis isomorphisms defined by the choices of bases for V and W, i.e., ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]& :& {\\mathbb{F}}^{m} \\rightarrow V, \\\\ \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]& :& {\\mathbb{F}}^{n} \\rightarrow W.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ69.gif) Thus, we have the formulae relating L and ![ $$\\left \[L\\right \]$$ ](A81414_1_En_1_Chapter_IEq330.gif) ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \] \\circ \\left \[L\\right \] \\circ {\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]}^{-1}, \\\\ \\left \[L\\right \]& =&{ \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]}^{-1} \\circ L \\circ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ70.gif) Note that a basis isomorphism ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{m}$$ 
](A81414_1_En_1_Chapter_Equdz.gif) is a matrix ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] \\in \\mathrm{{ Mat}}_{m\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_Equea.gif) provided we write the vectors x 1,..., x m as column vectors. As such, the map can be inverted using the standard matrix inverse. That said, it is not an easy problem to invert matrices or linear maps in general. It is important to be aware of the fact that different bases will yield different matrix representations. To see what happens abstractly let us assume that we have two bases x 1,..., x n and y 1,..., y n for a vector space V. If we think of x 1,..., x n as a basis for the domain and y 1,..., y n as a basis for the image, then the identity map 1 V : V -> V has a matrix representation that is computed via ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nn} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]B.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ71.gif) The matrix B, being the matrix representation for an isomorphism, is itself invertible, and we see that by multiplying by B − 1 on the right, we obtain ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]{B}^{-1}.$$ ](A81414_1_En_1_Chapter_Equeb.gif) This is the matrix representation for ![ $${1}_{V }^{-1} = {1}_{V }$$ ](A81414_1_En_1_Chapter_IEq331.gif) when we switch the bases around. 
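Numerically, finding the change of basis matrix is just a linear solve. A minimal sketch, using for concreteness the two bases that also appear in the examples of this section:

```python
import numpy as np

# Change of basis: with bases stored as column matrices X = [x1 x2] and
# Y = [y1 y2], the relation [x1 x2] = [y1 y2] B gives B = Y^{-1} X.
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # x1 = (1,0), x2 = (1,1)
Y = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])           # y1 = (1,-1), y2 = (1,1)

B = np.linalg.solve(Y, X)             # solves Y B = X without forming Y^{-1}

# If a holds the x-coordinates of a vector, then B @ a holds its y-coordinates.
a = np.array([2.0, 1.0])
v = X @ a                             # the vector itself
b = B @ a                             # its coordinates with respect to y1, y2
print(np.allclose(Y @ b, v))          # True: both expansions give the same v
```

Using `solve` rather than an explicit inverse is the standard numerically stable choice.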
Differently stated, we have ![ $$\\begin{array}{rcl} B& =&{ \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]}^{-1}\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \], \\\\ {B}^{-1}& =&{ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1}\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ72.gif) We next check what happens to a vector x ∈ V ![ $$\\begin{array}{rcl} x& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nn} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ73.gif) Thus, if we know the coordinates for x with respect to x 1,..., x n , then we immediately obtain the coordinates for x with respect to y 1,..., y n by changing ![ $$\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equec.gif) to ![ $$\\left \[\\begin{array}{ccc} {\\beta }_{11} & \\cdots & {\\beta }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\beta }_{n1} & \\cdots &{\\beta }_{nn} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equed.gif) We can evidently also go backwards using the inverse B − 1 rather than B. Example 1.9.1.
In ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_IEq332.gif), let e 1, e 2 be the standard basis and ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \],\\:{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equee.gif) Then B 1 − 1 is easily found using ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} 1&1\\\\ 0 &1 \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} {e}_{1} & {e}_{2} \\end{array} \\right \]{B}_{1}^{-1} \\\\ & =& \\left \[\\begin{array}{cc} 1&0\\\\ 0 &1 \\end{array} \\right \]{B}_{1}^{-1} \\\\ & =& {B}_{1}^{-1} \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ74.gif) B 1 itself requires solving ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} {e}_{1} & {e}_{2} \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]{B}_{1}\\text{ or} \\\\ \\left \[\\begin{array}{cc} 1&0\\\\ 0 &1 \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} 1&1\\\\ 0 &1 \\end{array} \\right \]{B}_{1}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ75.gif) Thus, ![ $$\\begin{array}{rcl}{ B}_{1}& =&{ \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]}^{-1} \\\\ & =&{ \\left \[\\begin{array}{cc} 1&1\\\\ 0 &1 \\end{array} \\right \]}^{-1} \\\\ & =& \\left \[\\begin{array}{cc} 1& - 1\\\\ 0 & 1 \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ76.gif) Example 1.9.2. 
In ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_1_Chapter_IEq333.gif), let ![ $${y}_{1} = \\left \[\\begin{array}{c} 1\\\\ -1 \\end{array} \\right \],\\:{y}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equef.gif) and ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \],\\:{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equeg.gif) Then, B 2 is found by ![ $$\\begin{array}{rcl}{ B}_{2}& =&{ \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]}^{-1}\\left \[\\begin{array}{cc} {y}_{1} & {y}_{2} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1& - 1\\\\ 0 & 1 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 &1\\\\ - 1 &1 \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 2 &0\\\\ - 1 &1 \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ77.gif) and ![ $${B}_{2}^{-1} = \\left \[\\begin{array}{cc} \\frac{1} {2} & 0 \\\\ \\frac{1} {2} & 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equeh.gif) Recall that we know ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{c} \\alpha \\\\ \\beta\\end{array} \\right \]& =& \\alpha {e}_{1} + \\beta {e}_{2} \\\\ & =& \\frac{\\alpha- \\beta } {2} {y}_{1} + \\frac{\\alpha+ \\beta } {2} {y}_{2} \\\\ & =& \\left \(\\alpha- \\beta \\right \){x}_{1} + \\beta {x}_{2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ78.gif) Thus, it should be true that ![ $$\\left \[\\begin{array}{c} \\left \(\\alpha- \\beta \\right \)\\\\ \\beta\\end{array} \\right \] = \\left \[\\begin{array}{cc} 2 &0\\\\ - 1 &1 \\end{array} \\right \]\\left \[\\begin{array}{c} \\frac{\\alpha -\\beta } {2} \\\\ \\frac{\\alpha +\\beta } {2} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equei.gif) which indeed is the case. Now, suppose that we have a linear operator L : V -> V. It will have matrix representations with respect to both bases. 
First, let us do this in a diagram of maps ![ $$\\begin{array}{ccc} {\\mathbb{F}}^{n}& \\rightarrow ^{{A}_{1}} & {\\mathbb{F}}^{n} \\\\ \\downarrow & &\\downarrow\\\\ V & \\rightarrow ^{L} & V\\\\ \\uparrow& & \\uparrow\\\\ {\\mathbb{F}}^{n}& \\rightarrow ^{{A}_{2}} & {\\mathbb{F}}^{n} \\end{array}$$ ](A81414_1_En_1_Chapter_Equej.gif) Here the downward arrows come from the isomorphism ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V,$$ ](A81414_1_En_1_Chapter_Equek.gif) and the upward arrows are ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V.$$ ](A81414_1_En_1_Chapter_Equel.gif) Thus, ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]{A}_{1}{\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1} \\\\ L& =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]{A}_{2}{\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]}^{-1}\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ79.gif) We wish to discover what the relationship between A 1 and A 2 is. 
To figure this out, we simply note that ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]{A}_{1}{\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1} & \\\\ & = L & \\\\ & = \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]{A}_{2}{\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]}^{-1}.& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ80.gif) Hence, ![ $$\\begin{array}{rcl}{ A}_{1}& ={ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1}\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]{A}_{2}{\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]}^{-1}\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]& \\\\ & = {B}^{-1}{A}_{2}B. & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ81.gif) To memorize this formula, keep in mind that B transforms from the x 1,..., x n basis to the y 1,..., y n basis while B − 1 reverses this process. The matrix product B − 1 A 2 B then indicates that starting from the right, we have gone from x 1,..., x n to y 1,..., y n then used A 2 on the y 1,..., y n basis and then transformed back from the y 1,..., y n basis to the x 1,..., x n basis in order to find what A 1 does with respect to the x 1,..., x n basis. Definition 1.9.3. Two matrices ![ $${A}_{1},{A}_{2} \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq334.gif) are said to be similar if there is an invertible matrix ![ $$B \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq335.gif) such that ![ $${A}_{1} = {B}^{-1}{A}_{ 2}B.$$ ](A81414_1_En_1_Chapter_Equem.gif) We have evidently shown that any two matrix representations of the same linear operator are always similar. Example 1.9.4. 
We have the representations for ![ $$L = \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equen.gif) with respect to the three bases we just studied earlier in Sect. 1.7 ![ $$\\left \[\\begin{array}{cc} L\\left \({e}_{1}\\right \)&L\\left \({e}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {e}_{1} & {e}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equeo.gif) ![ $$\\left \[\\begin{array}{cc} L\\left \({x}_{1}\\right \)&L\\left \({x}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {x}_{1} & {x}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&0\\\\ 0 &2 \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equep.gif) ![ $$\\left \[\\begin{array}{cc} L\\left \({y}_{1}\\right \)&L\\left \({y}_{2}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} {y}_{1} & {y}_{2} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 &0\\\\ - 1 &2 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equeq.gif) Using the changes of basis calculated above, we can check the following relationships: ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} 1&0\\\\ 0 &2 \\end{array} \\right \]& =& {B}_{1}\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]{B}_{1}^{-1} \\\\ & =& \\left \[\\begin{array}{cc} 1& - 1\\\\ 0 & 1 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&1\\\\ 0 &1 \\end{array} \\right \] \\\\ \\left \[\\begin{array}{cc} 1&0\\\\ 0 &2 \\end{array} \\right \]& =& {B}_{2}\\left \[\\begin{array}{cc} 1 &0\\\\ - 1 &2 \\end{array} \\right \]{B}_{2}^{-1} \\\\ & =& \\left \[\\begin{array}{cc} 2 &0\\\\ - 1 &1 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 &0\\\\ - 1 &2 \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1} {2} & 0 \\\\ \\frac{1} {2} & 1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ82.gif) One 
can more generally consider L : V -> W and see what happens if we change bases in both V and W. The analysis is similar as long as we keep in mind that there are four bases in play. The key diagram looks like ![ $$\\begin{array}{ccc} {\\mathbb{F}}^{m}& \\rightarrow ^{{A}_{1}} & {\\mathbb{F}}^{n} \\\\ \\downarrow& &\\downarrow\\\\ V & \\rightarrow ^{L} &W\\\\ \\uparrow& & \\uparrow\\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{{A}_{2}} & {\\mathbb{F}}^{n} \\end{array}$$ ](A81414_1_En_1_Chapter_Equer.gif) One of the goals in the study of linear operators or just square matrices is to find a suitable basis that makes the matrix representation as simple as possible. This is a rather complicated theory, which the rest of the book will try to uncover. ### 1.9.1 Exercises 1. Let ![ $$V = \\left \\{f \\in \\mathrm{ Func}\\left \(\\mathbb{R}, \\mathbb{C}\\right \) : f\\left \(t\\right \) = \\alpha \\cos \\left \(t\\right \) + \\beta \\sin \\left \(t\\right \),\\,\\alpha ,\\beta\\in\\mathbb{C}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq336.gif). (a) Show that cos t, sin t and exp(it), exp(−it) both form a basis for V. (b) Find the change of basis matrix. (c) Find the matrix representation of D : V -> V with respect to both bases and check that the change of basis matrix gives the correct relationship between these two matrices. 2. Let ![ $$A = \\left \[\\begin{array}{cc} 0& - 1\\\\ 1 & 0\\end{array} \\right \] : {\\mathbb{R}}^{2} \\rightarrow{\\mathbb{R}}^{2}$$ ](A81414_1_En_1_Chapter_Eques.gif) and consider the basis ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ -1\\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equet.gif) (a) Compute the matrix representation of A with respect to x 1, x 2. (b) Compute the matrix representation of A with respect to ![ $$\\frac{1} {\\sqrt{2}}{x}_{1}$$ ](A81414_1_En_1_Chapter_IEq337.gif), ![ $$\\frac{1} {\\sqrt{2}}{x}_{2}$$ ](A81414_1_En_1_Chapter_IEq338.gif).
(c) Compute the matrix representation of A with respect to x 1, x 1 + x 2. 3. Let e 1, e 2 be the standard basis for ℂ 2 and consider the two real bases e 1, e 2, ie 1, ie 2 and e 1, ie 1, e 2, ie 2. If ![ $$\\lambda= \\alpha+ i\\beta $$ ](A81414_1_En_1_Chapter_IEq339.gif) is a complex number, compute the real matrix representations for multiplication by λ on ℂ 2 with respect to both bases. Show that the two matrices are related via the change of basis formula. 4. If x 1,..., x n is a basis for V, then what is the change of basis matrix from x 1,..., x n to x n ,..., x 1 ? How does the matrix representation of an operator on V change with this change of basis? 5. Let L : V -> V be a linear operator, ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq340.gif), a polynomial, and K : V -> W an isomorphism. Show that ![ $$p\\left \(K \\circ L \\circ{K}^{-1}\\right \) = K \\circ p\\left \(L\\right \) \\circ{K}^{-1}.$$ ](A81414_1_En_1_Chapter_Equeu.gif) 6. Let A be a permutation matrix (see Example 1.7.7 for the definition). Will the matrix representation for A still be a permutation matrix if we select a different basis? 7. What happens to the matrix representation of a linear map if the change of basis matrix is a permutation matrix (see Example 1.7.7 for the definition)?
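Before leaving this section, the similarity relation worked out in Example 1.9.4 is easy to confirm numerically; a minimal sketch with the matrices copied from that example:

```python
import numpy as np

# Example 1.9.4: the representations of L in the e- and x-bases are similar,
# related by the change of basis matrix B1 from Example 1.9.1.
L_e = np.array([[1.0, 1.0],
                [0.0, 2.0]])          # matrix of L in the standard basis
B1 = np.array([[1.0, -1.0],
               [0.0,  1.0]])          # B1 = [x1 x2]^{-1}

L_x = B1 @ L_e @ np.linalg.inv(B1)    # representation in the x1, x2 basis
print(L_x)                            # [[1. 0.] [0. 2.]], as in the text
```

The conjugation B1 · L_e · B1⁻¹ is exactly the change of basis formula A₁ = B A₂ B⁻¹ derived above.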
## 1.10 Subspaces We are now ready for a more in-depth study of subspaces. Recall that a nonempty subset M ⊂ V of a vector space V is said to be a subspace if it is closed under addition and scalar multiplication: ![ $$\\begin{array}{rcl} x,y \\in M& \\Rightarrow & x + y \\in M, \\\\ \\alpha\\in\\mathbb{F}\\text{ and }x \\in M& \\Rightarrow & \\alpha x \\in M \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ83.gif) The two axioms for a subspace can be combined into one as follows: ![ $${\\alpha }_{1},{\\alpha }_{2} \\in\\mathbb{F}\\text{ and }{x}_{1},{x}_{2} \\in M \\Rightarrow{\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2} \\in M$$ ](A81414_1_En_1_Chapter_Equev.gif) Any vector space always has two trivial subspaces, namely, V and ![ $$\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq341.gif). Some more interesting examples come below. Example 1.10.1. Let M i be the ith coordinate axis in ![ $${\\mathbb{F}}^{n},$$ ](A81414_1_En_1_Chapter_IEq342.gif) i.e., the set consisting of the vectors where all but the ith coordinate are zero. Thus, ![ $${M}_{i} = \\left \\{\\left \(0,\\ldots ,0,{\\alpha }_{i},0,\\ldots ,0\\right \) : {\\alpha }_{i} \\in\\mathbb{F}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equew.gif) Example 1.10.2. Polynomials in ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq343.gif) of degree ≤ n form a subspace denoted P n . Example 1.10.3. The set of continuous functions C 0 ([a, b], ℝ) on an interval ![ $$\\left \[a,b\\right \] \\subset\\mathbb{R}$$ ](A81414_1_En_1_Chapter_IEq344.gif) is evidently a subspace of Func([a, b], ℝ).
Likewise, the space of functions that have derivatives of all orders is a subspace ![ $${C}^{\\infty }\\left \(\\left \[a,b\\right \], \\mathbb{R}\\right \) \\subset{C}^{0}\\left \(\\left \[a,b\\right \], \\mathbb{R}\\right \).$$ ](A81414_1_En_1_Chapter_Equex.gif) If we regard polynomials as functions on ![ $$\\left \[a,b\\right \]$$ ](A81414_1_En_1_Chapter_IEq345.gif), then ℝ[t] too becomes a subspace ![ $$\\mathbb{R}\\left \[t\\right \] \\subset{C}^{\\infty }\\left \(\\left \[a,b\\right \], \\mathbb{R}\\right \).$$ ](A81414_1_En_1_Chapter_Equey.gif) Example 1.10.4. Solutions to simple types of equations often form subspaces: ![ $$\\left \\{3{\\alpha }_{1} - 2{\\alpha }_{2} + {\\alpha }_{3} = 0 : \\left \({\\alpha }_{1},{\\alpha }_{2},{\\alpha }_{3}\\right \) \\in{\\mathbb{F}}^{3}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equez.gif) However, something like ![ $$\\left \\{3{\\alpha }_{1} - 2{\\alpha }_{2} + {\\alpha }_{3} = 1 : \\left \({\\alpha }_{1},{\\alpha }_{2},{\\alpha }_{3}\\right \) \\in{\\mathbb{F}}^{3}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfa.gif) does not yield a subspace as it does not contain the origin. Example 1.10.5. There are other interesting examples of subspaces of C ∞ (ℝ, ℂ). If ω > 0 is some fixed number, then we consider ![ $${C}_{\\omega }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \) = \\left \\{f \\in{C}^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \) : f\\left \(t\\right \) = f\\left \(t + \\omega \\right \)\\text{ for all }t \\in\\mathbb{R}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equfb.gif) These are the periodic functions with period ω. Note that ![ $$\\begin{array}{rcl} f\\left \(t\\right \)& =& \\exp \\left \(i2\\pi t/\\omega \\right \) \\\\ & =& \\cos \\left \(2\\pi t/\\omega \\right \) + i\\sin \\left \(2\\pi t/\\omega \\right \) \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ84.gif) is an example of a periodic function. Subspaces allow for a generalized type of calculus.
That is, we can "add" and "multiply" them to form other subspaces. However, it is not possible to find inverses for either operation. Definition 1.10.6. If M, N ⊂ V are subspaces, then we can form two new subspaces, the sum and the intersection: ![ $$\\begin{array}{rcl} M + N& =& \\left \\{x + y : x \\in M\\text{ and }y \\in N\\right \\}, \\\\ M \\cap N& =& \\left \\{x : x \\in M\\text{ and }x \\in N\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ85.gif) It is certainly true that both of these sets contain the origin. The intersection is most easily seen to be a subspace, so let us check the sum. If ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq346.gif) and x ∈ M, y ∈ N, then we have αx ∈ M, αy ∈ N so ![ $$\\alpha x + \\alpha y = \\alpha \\left \(x + y\\right \) \\in M + N.$$ ](A81414_1_En_1_Chapter_Equfc.gif) In this way, we see that M + N is closed under scalar multiplication. To check that it is closed under addition is equally simple. We can think of M + N as addition of subspaces and M ∩ N as a kind of multiplication. The element that acts as zero for addition is the trivial subspace ![ $$\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq347.gif) as ![ $$M + \\left \\{0\\right \\} = M,$$ ](A81414_1_En_1_Chapter_IEq348.gif) while M ∩ V = M implies that V is the identity for intersection. Beyond this, it is probably not that useful to think of these subspace operations as arithmetic operations; e.g., the distributive law does not hold. Definition 1.10.7. If S ⊂ V is a subset of a vector space, then the span of S is defined as ![ $$\\mathrm{span}\\left \(S\\right \) ={ \\bigcap\\nolimits }_{S\\subset M\\subset V }M,$$ ](A81414_1_En_1_Chapter_Equfd.gif) where M ⊂ V is always a subspace of V. Thus, the span is the intersection of all subspaces that contain S. This is a subspace of V and must in fact be the smallest subspace containing S. We immediately get the following elementary properties. Proposition 1.10.8.
Let V be a vector space and S,T ⊂ V subsets. (1) If S ⊂ T, then span S ⊂ span T. (2) If M ⊂ V is a subspace, then span M = M. (3) span span S = span S. (4) span S = span T if and only if S ⊂ span T and T ⊂ span S. Proof. The first property is obvious from the definition of span. To prove the second property, we first note that we always have that S ⊂ spanS. In particular, M ⊂ spanM. On the other hand, as M is a subspace that contains M, it must also follow that spanM ⊂ M. The third property follows from the second as spanS is a subspace. To prove the final property, we first observe that if spanS ⊂ spanT, then S ⊂ spanT. Thus, it is clear that if spanS = spanT, then S ⊂ spanT and T ⊂ spanS. Conversely, we have from the first and third properties that if S ⊂ spanT, then spanS ⊂ spanspanT = spanT. This shows that if S ⊂ spanT and T ⊂ spanS, then spanS = spanT. The following lemma gives an alternate and very convenient description of the span. Lemma 1.10.9. (Characterization of span S ) Let S ⊂ V be a nonempty subset. Then, span S consists of all linear combinations of vectors in S. Proof. Let C be the set of all linear combinations of vectors in S. Since spanS is a subspace, it must be true that C ⊂ spanS. Conversely, if x, y ∈ C, then we note that also αx + βy is a linear combination of vectors from S. Thus, αx + βy ∈ C and hence C is a subspace containing S. This means that spanS ⊂ C. Definition 1.10.10. We say that two subspaces M, N ⊂ V have trivial intersection provided M ∩ N = 0, i.e., their intersection is the trivial subspace. We say that M and N are transversal provided ![ $$M + N = V$$ ](A81414_1_En_1_Chapter_IEq349.gif). Both concepts are important in different ways. Transversality also plays a very important role in the more advanced subject of differentiable topology, the study of smooth maps and manifolds. Definition 1.10.11.
If we combine the two concepts of transversality and trivial intersection, we arrive at another important idea. Two subspaces are said to be complementary if they are transversal and have trivial intersection. Lemma 1.10.12. Two subspaces M,N ⊂ V are complementary if and only if each vector z ∈ V can be written as ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq350.gif) , where x ∈ M and y ∈ N in one and only one way. Before embarking on the proof, let us explain the use of "one and only one." The idea is first that z can be written like that in (at least) one way; the second part is that this is the only way in which to do it. In other words, having found x and y so that ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq351.gif), there cannot be any other ways in which to decompose z into a sum of elements from M and N. Proof. First assume that M and N are complementary. Since ![ $$V = M + N$$ ](A81414_1_En_1_Chapter_IEq352.gif), we know that ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq353.gif) for some x ∈ M and y ∈ N. If we have ![ $${x}_{1} + {y}_{1} = z = {x}_{2} + {y}_{2},$$ ](A81414_1_En_1_Chapter_Equfe.gif) where x 1, x 2 ∈ M and y 1, y 2 ∈ N, then by moving each of x 2 and y 1 to the other side, we get ![ $$M \\ni{x}_{1} - {x}_{2} = {y}_{2} - {y}_{1} \\in N.$$ ](A81414_1_En_1_Chapter_Equff.gif) This means that ![ $${x}_{1} - {x}_{2} = {y}_{2} - {y}_{1} \\in M \\cap N = \\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_Equfg.gif) and hence that ![ $${x}_{1} - {x}_{2} = {y}_{2} - {y}_{1} = 0.$$ ](A81414_1_En_1_Chapter_Equfh.gif) Thus, x 1 = x 2 and y 1 = y 2 and we have established that z has the desired unique decomposition. Conversely, assume that any z ∈ V can be written as ![ $$z = x + y,$$ ](A81414_1_En_1_Chapter_IEq354.gif) for unique x ∈ M and y ∈ N. First, we see that this means ![ $$V = M + N$$ ](A81414_1_En_1_Chapter_IEq355.gif). To see that M ∩ N = 0, we simply select z ∈ M ∩ N. Then, ![ $$z = z + 0 = 0 + z$$ ](A81414_1_En_1_Chapter_IEq356.gif) where z ∈ M, 0 ∈ N and 0 ∈ M, z ∈ N.
Since such decompositions are assumed to be unique, we must have that z = 0 and hence M ∩ N = 0. Definition 1.10.13. When we have two complementary subspaces M, N ⊂ V , we also say that V is a direct sum of M and N and we write this symbolically as V = M ⊕ N. The special sum symbol indicates that indeed, ![ $$V = M + N$$ ](A81414_1_En_1_Chapter_IEq357.gif) and also that the two subspaces have trivial intersection. Using what we have learned so far about subspaces, we get a result that is often quite useful. Corollary 1.10.14. Let M,N ⊂ V be subspaces. If M ∩ N = 0, then ![ $$M + N = M \\oplus N,$$ ](A81414_1_En_1_Chapter_Equfi.gif) and if both are finite-dimensional, then ![ $$\\dim \\left \(M + N\\right \) =\\dim \\left \(M\\right \) +\\dim \\left \(N\\right \).$$ ](A81414_1_En_1_Chapter_Equfj.gif) Proof. The first statement follows immediately from the definition. The second statement is proven by selecting bases e 1,..., e k for M and f 1,..., f l for N and then showing that the concatenation e 1,..., e k , f 1,..., f l is a basis for M + N. We also have direct sum decompositions for more than two subspaces. If M 1,..., M k ⊂ V are subspaces, we say that V is a direct sum of M 1,..., M k and write ![ $$V = {M}_{1} \\oplus \\cdots\\oplus{M}_{k}$$ ](A81414_1_En_1_Chapter_Equfk.gif) provided any vector z ∈ V can be decomposed as ![ $$\\begin{array}{rcl} z& = {x}_{1} + \\cdots+ {x}_{k}, & \\\\ & {x}_{1} \\in{M}_{1},\\ldots ,{x}_{k} \\in{M}_{k}& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ86.gif) in one and only one way. Here are some examples of direct sums. Example 1.10.15. The prototypical example of a direct sum comes from the plane, where V = ℝ 2 and ![ $$M = \\left \\{\\left \(x,0\\right \) : x \\in\\mathbb{R}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfl.gif) is the first coordinate axis and ![ $$N = \\left \\{\\left \(0,y\\right \) : y \\in\\mathbb{R}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfm.gif) the second coordinate axis. Example 1.10.16.
Direct sum decompositions are by no means unique, as can be seen using V = ℝ 2 and ![ $$M = \\left \\{\\left \(x,0\\right \) : x \\in\\mathbb{R}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfn.gif) and ![ $$N = \\left \\{\\left \(y,y\\right \) : y \\in\\mathbb{R}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfo.gif) the diagonal. We can easily visualize and prove that the intersection is trivial. As for transversality, just observe that ![ $$\\left \(x,y\\right \) = \\left \(x - y,0\\right \) + \\left \(y,y\\right \).$$ ](A81414_1_En_1_Chapter_Equfp.gif) Example 1.10.17. We also have the direct sum decomposition ![ $${\\mathbb{F}}^{n} = {M}_{ 1} \\oplus \\cdots\\oplus{M}_{n},$$ ](A81414_1_En_1_Chapter_Equfq.gif) where ![ $${M}_{i} = \\left \\{\\left \(0,\\ldots ,0,{\\alpha }_{i},0,\\ldots ,0\\right \) : {\\alpha }_{i} \\in\\mathbb{F}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equfr.gif) Example 1.10.18. Here is a more abstract example that imitates the first. Partition the set ![ $$\\left \\{1,2,\\ldots ,n\\right \\} = \\left \\{{i}_{1},\\ldots ,{i}_{k}\\right \\} \\cup \\left \\{{j}_{1},\\ldots ,{j}_{n-k}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfs.gif) into two complementary sets. Let ![ $$\\begin{array}{rcl} V & =& {\\mathbb{F}}^{n}, \\\\ M& =& \\left \\{\\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{n}\\right \) \\in{\\mathbb{F}}^{n} : {\\alpha }_{{ j}_{1}} = \\cdots= {\\alpha }_{{j}_{n-k}} = 0\\right \\}, \\\\ N& =& \\left \\{\\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{n}\\right \) : {\\alpha }_{{i}_{1}} = \\cdots= {\\alpha }_{{i}_{k}} = 0\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ87.gif) Thus, ![ $$\\begin{array}{rcl} M& =& {M}_{{i}_{1}} \\oplus \\cdots\\oplus{M}_{{i}_{k}}, \\\\ N& =& {M}_{{j}_{1}} \\oplus \\cdots\\oplus{M}_{{j}_{n-k}}, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ88.gif) and ![ $${\\mathbb{F}}^{n} = M \\oplus N$$ ](A81414_1_En_1_Chapter_IEq358.gif). 
Note that M is isomorphic to ![ $${\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq359.gif) and N to ![ $${\\mathbb{F}}^{n-k}$$ ](A81414_1_En_1_Chapter_IEq360.gif) but with different indices for the axes. Thus, we have the more or less obvious decomposition: ![ $${\\mathbb{F}}^{n} = {\\mathbb{F}}^{k} \\times{\\mathbb{F}}^{n-k}$$ ](A81414_1_En_1_Chapter_IEq361.gif). Note, however, that when we use ![ $${\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq362.gif) rather than M, we do not think of ![ $${\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq363.gif) as a subspace of ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq364.gif), as vectors in ![ $${\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq365.gif) are k-tuples of the form ![ $$\\left \({\\alpha }_{{i}_{1}},\\ldots ,{\\alpha }_{{i}_{k}}\\right \)$$ ](A81414_1_En_1_Chapter_IEq366.gif). Thus, there is a subtle difference between writing ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq367.gif) as a product or direct sum. Example 1.10.19. Another very interesting decomposition is that of separating functions into odd and even parts. Recall that a function f : ℝ -> ℝ is said to be odd, respectively even, if ![ $$f\\left \(-t\\right \) = -f\\left \(t\\right \),$$ ](A81414_1_En_1_Chapter_IEq368.gif) respectively, ![ $$f\\left \(-t\\right \) = f\\left \(t\\right \)$$ ](A81414_1_En_1_Chapter_IEq369.gif). Note that constant functions are even, while functions whose graphs are lines through the origin are odd. We denote the subsets of odd and even functions by Func odd (ℝ, ℝ) and Func ev (ℝ, ℝ). It is easily seen that these subsets are subspaces. Also, Func odd (ℝ, ℝ) ∩ Func ev (ℝ, ℝ) = 0 since only the zero function can be both odd and even.
Finally, any f ∈ Func(ℝ, ℝ) can be decomposed as follows: ![ $$\\begin{array}{rcl} f\\left \(t\\right \)& =& {f}_{\\mathrm{ev}}\\left \(t\\right \) + {f}_{\\mathrm{odd}}\\left \(t\\right \), \\\\ {f}_{\\mathrm{ev}}\\left \(t\\right \)& =& \\frac{f\\left \(t\\right \) + f\\left \(-t\\right \)} {2} , \\\\ {f}_{\\mathrm{odd}}\\left \(t\\right \)& =& \\frac{f\\left \(t\\right \) - f\\left \(-t\\right \)} {2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ89.gif) A specific example of such a decomposition is ![ $$\\begin{array}{rcl}{ \\mathrm{e}}^{t}& =& \\cosh \\left \(t\\right \) +\\sinh \\left \(t\\right \), \\\\ \\cosh \\left \(t\\right \)& =& \\frac{{\\mathrm{e}}^{t} +{ \\mathrm{e}}^{-t}} {2} , \\\\ \\sinh \\left \(t\\right \)& =& \\frac{{\\mathrm{e}}^{t} -{\\mathrm{e}}^{-t}} {2}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ90.gif) If we consider complex-valued functions Func(ℝ, ℂ), we still have the same concepts of even and odd and also the desired direct sum decomposition. Here, another similar and very interesting decomposition is Euler's formula ![ $$\\begin{array}{rcl}{ \\mathrm{e}}^{it}& =& \\cos \\left \(t\\right \) + i\\sin \\left \(t\\right \) \\\\ \\cos \\left \(t\\right \)& =& \\frac{{\\mathrm{e}}^{it} +{ \\mathrm{e}}^{-it}} {2} , \\\\ \\sin \\left \(t\\right \)& =& \\frac{{\\mathrm{e}}^{it} -{\\mathrm{e}}^{-it}} {2i}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ91.gif) Some interesting questions come to mind with the definitions encountered here. What is the relationship between ![ $${\\dim }_{\\mathbb{F}}M$$ ](A81414_1_En_1_Chapter_IEq370.gif) and ![ $${\\dim }_{\\mathbb{F}}V$$ ](A81414_1_En_1_Chapter_IEq371.gif) for a subspace M ⊂ V ? Do all subspaces have a complement? How are subspaces and linear maps related? At this point, we can show that subspaces of finite-dimensional vector spaces do have complements. This result is central to almost all of the subsequent developments in this chapter. 
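The even/odd splitting is easy to check numerically. The following sketch (using NumPy; this code is an illustration, not part of the original text) verifies that the two parts recombine to f and that, for f(t) = eᵗ, they agree with cosh and sinh:

```python
import numpy as np

def even_part(f, t):
    # f_ev(t) = (f(t) + f(-t)) / 2
    return (f(t) + f(-t)) / 2

def odd_part(f, t):
    # f_odd(t) = (f(t) - f(-t)) / 2
    return (f(t) - f(-t)) / 2

t = np.linspace(-2.0, 2.0, 101)
f = np.exp  # the example from the text: e^t = cosh(t) + sinh(t)

# The two parts recombine to f ...
assert np.allclose(even_part(f, t) + odd_part(f, t), f(t))
# ... and agree with the hyperbolic functions.
assert np.allclose(even_part(f, t), np.cosh(t))
assert np.allclose(odd_part(f, t), np.sinh(t))
```

The same two-line formulas work verbatim for complex-valued functions, where f(t) = e^{it} produces cos and sin as in Euler's formula.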
As such, it is worth noting that the result is so basic that it does not even depend on the concept of dimension. It is also noteworthy that it gives us a stronger conclusion than stated here (see Corollary 1.12.6). Theorem 1.10.20. (Existence of Complements) Let M ⊂ V be a subspace and assume that V = span x 1 ,...,x n . If M≠V, then it is possible to choose ![ $${x}_{{i}_{1}},\\ldots.,{x}_{{i}_{k}} \\in \\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq372.gif) such that ![ $$V = M \\oplus \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}$$ ](A81414_1_En_1_Chapter_Equft.gif) Proof. Successively choose ![ $${x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}$$ ](A81414_1_En_1_Chapter_IEq373.gif) such that ![ $$\\begin{array}{rcl} {x}_{{i}_{1}}& \\notin & M, \\\\ {x}_{{i}_{2}}& \\notin & M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}}\\right \\}, \\\\ \\vdots& & \\vdots \\\\ {x}_{{i}_{k}}& \\notin & M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k-1}}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ92.gif) This process can be continued until ![ $$V = M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots.,{x}_{{i}_{k}}\\right \\},$$ ](A81414_1_En_1_Chapter_Equfu.gif) and since ![ $$\\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\} = V,$$ ](A81414_1_En_1_Chapter_Equfv.gif) we know that this will happen for some k ≤ n. 
It now only remains to be seen that ![ $$\\left \\{0\\right \\} = M \\cap \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equfw.gif) To check this, suppose that ![ $$x \\in M \\cap \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}$$ ](A81414_1_En_1_Chapter_Equfx.gif) and write ![ $$x = {\\alpha }_{{i}_{1}}{x}_{{i}_{1}} + \\cdots+ {\\alpha }_{{i}_{k}}{x}_{{i}_{k}} \\in M.$$ ](A81414_1_En_1_Chapter_Equfy.gif) If ![ $${\\alpha }_{{i}_{1}} = \\cdots= {\\alpha }_{{i}_{k}} = 0,$$ ](A81414_1_En_1_Chapter_IEq374.gif) there is nothing to worry about. Otherwise, we can find the largest l so that ![ $${\\alpha }_{{i}_{l}}\\neq 0$$ ](A81414_1_En_1_Chapter_IEq375.gif). Then, ![ $$\\frac{1} {{\\alpha }_{{i}_{l}}}x = \\frac{{\\alpha }_{{i}_{1}}} {{\\alpha }_{{i}_{l}}} {x}_{{i}_{1}} + \\cdots+ \\frac{{\\alpha }_{{i}_{l-1}}} {{\\alpha }_{{i}_{l}}} {x}_{{i}_{l-1}} + {x}_{{i}_{l}} \\in M$$ ](A81414_1_En_1_Chapter_Equfz.gif) which implies the contradictory statement that ![ $${x}_{{i}_{l}} \\in M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{l-1}}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equga.gif) If we use Corollary 1.10.14, then we see that this theorem shows that dimM ≤ dimV as long as we know that both M and V are finite-dimensional. Thus, the important point lies in showing that M is finite-dimensional. We will establish this in the next section. ### 1.10.1 Exercises 1. Show that the subset of linear maps L : ℝ³ -> ℝ² defined by ![ $$S = \\left \\{L : {\\mathbb{R}}^{3} \\rightarrow{\\mathbb{R}}^{2} : L\\left \(1,2,3\\right \) = 0,\\text{ }\\left \(2,3\\right \) = L\\left \(x\\right \)\\text{ for some }x \\in{\\mathbb{R}}^{3}\\right \\}$$ ](A81414_1_En_1_Chapter_Equgb.gif) is not a subspace of Hom(ℝ³, ℝ²). 2. Find a one-dimensional complex subspace M ⊂ ℂ² such that ℝ² ∩ M = 0. 3. Let L : V -> W be a linear map and N ⊂ W a subspace. 
Show that ![ $${L}^{-1}\\left \(N\\right \) = \\left \\{x \\in V : L\\left \(x\\right \) \\in N\\right \\}$$ ](A81414_1_En_1_Chapter_Equgc.gif) is a subspace of V. 4. Is it true that subspaces satisfy the distributive law ![ $$M \\cap \\left \({N}_{1} + {N}_{2}\\right \) = M \\cap{N}_{1} + M \\cap{N}_{2}?$$ ](A81414_1_En_1_Chapter_Equgd.gif) If not, give a counterexample. 5. Show that if V is finite-dimensional, then Hom(V, V) is a direct sum of the two subspaces M = span{1 V } and ![ $$N = \\left \\{L :\\mathrm{ tr}L = 0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq376.gif). 6. Show that Mat n×n (ℝ) is the direct sum of the following three subspaces (you also have to show that they are subspaces): ![ $$\\begin{array}{rcl} I& =& \\mathrm{span}\\left \\{{1}_{{\\mathbb{R}}^{n}}\\right \\}, \\\\ {S}_{0}& =& \\left \\{A :\\mathrm{ tr}A = 0\\mathrm{ and }{A}^{t} = A\\right \\}, \\\\ A& =& \\left \\{A : {A}^{t} = -A\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ93.gif) (A^t is defined in Exercise 14 in Sect. 1.8.) 7. Let V be a vector space over a field ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq377.gif) of characteristic zero. Let M 1 ,..., M k ⊊ V be proper subspaces of a finite-dimensional vector space and N ⊂ V a subspace. Show that if N ⊂ M 1 ∪ ⋯ ∪ M k , then N ⊂ M i for some i. Conclude that, if N is not contained in any of the M i , then we can find x ∈ N such that x ∉ M 1 ,..., x ∉ M k . Hint: Do the case where k = 2 first. 8. An affine subspace A ⊂ V of a vector space is a subset such that affine linear combinations of vectors in A lie in A, i.e., if ![ $${\\alpha }_{1} + \\cdots+ {\\alpha }_{n} = 1$$ ](A81414_1_En_1_Chapter_IEq378.gif) and x 1,..., x n ∈ A, then ![ $${\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{n}{x}_{n} \\in A$$ ](A81414_1_En_1_Chapter_IEq379.gif). 
(a) Show that A is an affine subspace if and only if there is a point x 0 ∈ V and a subspace M ⊂ V such that ![ $$A = {x}_{0} + M = \\left \\{{x}_{0} + x : x \\in M\\right \\}.$$ ](A81414_1_En_1_Chapter_Equge.gif) (b) Show that A is an affine subspace if and only if there is a subspace M ⊂ V with the properties: (1) if x, y ∈ A, then x − y ∈ M and (2) if x ∈ A and z ∈ M, then x + z ∈ A. (c) Show that the subspaces constructed in parts (a) and (b) are equal. (d) Show that the set of monic polynomials of degree n in P n , i.e., those whose coefficient in front of t^n is 1, is an affine subspace with ![ $$M = {P}_{n-1}$$ ](A81414_1_En_1_Chapter_IEq380.gif). 9. Show that the two spaces below are subspaces of C^∞_{2π}(ℝ, ℝ) that are not equal to each other: ![ $$\\begin{array}{rcl}{ V }_{1}& =& \\left \\{{b}_{1}\\sin \\left \(t\\right \) + {b}_{2}\\sin \\left \(2t\\right \) + {b}_{3}\\sin \\left \(3t\\right \) : {b}_{1},{b}_{2},{b}_{3} \\in\\mathbb{R}\\right \\}, \\\\ {V }_{2}& =& \\left \\{{b}_{1}\\sin \\left \(t\\right \) + {b{}_{2}\\sin }^{2}\\left \(t\\right \) + {b{}_{ 3}\\sin }^{3}\\left \(t\\right \) : {b}_{ 1},{b}_{2},{b}_{3} \\in\\mathbb{R}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ94.gif) What is their intersection? 10. Show that if M ⊂ V and N ⊂ W are subspaces, then M ×N ⊂ V ×W is also a subspace. 11. If ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq381.gif) has trA = 0, show that ![ $$A = {A}_{1}{B}_{1} - {B}_{1}{A}_{1} + \\cdots+ {A}_{m}{B}_{m} - {B}_{m}{A}_{m}$$ ](A81414_1_En_1_Chapter_Equgf.gif) for suitable ![ $${A}_{i},{B}_{i} \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq382.gif). Hint: Show that ![ $$M =\\mathrm{ span}\\left \\{XY - Y X : X,Y \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)\\right \\}$$ ](A81414_1_En_1_Chapter_Equgg.gif) has dimension n² − 1 by exhibiting a suitable basis. 12. 
Let L : V -> W be a linear map and consider the graph ![ $${G}_{L} = \\left \\{\\left \(x,L\\left \(x\\right \)\\right \) : x \\in V \\right \\} \\subset V \\times W.$$ ](A81414_1_En_1_Chapter_Equgh.gif) (a) Show that G L is a subspace. (b) Show that the map V -> G L that sends x to ![ $$\\left \(x,L\\left \(x\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_IEq383.gif) is an isomorphism. (c) Show that L is one-to-one if and only if the projection P W : V ×W -> W is one-to-one when restricted to G L . (d) Show that L is onto if and only if the projection P W : V ×W -> W is onto when restricted to G L . (e) Show that a subspace N ⊂ V ×W is the graph of a linear map K : V -> W if and only if the projection P V : V ×W -> V is an isomorphism when restricted to N. (f) Show that a subspace N ⊂ V ×W is the graph of a linear map K : V -> W if and only if V ×W = N ⊕ 0 ×W. ## 1.11 Linear Maps and Subspaces Linear maps generate a lot of interesting subspaces and can also be used to understand certain important aspects of subspaces. Conversely, the subspaces associated to a linear map give us crucial information as to whether the map is one-to-one or onto. Definition 1.11.1. Let L : V -> W be a linear map between vector spaces. The kernel or nullspace of L is ![ $$\\ker \\left \(L\\right \) =\\mathrm{ N}\\left \(L\\right \) = {L}^{-1}\\left \(0\\right \) = \\left \\{x \\in V : L\\left \(x\\right \) = 0\\right \\}.$$ ](A81414_1_En_1_Chapter_Equgi.gif) The image or range of L is ![ $$\\mathrm{im}\\left \(L\\right \) =\\mathrm{ R}\\left \(L\\right \) = L\\left \(V \\right \) = \\left \\{y \\in W : y = L\\left \(x\\right \)\\mathrm{ for some }x \\in V \\right \\}.$$ ](A81414_1_En_1_Chapter_Equgj.gif) Both of these spaces are subspaces. Lemma 1.11.2. ker L is a subspace of V and im L is a subspace of W. Proof. 
Assume that ![ $${\\alpha }_{1},{\\alpha }_{2} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq384.gif) and that x 1 , x 2 ∈ kerL. Then, ![ $$L\\left \({\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\right \) = {\\alpha }_{1}L\\left \({x}_{1}\\right \) + {\\alpha }_{2}L\\left \({x}_{2}\\right \) = 0.$$ ](A81414_1_En_1_Chapter_Equgk.gif) More generally, if we only assume x 1 , x 2 ∈ V , then we have ![ $${\\alpha }_{1}L\\left \({x}_{1}\\right \) + {\\alpha }_{2}L\\left \({x}_{2}\\right \) = L\\left \({\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\right \) \\in \\mathrm{ im}\\left \(L\\right \).$$ ](A81414_1_En_1_Chapter_Equgl.gif) This proves the claim. The same proof shows that L(M) = {L(x) : x ∈ M} is a subspace of W when M is a subspace of V. Lemma 1.11.3. L is one-to-one if and only if ker L = 0. Proof. We know that ![ $$L\\left \(0 \\cdot0\\right \) = 0 \\cdot L \\left \(0\\right \) = 0,$$ ](A81414_1_En_1_Chapter_IEq385.gif) so if L is one-to-one, we have that ![ $$L\\left \(x\\right \) = 0 = L\\left \(0\\right \)$$ ](A81414_1_En_1_Chapter_IEq386.gif) implies that x = 0. Hence, kerL = 0. Conversely, assume that kerL = 0. If Lx 1 = Lx 2 , then linearity of L tells us that ![ $$L\\left \({x}_{1} - {x}_{2}\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq387.gif). Then, kerL = 0 implies ![ $${x}_{1} - {x}_{2} = 0,$$ ](A81414_1_En_1_Chapter_IEq388.gif) which shows that x 1 = x 2 . If we have a direct sum decomposition V = M ⊕ N, then we can construct what is called the projection of V onto M along N. Definition 1.11.4. The map E : V -> V is defined as follows. For z ∈ V , we write ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq389.gif) for unique x ∈ M, y ∈ N and define ![ $$E\\left \(z\\right \) = x.$$ ](A81414_1_En_1_Chapter_Equgm.gif) Thus, imE = M and kerE = N. Note that ![ $$\\left \({1}_{V } - E\\right \)\\left \(z\\right \) = z - x = y.$$ ](A81414_1_En_1_Chapter_Equgn.gif) This means that 1 V − E is the projection of V onto N along M. 
So the decomposition V = M ⊕ N gives us a similar resolution of 1 V using these two projections: ![ $${1}_{V } = E + \\left \({1}_{V } - E\\right \)$$ ](A81414_1_En_1_Chapter_IEq390.gif). Using all of the examples of direct sum decompositions, we get several examples of projections. Note that each projection E onto M leads in a natural way to a linear map P : V -> M. This map has the same definition ![ $$P\\left \(z\\right \) = P\\left \(x + y\\right \) = x$$ ](A81414_1_En_1_Chapter_IEq391.gif), but it is not E, as it is not defined as an operator V -> V. It is perhaps pedantic to insist on having different names, but note that, as it stands, we are not allowed to compose P with itself, as it does not map into V. We are now ready to establish several extremely important results relating linear maps, subspaces, and dimensions. Recall that complements to a fixed subspace are usually not unique; however, they do have the same dimension, as the next result shows. Lemma 1.11.5. (Uniqueness of Complements) If ![ $$V = {M}_{1} \\oplus N = {M}_{2} \\oplus N,$$ ](A81414_1_En_1_Chapter_IEq392.gif) then M 1 and M 2 are isomorphic. Proof. Let P : V -> M 2 be the projection whose kernel is N. We contend that the map ![ $$P{\\vert }_{{M}_{1}} : {M}_{1} \\rightarrow{M}_{2}$$ ](A81414_1_En_1_Chapter_IEq393.gif) is an isomorphism. The kernel can be computed as ![ $$\\begin{array}{rcl} \\ker \\left \(P{\\vert }_{{M}_{1}}\\right \)& =& \\left \\{x \\in{M}_{1} : P\\left \(x\\right \) = 0\\right \\} \\\\ & =& \\left \\{x \\in V : P\\left \(x\\right \) = 0\\right \\} \\cap{M}_{1} \\\\ & =& N \\cap{M}_{1} \\\\ & =& \\left \\{0\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ95.gif) Thus, ![ $$P{\\vert }_{{M}_{1}}$$ ](A81414_1_En_1_Chapter_IEq394.gif) is one-to-one by Lemma 1.11.3. To check that the map is onto, select x 2 ∈ M 2. Next, write ![ $${x}_{2} = {x}_{1} + {y}_{1}$$ ](A81414_1_En_1_Chapter_IEq395.gif), where x 1 ∈ M 1 and y 1 ∈ N. 
Then, ![ $$\\begin{array}{rcl}{ x}_{2}& =& P\\left \({x}_{2}\\right \) \\\\ & =& P\\left \({x}_{1} + {y}_{1}\\right \) \\\\ & =& P\\left \({x}_{1}\\right \) + P\\left \({y}_{1}\\right \) \\\\ & =& P\\left \({x}_{1}\\right \) \\\\ & =& P{\\vert }_{{M}_{1}}\\left \({x}_{1}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ96.gif) This establishes the claim. Theorem 1.11.6. (The Subspace Theorem) Assume that V is finite-dimensional and that M ⊂ V is a subspace. Then, M is finite-dimensional and ![ $${\\dim }_{\\mathbb{F}}M {\\leq \\dim }_{\\mathbb{F}}V.$$ ](A81414_1_En_1_Chapter_Equgo.gif) Moreover, if V = M ⊕ N, then ![ $${\\dim }_{\\mathbb{F}}V {=\\dim }_{\\mathbb{F}}M {+\\dim }_{\\mathbb{F}}N.$$ ](A81414_1_En_1_Chapter_Equgp.gif) Proof. If M = V , we are finished. Otherwise, select a basis x 1,..., x m for V and use Theorem 1.10.20 to extract a complement to M in V ![ $$\\begin{array}{rcl} V & =& M \\oplus \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ97.gif) On the other hand, we also know that ![ $$V =\\mathrm{ span}\\left \\{{x}_{{j}_{1}},\\ldots ,{x}_{{j}_{l}}\\right \\} \\oplus \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\},$$ ](A81414_1_En_1_Chapter_Equgq.gif) where ![ $$k + l = m$$ ](A81414_1_En_1_Chapter_IEq396.gif) and ![ $$\\left \\{1,\\ldots ,m\\right \\} = \\left \\{{j}_{1},\\ldots ,{j}_{l}\\right \\} \\cup \\left \\{{i}_{1},\\ldots ,{i}_{k}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equgr.gif) Lemma 1.11.5 then shows that M and ![ $$\\mathrm{span}\\left \\{{x}_{{j}_{1}},\\ldots ,{x}_{{j}_{l}}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq397.gif) are isomorphic. Thus, ![ $${\\dim }_{\\mathbb{F}}M = l < m.$$ ](A81414_1_En_1_Chapter_Equgs.gif) In addition, we see that if V = M ⊕ N, then Lemma 1.11.5 also shows that ![ $${\\dim }_{\\mathbb{F}}N = k.$$ ](A81414_1_En_1_Chapter_Equgt.gif) This proves the theorem. Theorem 1.11.7. 
(The Dimension Formula) Let V be finite-dimensional and L : V -> W a linear map, then im L is finite-dimensional and ![ $${\\dim }_{\\mathbb{F}}V {=\\dim }_{\\mathbb{F}}\\ker \\left \(L\\right \) {+\\dim }_{\\mathbb{F}}\\mathrm{im}\\left \(L\\right \).$$ ](A81414_1_En_1_Chapter_Equgu.gif) Proof. We know that ![ $${\\dim }_{\\mathbb{F}}\\ker \\left \(L\\right \) {\\leq \\dim }_{\\mathbb{F}}V$$ ](A81414_1_En_1_Chapter_IEq398.gif) and that it has a complement N ⊂ V of dimension ![ $$k {=\\dim }_{\\mathbb{F}}V {-\\dim }_{\\mathbb{F}}\\ker \\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq399.gif). Since N ∩ kerL = 0, the linear map L must be one-to-one when restricted to N. Thus, L | N : N -> imL is an isomorphism. This proves the theorem. Definition 1.11.8. The number ![ $$\\mathrm{nullity}\\left \(L\\right \) {=\\dim }_{\\mathbb{F}}\\ker \\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq400.gif) is called the nullity of L, and ![ $$\\mathrm{rank}\\left \(L\\right \) {=\\dim }_{\\mathbb{F}}\\mathrm{im}\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq401.gif) is known as the rank of L. Corollary 1.11.9. If M is a subspace of V and ![ $${\\dim }_{\\mathbb{F}}M {=\\dim }_{\\mathbb{F}}V = n < \\infty $$ ](A81414_1_En_1_Chapter_IEq402.gif) , then M = V. Proof. If M≠V , there must be a complement of dimension > 0. This gives us a contradiction with the subspace theorem. Corollary 1.11.10. Assume that L : V -> W and ![ $${\\dim }_{\\mathbb{F}}V {=\\dim }_{\\mathbb{F}}W < \\infty $$ ](A81414_1_En_1_Chapter_IEq403.gif) . Then, L is an isomorphism if either nullity L = 0 or rank L = dim W. Proof. The dimension theorem shows that if either nullity![ $$\\left \(L\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq404.gif) or rankL = dimW, then also rankL = dimV or nullity![ $$\\left \(L\\right \) = 0$$ ](A81414_1_En_1_Chapter_IEq405.gif), thus showing that L is an isomorphism. 
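The dimension formula can be spot-checked numerically. Below is a small sketch (using NumPy; the matrix is an illustrative choice, not from the text) that treats a 3 × 4 matrix as a map L : ℝ⁴ → ℝ³ and verifies dim V = nullity(L) + rank(L), extracting a kernel basis from the singular value decomposition:

```python
import numpy as np

# L : R^4 -> R^3 given by a 3x4 matrix; rank(L) = dim im(L),
# nullity(L) = dim ker(L).
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],   # a multiple of the first row
              [0., 1., 1., 0.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # dimension formula: dim V = nullity + rank

assert rank == 2
assert nullity == 2

# A basis for ker(A): the right singular vectors belonging to the
# zero singular values span the kernel.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:]         # nullity-many rows, each in R^4
assert np.allclose(A @ kernel_basis.T, 0)
```

This mirrors the proof: the rows of `kernel_basis` span ker L, and any complement of the kernel maps isomorphically onto the image.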
Knowing that the vector spaces are abstractly isomorphic can therefore help us in checking when a given linear map might be an isomorphism. Many of these results are not true in infinite-dimensional spaces. The differentiation operator D : C^∞(ℝ, ℝ) -> C^∞(ℝ, ℝ) is onto and has a kernel consisting of all constant functions. The multiplication operator T : C^∞(ℝ, ℝ) -> C^∞(ℝ, ℝ), on the other hand, is one-to-one but not onto, since (Tf)(0) = 0 for all f ∈ C^∞(ℝ, ℝ). Corollary 1.11.11. If L : V -> W is a linear map between finite-dimensional spaces, then we can find bases e 1 ,...,e m for V and f 1 ,...,f n for W so that ![ $$\\begin{array}{rcl} L\\left \({e}_{1}\\right \)& =& {f}_{1}, \\\\ \\vdots& & \\vdots \\\\ L\\left \({e}_{k}\\right \)& =& {f}_{k}, \\\\ L\\left \({e}_{k+1}\\right \)& =& 0, \\\\ \\vdots& & \\vdots \\\\ L\\left \({e}_{m}\\right \)& =& 0, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ98.gif) where k = rank L. Proof. Simply decompose V = kerL ⊕ M. Then, choose a basis e 1,..., e k for M and a basis e k + 1,..., e m for kerL. Combining these two bases gives us a basis for V. Then, define f 1 = Le 1,..., f k = Le k . Since L | M : M -> imL is an isomorphism, this implies that f 1,..., f k form a basis for imL. We then get the desired basis for W by letting f k + 1,..., f n be a basis for a complement to imL in W. While this certainly gives the nicest possible matrix representation for L, it is not very useful. The complete freedom one has in the choice of both bases somehow also means that aside from the rank, no other information is encoded in the matrix. The real goal will be to find the best matrix for a linear operator L : V -> V with respect to one basis. In the general situation L : V -> W, we will have something more to say in case V and W are inner product spaces and the bases are orthonormal (see Sects. 4.8, 4.9, and 4.10). 
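The projection construction from Definition 1.11.4 can be made concrete. A minimal sketch (NumPy, not part of the text), using the earlier decomposition of ℝ² into the x-axis M and the diagonal N: since (x, y) = (x − y, 0) + (y, y), the projection onto M along N is E(x, y) = (x − y, 0), and it is idempotent:

```python
import numpy as np

# Projection of R^2 = M ⊕ N onto M along N, with M the x-axis and
# N the diagonal. From (x, y) = (x - y, 0) + (y, y) we read off
# E(x, y) = (x - y, 0), i.e. the matrix below.
E = np.array([[1., -1.],
              [0.,  0.]])

# E is idempotent: E^2 = E, the algebraic hallmark of a projection.
assert np.allclose(E @ E, E)

z = np.array([3., 1.])
x = E @ z          # component in M
y = z - x          # component in N, i.e. (1_V - E)(z)
assert np.allclose(x, [2., 0.])  # lies on the x-axis
assert np.allclose(y, [1., 1.])  # lies on the diagonal
```

Note how 1_V − E (here `z - E @ z`) is again a projection, onto N along M, exactly as in the resolution 1_V = E + (1_V − E).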
Finally, it is worth mentioning that projections as a class of linear operators on V can be characterized in a surprisingly simple manner. Theorem 1.11.12. (Characterization of Projections) Projections all satisfy the functional relationship E² = E. Conversely, any E : V -> V that satisfies E² = E is a projection. Proof. First assume that E is the projection onto M along N coming from V = M ⊕ N. If ![ $$z = x + y \\in M \\oplus N,$$ ](A81414_1_En_1_Chapter_IEq406.gif) then ![ $$\\begin{array}{rcl}{ E}^{2}\\left \(z\\right \)& =& E\\left \(E\\left \(z\\right \)\\right \) \\\\ & =& E\\left \(x\\right \) \\\\ & =& x \\\\ & =& E\\left \(z\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ99.gif) Conversely, assume that E² = E. Then E(x) = x provided x ∈ imE. Thus, we have ![ $$\\begin{array}{rcl} \\mathrm{im}\\left \(E\\right \) \\cap \\ker \\left \(E\\right \)& =& \\left \\{0\\right \\},\\text{ and} \\\\ \\mathrm{im}\\left \(E\\right \) +\\ker \\left \(E\\right \)& =& \\mathrm{im}\\left \(E\\right \) \\oplus \\ker \\left \(E\\right \) \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ100.gif). From the dimension theorem, we also have that ![ $$\\dim \\left \(\\mathrm{im}\\left \(E\\right \)\\right \) +\\dim \\left \(\\ker \\left \(E\\right \)\\right \) =\\dim \\left \(V \\right \).$$ ](A81414_1_En_1_Chapter_Equgv.gif) This shows that imE + kerE is a subspace of dimension dimV and hence all of V. Finally, if we write ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq407.gif), x ∈ imE and y ∈ kerE, then ![ $$E\\left \(x + y\\right \) = E\\left \(x\\right \) = x,$$ ](A81414_1_En_1_Chapter_IEq408.gif) so E is the projection onto imE along kerE. In this way, we have shown that there is a natural identification between direct sum decompositions and projections, i.e., maps satisfying E² = E. ### 1.11.1 Exercises 1. Let L, K : V -> V be linear maps that satisfy L ∘ K = 1 V . Show that (a) If V is finite-dimensional, then K ∘ L = 1 V . 
(b) If V is infinite-dimensional, give an example where K ∘ L≠1 V . 2. Let M ⊂ V be a k-dimensional subspace of an n-dimensional vector space. Show that any isomorphism ![ $$L : M \\rightarrow{\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq409.gif) can be extended to an isomorphism ![ $$\\hat{L} : V \\rightarrow{\\mathbb{F}}^{n},$$ ](A81414_1_En_1_Chapter_IEq410.gif) such that ![ $$\\hat{L}{\\vert }_{M} = L$$ ](A81414_1_En_1_Chapter_IEq411.gif). Here we have identified ![ $${\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq412.gif) with the subspace in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq413.gif) where the last n − k coordinates are zero. 3. Let L : V -> V be a linear operator on a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq414.gif). Show that (a) If K : V -> V commutes with L, i.e., K ∘ L = L ∘ K, then kerK ⊂ V is an L-invariant subspace. (b) If ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq415.gif), then ker p(L) ⊂ V is an L-invariant subspace. 4. Let L : V -> W be a linear map. (a) If L has rank k, show that it can be factored through ![ $${\\mathbb{F}}^{k},$$ ](A81414_1_En_1_Chapter_IEq416.gif) i.e., we can find ![ $${K}_{1} : V \\rightarrow{\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq417.gif) and ![ $${K}_{2} : {\\mathbb{F}}^{k} \\rightarrow W$$ ](A81414_1_En_1_Chapter_IEq418.gif) such that L = K 2 K 1. (b) Show that any matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq419.gif) of rank k can be factored A = BC, where ![ $$B \\in \\mathrm{{ Mat}}_{n\\times k}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq420.gif) and ![ $$C \\in \\mathrm{{ Mat}}_{k\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq421.gif). 
(c) Conclude that any rank 1 matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq422.gif) looks like ![ $$A = \\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n}\\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{1} & \\cdots &{\\beta }_{m}\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equgw.gif) 5. Assume L 1 : V 1 -> V 2 and L 2 : V 2 -> V 3 are linear maps between finite-dimensional vector spaces. Show: (a) im(L 2 ∘ L 1 ) ⊂ imL 2. In particular, if L 2 ∘ L 1 is onto, then so is L 2. (b) kerL 1 ⊂ ker(L 2 ∘ L 1 ). In particular, if L 2 ∘ L 1 is one-to-one, then so is L 1. (c) Give an example where L 2 ∘ L 1 is an isomorphism but L 1 and L 2 are not. (d) What happens in (c) if we assume that the vector spaces all have the same dimension? (e) (Sylvester's rank inequality) Show that ![ $$\\begin{array}{rcl} \\mathrm{rank}\\left \({L}_{1}\\right \) +\\mathrm{ rank}\\left \({L}_{2}\\right \) -\\dim \\left \({V }_{2}\\right \)& \\leq &\\mathrm{rank}\\left \({L}_{2} \\circ{L}_{1}\\right \) \\\\ & \\leq &\\min \\left \\{\\mathrm{rank}\\left \({L}_{1}\\right \),\\mathrm{rank}\\left \({L}_{2}\\right \)\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ101.gif) (f) Show that ![ $$\\begin{array}{rcl} \\dim \\left \(\\ker {L}_{2} \\circ{L}_{1}\\right \)& \\leq &\\dim \\left \(\\ker {L}_{1}\\right \) +\\dim \\left \(\\ker {L}_{2}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ102.gif) 6. Let L : V -> V be a linear operator on a finite-dimensional vector space. (a) Show that L = λ1 V if and only if L(x) ∈ span{x} for all x ∈ V. (b) Show that L = λ1 V if and only if L ∘ K = K ∘ L for all K ∈ Hom(V, V). (c) Show that L = λ1 V if and only if L ∘ K = K ∘ L for all isomorphisms K : V -> V. 7. Show that two 2-dimensional subspaces of a 3-dimensional vector space must have a nontrivial intersection. 8. 
(Dimension formula for subspaces) Let M 1, M 2 ⊂ V be subspaces of a finite-dimensional vector space. Show that ![ $$\\dim \\left \({M}_{1} \\cap{M}_{2}\\right \) +\\dim \\left \({M}_{1} + {M}_{2}\\right \) =\\dim \\left \({M}_{1}\\right \) +\\dim \\left \({M}_{2}\\right \).$$ ](A81414_1_En_1_Chapter_Equgx.gif) Conclude that if M 1 and M 2 are transverse, then M 1 ∩ M 2 has the "expected" dimension ![ $$\\left \(\\dim \\left \({M}_{1}\\right \) +\\dim \\left \({M}_{2}\\right \)\\right \) -\\dim V$$ ](A81414_1_En_1_Chapter_IEq423.gif). Hint: Use the dimension formula on the linear map L : M 1 ×M 2 -> V defined by ![ $$L\\left \({x}_{1},{x}_{2}\\right \) = {x}_{1} - {x}_{2}$$ ](A81414_1_En_1_Chapter_IEq424.gif). Alternatively, select a suitable basis for M 1 + M 2 by starting with a basis for M 1 ∩ M 2. 9. Let M ⊂ V be a subspace and V , W finite-dimensional vector spaces. Show that the subset of ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)$$ ](A81414_1_En_1_Chapter_IEq425.gif) consisting of maps that vanish on M, i.e., L | M = 0, is a subspace of dimension ![ $${\\dim }_{\\mathbb{F}}W \\cdot \\left \({\\dim }_{\\mathbb{F}}V {-\\dim }_{\\mathbb{F}}M\\right \)$$ ](A81414_1_En_1_Chapter_IEq426.gif). 10. We say that a linear map L : V -> V is reduced by a direct sum decomposition V = M ⊕ N if both M and N are invariant under L and neither subspace is a trivial subspace. We also say that L : V -> V is decomposable if we can find a nontrivial decomposition that reduces L : V -> V. (a) Show that for L = ![ $$\\left \[\\begin{array}{cc} 0&1\\\\ 0 &0\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_IEq427.gif) with ![ $$M =\\ker \\left \(L\\right \) =\\mathrm{ im}\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq428.gif), it is not possible to find N such that V = M ⊕ N reduces L. (b) Show more generally that one cannot find a nontrivial decomposition that reduces L. 11. Let L : V -> V be a linear transformation and M ⊂ V a subspace. 
Show: (a) If E is a projection onto M and ELE = LE, then M is invariant under L. (b) If M is invariant under L, then ELE = LE for all projections onto M. (c) If V = M ⊕ N and E is the projection onto M along N, then M ⊕ N reduces (see previous exercise) L if and only if EL = LE. 12. Assume V = M ⊕ N. (a) Show that any linear map L : V -> V has a 2 ×2 matrix type decomposition ![ $$\\left \[\\begin{array}{cc} A&B\\\\ C &D\\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equgy.gif) where A : M -> M, B : N -> M, C : M -> N, D : N -> N. (b) Show that the projection onto M along N looks like ![ $$E = {1}_{M}\\oplus {0}_{N} = \\left \[\\begin{array}{cc} {1}_{M}& 0 \\\\ 0 &{0}_{N}\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equgz.gif) (c) Show that if LM ⊂ M, then C = 0. (d) Show that if LM ⊂ M and LN ⊂ N, then B = 0 and C = 0. In this case, L is reduced by M ⊕ N, and we write ![ $$\\begin{array}{rcl} L& =& A \\oplus D \\\\ & =& L{\\vert }_{M} \\oplus L{\\vert }_{N}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ103.gif) 13. Let M 1, M 2 ⊂ V be subspaces of a finite-dimensional vector space. Show that (a) If M 1 ∩ M 2 = 0 and dimM 1 + dimM 2 ≥ dimV, then V = M 1 ⊕ M 2. (b) If ![ $${M}_{1} + {M}_{2} = V$$ ](A81414_1_En_1_Chapter_IEq429.gif) and dimM 1 + dimM 2 ≤ dimV, then V = M 1 ⊕ M 2. 14. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times l}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq430.gif) and consider ![ $${L}_{A} :\\mathrm{{ Mat}}_{l\\times m}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq431.gif) defined by L A X = AX. Find the kernel and image of this map. 15. Let ![ $$0\\frac{{L}_{0}} {\\rightarrow }{V }_{1}\\frac{{L}_{1}} {\\rightarrow }{V }_{2}\\frac{{L}_{2}} {\\rightarrow }\\cdots \\frac{{L}_{n-1}} {\\rightarrow } {V }_{n}\\frac{{L}_{n}} {\\rightarrow } 0$$ ](A81414_1_En_1_Chapter_Equha.gif) be a sequence of linear maps. 
Note that L 0 and L n are both the trivial linear maps with image ![ $$\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq432.gif). Show that ![ $${\\sum\\nolimits }_{i=1}^{n}{\\left \(-1\\right \)}^{i}\\dim {V }_{ i} ={ \\sum\\nolimits }_{i=1}^{n}{\\left \(-1\\right \)}^{i}\\left \(\\dim \\left \(\\ker \\left \({L}_{ i}\\right \)\\right \) -\\dim \\left \(\\mathrm{im}\\left \({L}_{i-1}\\right \)\\right \)\\right \).$$ ](A81414_1_En_1_Chapter_Equhb.gif) Hint: First, try the case where n = 2. 16. Show that the matrix ![ $$\\left \[\\begin{array}{cc} 0&1\\\\ 0 &0\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equhc.gif) as a linear map satisfies kerL = imL. 17. Show that ![ $$\\left \[\\begin{array}{cc} 0 &0\\\\ \\alpha&1\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equhd.gif) defines a projection for all ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq433.gif). Compute the kernel and image. 18. For any integer n > 1, give examples of linear maps L : ℂ n -> ℂ n such that (a) ℂ n = kerL ⊕ imL is a nontrivial direct sum decomposition. (b) ![ $$\\left \\{0\\right \\}\\neq \\ker \\left \(L\\right \) \\subset \\mathrm{ im}\\left \(L\\right \)$$ ](A81414_1_En_1_Chapter_IEq434.gif). 19. For P n ⊂ ℝ[t] and 2n + 1 points a 0 < b 0 < a 1 < b 1 < ⋯ < a n < b n , consider the map L : P n -> ℝ n + 1 defined by ![ $$L\\left \(p\\right \) = \\left \[\\begin{array}{c} \\frac{1} {{b}_{0}-{a}_{0}} { \\int\\nolimits \\nolimits }_{{a}_{0}}^{{b}_{0}}p\\left \(t\\right \)\\mathrm{d}t\\\\ \\vdots \\\\ \\frac{1} {{b}_{n}-{a}_{n}}{ \\int\\nolimits \\nolimits }_{{a}_{n}}^{{b}_{n}}p\\left \(t\\right \)\\mathrm{d}t\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equhe.gif) Show that L is a linear isomorphism. ## 1.12 Linear Independence In this section, we finally come around to studying the concepts of linear dependence and independence as well as how they tie in with kernels and images of linear maps. Definition 1.12.1. Let x 1,..., x m be vectors in a vector space V. 
We say that x 1,..., x m are linearly independent if ![ $${x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m} = 0$$ ](A81414_1_En_1_Chapter_Equhf.gif) implies that ![ $${\\alpha }_{1} = \\cdots= {\\alpha }_{m} = 0.$$ ](A81414_1_En_1_Chapter_Equhg.gif) In other words, if ![ $$L : {\\mathbb{F}}^{m} \\rightarrow V$$ ](A81414_1_En_1_Chapter_IEq435.gif) is the linear map defined by ![ $$L = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_IEq436.gif), then x 1,..., x m are linearly independent if and only if kerL = 0. The image of the map L can be identified with spanx 1,..., x m and is described as ![ $$\\left \\{{x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m} : {\\alpha }_{1},\\ldots ,{\\alpha }_{m} \\in\\mathbb{F}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equhh.gif) Note that x 1,..., x m is a basis precisely when kerL = 0 and V = imL. The notions of kernel and image therefore enter our investigations of dimension in a very natural way. Definition 1.12.2. Conversely, we say that x 1,..., x m are linearly dependent if they are not linearly independent, i.e., we can find ![ $${\\alpha }_{1},\\ldots ,{\\alpha }_{m} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq437.gif) not all zero so that ![ $${x}_{1}{\\alpha }_{1} + \\cdots+ {x}_{m}{\\alpha }_{m} = 0$$ ](A81414_1_En_1_Chapter_IEq438.gif). In the next section, we shall see how Gauss elimination helps us decide when a selection of vectors in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq439.gif) is linearly dependent or independent. We give here a characterization of linear dependence that is quite useful in both concrete and abstract situations. Lemma 1.12.3. (Characterization of Linear Dependence) Let x 1 ,..., x n ∈ V. Then, x 1 ,...,x n are linearly dependent if and only if either x 1 = 0 or we can find a smallest k ≥ 2 such that x k is a linear combination of x 1 ,...,x k−1 . Proof.
First observe that if x 1 = 0, then 1x 1 = 0 is a nontrivial linear combination. Next, if ![ $${x}_{k} = {\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k-1}{x}_{k-1},$$ ](A81414_1_En_1_Chapter_Equhi.gif) then we also have a nontrivial linear combination ![ $${\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k-1}{x}_{k-1} + \\left \(-1\\right \){x}_{k} = 0.$$ ](A81414_1_En_1_Chapter_Equhj.gif) Conversely, assume that x 1,..., x n are linearly dependent. Select a nontrivial linear combination such that ![ $${\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{n}{x}_{n} = 0.$$ ](A81414_1_En_1_Chapter_Equhk.gif) Then, we can pick k so that α k ≠0 and ![ $${\\alpha }_{k+1} = \\cdots= {\\alpha }_{n} = 0$$ ](A81414_1_En_1_Chapter_IEq440.gif). If k = 1, then we must have x 1 = 0 and we are finished. Otherwise, ![ $${x}_{k} = -\\frac{{\\alpha }_{1}} {{\\alpha }_{k}}{x}_{1} -\\cdots-\\frac{{\\alpha }_{k-1}} {{\\alpha }_{k}} {x}_{k-1}.$$ ](A81414_1_En_1_Chapter_Equhl.gif) Thus, the set of integers k with the property that x k is a linear combination of x 1,..., x k − 1 is a nonempty set that contains some integer ≥ 2. Now, simply select the smallest integer in this set to get the desired choice for k. This immediately leads us to the following criterion for linear independence. Corollary 1.12.4. (Characterization of Linear Independence) Let x 1 ,..., x n ∈ V. Then, x 1 ,...,x n are linearly independent if and only if x 1 ≠0 and for each k ≥ 2 ![ $${x}_{k}\\notin \\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{k-1}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equhm.gif) Example 1.12.5. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq441.gif) be an upper triangular matrix with k nonzero entries on the diagonal. We claim that the rank of A is ≥ k. Select the k column vectors x 1,..., x k that correspond to the nonzero diagonal entries from left to right.
Thus, x 1≠0 and ![ $${x}_{l}\\notin \\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{l-1}\\right \\}$$ ](A81414_1_En_1_Chapter_Equhn.gif) since x l has a nonzero entry that lies below all of the nonzero entries for x 1,..., x l − 1. Using the dimension formula (Theorem 1.11.7), we see that dimkerA ≤ n − k. It is possible for A to have rank > k. Consider, e.g., ![ $$A = \\left \[\\begin{array}{ccc} 1&0&0\\\\ 0 &0 &1 \\\\ 0&0&0 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equho.gif) This matrix has rank 2 but only one nonzero entry on the diagonal. Recall from Theorem 1.10.20 that we can choose complements to a subspace by selecting appropriate vectors from a set that spans the vector space. The proof of that result actually supplies us with a bit more information. Corollary 1.12.6. Let M ⊂ V be a subspace and assume that V = span x 1 ,...,x n . If M≠V, then it is possible to select linearly independent ![ $${x}_{{i}_{1}},\\ldots.,{x}_{{i}_{k}} \\in \\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq442.gif) such that ![ $$V = M \\oplus \\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}$$ ](A81414_1_En_1_Chapter_Equhp.gif) Proof. Recall that ![ $${x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}$$ ](A81414_1_En_1_Chapter_IEq443.gif) were selected so that ![ $$\\begin{array}{rcl} {x}_{{i}_{1}}& \\notin & M, \\\\ {x}_{{i}_{2}}& \\notin & M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}}\\right \\}, \\\\ & & \\vdots \\\\ {x}_{{i}_{k}}& \\notin & M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k-1}}\\right \\}, \\\\ V & =& M +\\mathrm{ span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ104.gif) In particular, ![ $${x}_{{i}_{1}}\\neq 0$$ ](A81414_1_En_1_Chapter_IEq444.gif) and ![ $${x}_{{i}_{l}}\\notin \\mathrm{span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{l-1}}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq445.gif) for l = 2,..., k so Corollary 1.12.4 proves the claim. 
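Corollary 1.12.4 can be turned into an effective test: a family of vectors is linearly independent precisely when row reducing the matrix they form leaves as many nonzero rows as there are vectors. The following pure-Python sketch (our own illustration, not part of the text; the function name `rank` and the sample vectors are hypothetical) computes the rank by Gaussian elimination over the rationals:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) via Gaussian
    elimination, using exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0                                        # number of pivots found
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue                             # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]          # row interchange
        for i in range(r + 1, len(M)):           # clear entries below pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# x3 = x1 + x2, so the three vectors are linearly dependent:
x1, x2, x3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]
print(rank([x1, x2, x3]) == 3)  # False: the rank is only 2
```

Here the vectors are listed as rows for convenience; by the rank theorem proved below (Theorem 1.12.11), listing them as columns would give the same answer.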
A more traditional method for establishing that all bases for a vector space have the same number of elements is based on the following classical result, often referred to as the replacement theorem. Theorem 1.12.7. (Steinitz Replacement) Let y 1 ,...,y m ∈ V be linearly independent and V = span x 1 ,...,x n . Then, m ≤ n and V has a basis of the form y 1 ,..., y m , ![ $${x}_{{i}_{1}},$$ ](A81414_1_En_1_Chapter_IEq446.gif)..., ![ $${x}_{{i}_{l}}$$ ](A81414_1_En_1_Chapter_IEq447.gif) where l ≤ n − m. Proof. Corollary 1.12.6 immediately gives us linearly independent ![ $${x}_{{i}_{1}},$$ ](A81414_1_En_1_Chapter_IEq448.gif)..., ![ $${x}_{{i}_{l}}$$ ](A81414_1_En_1_Chapter_IEq449.gif) such that ![ $$\\mathrm{span}\\left \\{{x}_{{i}_{1}},\\ldots ,{x}_{{i}_{l}}\\right \\}$$ ](A81414_1_En_1_Chapter_IEq450.gif) is a complement to M = spany 1,..., y m . Thus, y 1,..., y m , ![ $${x}_{{i}_{1}},$$ ](A81414_1_En_1_Chapter_IEq451.gif)..., ![ $${x}_{{i}_{l}}$$ ](A81414_1_En_1_Chapter_IEq452.gif) must form a basis for V. The subspace theorem (Theorem 1.11.6) tells us that ![ $$m + l =\\dim \\left \(V \\right \)$$ ](A81414_1_En_1_Chapter_IEq453.gif). The fact that n ≥ dimV is a direct application of Corollary 1.12.6 with M = 0. It is, however, possible to give a more direct argument that does not refer to the concept of dimension. Instead, we use a simple algorithm that shows directly that l ≤ n − m. Observe that y 1, x 1,..., x n are linearly dependent since y 1 is a linear combination of x 1,..., x n . As y 1≠0, this shows that some x i is a linear combination of the previous vectors. Thus, also ![ $$\\mathrm{span}\\left \\{{y}_{1},{x}_{1},\\ldots ,\\hat{{x}_{i}},\\ldots ,{x}_{n}\\right \\} = V,$$ ](A81414_1_En_1_Chapter_Equhq.gif) where ![ $$\\hat{{x}_{i}}$$ ](A81414_1_En_1_Chapter_IEq454.gif) refers to having deleted x i .
Now, repeat the argument with y 2 in place of y 1 and y 1, ![ $${x}_{1},\\ldots ,\\hat{{x}_{i}},\\ldots ,{x}_{n}$$ ](A81414_1_En_1_Chapter_IEq455.gif) in place of x 1,..., x n . Thus, ![ $${y}_{2},{y}_{1},{x}_{1},\\ldots ,\\hat{{x}_{i}},\\ldots ,{x}_{n}$$ ](A81414_1_En_1_Chapter_Equhr.gif) is linearly dependent, and since y 2, y 1 are linearly independent, some x j is a linear combination of the previous vectors. Continuing in this fashion, we get a set of n vectors ![ $${y}_{m},\\ldots ,{y}_{1},{x}_{{j}_{1}},\\ldots {x}_{{j}_{n-m}}$$ ](A81414_1_En_1_Chapter_Equhs.gif) that spans V. Finally, we can use Corollary 1.12.6 to eliminate vectors to obtain a basis. Note that the proof (Corollary 1.12.6) shows that the basis will be of the form ![ $${y}_{m},\\ldots ,{y}_{1},{x}_{{i}_{1}},\\ldots {x}_{{i}_{l}}$$ ](A81414_1_En_1_Chapter_Equht.gif) as y m ,..., y 1 are linearly independent. This shows that l ≤ n − m. Remark 1.12.8. This theorem leads us to a new proof of the fact that any two bases must contain the same number of elements. It also shows that a linearly independent collection of vectors contains no more vectors than a basis, while a spanning set contains no fewer elements than a basis. Next, we prove a remarkable theorem for matrices that we shall revisit many more times in this text. Definition 1.12.9. For ![ $$A = \\left \[{\\alpha }_{ij}\\right \] \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq456.gif), define the transpose ![ $${A}^{t} = \\left \[{\\beta }_{ij}\\right \] \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq457.gif) by β ij = α ji . Thus, the columns of A t are the rows of A (see also Exercise 14 in Sect. 1.8). Definition 1.12.10. The column rank of a matrix is the dimension of the column space, i.e., the space spanned by the column vectors. In other words, it is the maximal number of linearly independent column vectors.
This is also the dimension of the image of the matrix viewed as a linear map. Similarly, the row rank is the dimension of the row space, i.e., the space spanned by the row vectors. This is the dimension of the image of the transposed matrix. Theorem 1.12.11. (The Rank Theorem) Any n × m matrix has the property that the row rank is equal to the column rank. Proof. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq458.gif) and ![ $${x}_{1},\\ldots ,{x}_{r} \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq459.gif) be a basis for the column space of A. Next, write the columns of A as linear combinations of this basis ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{r} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\beta }_{11} & & {\\beta }_{1m}\\\\ \\\\ {\\beta }_{r1} & & {\\beta }_{rm} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{r} \\end{array} \\right \]B \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ105.gif) By taking transposes, we obtain ![ $${A}^{t} = {B}^{t}{\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{r} \\end{array} \\right \]}^{t}.$$ ](A81414_1_En_1_Chapter_Equhu.gif) But this shows that the columns of A t , i.e., the rows of A, are linear combinations of the r vectors that form the columns of B t ![ $$\\left \[\\begin{array}{c} {\\beta }_{11}\\\\ \\vdots \\\\ {\\beta }_{1m} \\end{array} \\right \],\\ldots ,\\left \[\\begin{array}{c} {\\beta }_{r1}\\\\ \\vdots \\\\ {\\beta }_{rm} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equhv.gif) Thus, the row space is spanned by r vectors. This shows that there cannot be more than r linearly independent rows. A similar argument starting with a basis for the row space of A shows that the reverse inequality also holds. There is a very interesting example associated to the rank theorem. Example 1.12.12.
Let ![ $${t}_{1},\\ldots ,{t}_{n} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq460.gif) be distinct. We claim that the vectors ![ $$\\left \[\\begin{array}{c} 1\\\\ {t}_{ 1}\\\\ \\vdots \\\\ {t}_{1}^{n-1} \\end{array} \\right \],\\ldots ,\\left \[\\begin{array}{c} 1\\\\ {t}_{ n}\\\\ \\vdots \\\\ {t}_{n}^{n-1} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equhw.gif) are a basis for ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq461.gif). To show this, we have to show that the rank of the corresponding matrix ![ $$\\left \[\\begin{array}{cccc} 1 & 1 &\\cdots & 1\\\\ {t}_{ 1} & {t}_{2} & & {t}_{n}\\\\ \\vdots & & & \\vdots \\\\ {t}_{1}^{n-1} & {t}_{2}^{n-1} & \\cdots &{t}_{n}^{n-1} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equhx.gif) is n. The simplest way to do this is by considering the row rank. If the rows are linearly dependent, then we can find ![ $${\\alpha }_{0},\\ldots ,{\\alpha }_{n-1} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq462.gif) so that ![ $${\\alpha }_{0}\\left \[\\begin{array}{c} 1\\\\ 1\\\\ \\vdots \\\\ 1 \\end{array} \\right \]+{\\alpha }_{1}\\left \[\\begin{array}{c} {t}_{1} \\\\ {t}_{2}\\\\ \\vdots \\\\ {t}_{n}\\end{array} \\right \]+\\cdots +{\\alpha }_{n-1}\\left \[\\begin{array}{c} {t}_{1}^{n-1} \\\\ {t}_{2}^{n-1}\\\\ \\vdots \\\\ {t}_{n}^{n-1} \\end{array} \\right \] = 0.$$ ](A81414_1_En_1_Chapter_Equhy.gif) Thus, the polynomial ![ $$p\\left \(t\\right \) = {\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n-1}{t}^{n-1}$$ ](A81414_1_En_1_Chapter_Equhz.gif) has t 1,..., t n as roots. In other words, we have a polynomial of degree ≤ n − 1 with n roots. This is not possible unless α0 = ⋯ ![ $$= {\\alpha }_{n-1} = 0$$ ](A81414_1_En_1_Chapter_IEq463.gif) (see also Sect. 2.1). The criteria for linear dependence lead to an important result about the powers of a linear operator. Before going into that, we observe that there is a connection between polynomials and linear combinations of powers of a linear operator.
Let L : V -> V be a linear operator on an n-dimensional vector space. If ![ $$p\\left \(t\\right \) = {\\alpha }_{k}{t}^{k} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_1_Chapter_Equia.gif) then ![ $$p\\left \(L\\right \) = {\\alpha }_{k}{L}^{k} + \\cdots+ {\\alpha }_{ 1}L + {\\alpha }_{0}{1}_{V }$$ ](A81414_1_En_1_Chapter_Equib.gif) is a linear combination of ![ $${L}^{k},\\ldots ,L,{1}_{ V }.$$ ](A81414_1_En_1_Chapter_Equic.gif) Conversely, any linear combination of L k ,..., L, 1 V must look like this. Since HomV, V has dimension n 2, it follows that ![ $${1}_{V },L,{L}^{2},\\ldots ,{L}^{{n}^{2} }$$ ](A81414_1_En_1_Chapter_IEq464.gif) are linearly dependent. This means that we can find a smallest positive integer k ≤ n 2 such that 1 V , L, L 2,..., L k are linearly dependent. Thus, 1 V , L, L 2,..., L l are linearly independent for l < k and ![ $${L}^{k} \\in \\mathrm{ span}\\left \\{{1}_{ V },L,{L}^{2},\\ldots ,{L}^{k-1}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equid.gif) In Sect. 2.7, we shall show that k ≤ n. The fact that ![ $${L}^{k} \\in \\mathrm{ span}\\left \\{{1}_{ V },L,{L}^{2},\\ldots ,{L}^{k-1}\\right \\}$$ ](A81414_1_En_1_Chapter_Equie.gif) means that we have a polynomial ![ $${\\mu }_{L}\\left \(t\\right \) = {t}^{k} + {\\alpha }_{ k-1}{t}^{k-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}$$ ](A81414_1_En_1_Chapter_Equif.gif) such that ![ $${\\mu }_{L}\\left \(L\\right \) = 0.$$ ](A81414_1_En_1_Chapter_Equig.gif) This is the so-called minimal polynomial for L. Apparently, there is no polynomial of smaller degree that has L as a root. For a more in-depth analysis of the minimal polynomial, see Sect. 2.4. Recall that we characterized projections as linear operators that satisfy L 2 = L (see Theorem 1.11.12). Thus, nontrivial projections are precisely the operators whose minimal polynomial is ![ $${\\mu }_{L}\\left \(t\\right \) = {t}^{2} - t$$ ](A81414_1_En_1_Chapter_IEq465.gif).
Note that the two trivial projections 1 V and 0 V have minimal polynomials ![ $${\\mu }_{{1}_{V }} = t - 1$$ ](A81414_1_En_1_Chapter_IEq466.gif) and ![ $${\\mu }_{{0}_{V }} = t$$ ](A81414_1_En_1_Chapter_IEq467.gif). Example 1.12.13. Let ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{cc} \\lambda &1\\\\ 0 &\\lambda\\end{array} \\right \] \\\\ B& =& \\left \[\\begin{array}{ccc} \\lambda &0 & 0\\\\ 0 &\\lambda& 1 \\\\ 0 & 0 &\\lambda\\end{array} \\right \] \\\\ C& =& \\left \[\\begin{array}{ccc} 0& - 1&0\\\\ 1 & 0 &0 \\\\ 0& 0 & i \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ106.gif) We note that A is not proportional to 1 V , so μ A cannot have degree 1. But ![ $$\\begin{array}{rcl}{ A}^{2}& =&{ \\left \[\\begin{array}{cc} \\lambda &1 \\\\ 0 &\\lambda\\end{array} \\right \]}^{2} \\\\ & =& \\left \[\\begin{array}{cc} {\\lambda }^{2} & 2\\lambda\\\\ 0 & {\\lambda }^{2} \\end{array} \\right \] \\\\ & =& 2\\lambda \\left \[\\begin{array}{cc} \\lambda &1\\\\ 0 &\\lambda\\end{array} \\right \] - {\\lambda }^{2}\\left \[\\begin{array}{cc} 1&0 \\\\ 0&1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ107.gif) Thus, ![ $${\\mu }_{A}\\left \(t\\right \) = {t}^{2} - 2\\lambda t + {\\lambda }^{2} ={ \\left \(t - \\lambda \\right \)}^{2}.$$ ](A81414_1_En_1_Chapter_Equih.gif) The calculation for B is similar and evidently yields the same minimal polynomial ![ $${\\mu }_{B}\\left \(t\\right \) = {t}^{2} - 2\\lambda t + {\\lambda }^{2} ={ \\left \(t - \\lambda \\right \)}^{2}.$$ ](A81414_1_En_1_Chapter_Equii.gif) Finally, for C, we note that ![ $${C}^{2} = \\left \[\\begin{array}{ccc} - 1& 0 & 0\\\\ 0 & - 1 & 0 \\\\ 0 & 0 & - 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equij.gif) Thus, ![ $${\\mu }_{C}\\left \(t\\right \) = {t}^{2} + 1.$$ ](A81414_1_En_1_Chapter_Equik.gif) In the theory of differential equations, it is also important to understand when functions are linearly independent.
We start with vector-valued functions ![ $${x}_{1}\\left \(t\\right \),\\ldots ,{x}_{k}\\left \(t\\right \) : I \\rightarrow{\\mathbb{F}}^{n},$$ ](A81414_1_En_1_Chapter_IEq468.gif) where I is any set but usually an interval. These k functions are linearly independent provided they are linearly independent at just one point t 0 ∈ I. In other words, if the k vectors ![ $${x}_{1}\\left \({t}_{0}\\right \),\\ldots ,{x}_{k}\\left \({t}_{0}\\right \) \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq469.gif) are linearly independent, then the functions are also linearly independent. The converse statement is not true in general. To see why, we give a specific example. Example 1.12.14. It is an important fact from analysis that there are functions ϕt ∈ C ∞ ℝ, ℝ such that ![ $$\\phi \\left \(t\\right \) = \\left \\{\\begin{array}{cc} 0&t \\leq0,\\\\ 1 &t \\geq1. \\end{array} \\right.$$ ](A81414_1_En_1_Chapter_Equil.gif) These can easily be pictured, but it takes some work to construct them. Given this function, we consider ![ $${x}_{1},{x}_{2} :{ \\mathbb{R} \\rightarrow\\mathbb{R}}^{2}$$ ](A81414_1_En_1_Chapter_IEq470.gif) defined by ![ $$\\begin{array}{rcl}{ x}_{1}\\left \(t\\right \)& =& \\left \[\\begin{array}{c} \\phi \\left \(t\\right \)\\\\ 0 \\end{array} \\right \], \\\\ {x}_{2}\\left \(t\\right \)& =& \\left \[\\begin{array}{c} 0\\\\ \\phi \\left \(-t\\right \) \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ108.gif) When t ≤ 0, we have that x 1 = 0 so the two functions are linearly dependent on ( − ∞, 0]. When t ≥ 0, we have that x 2 t = 0 so the functions are also linearly dependent on [0, ∞).
Now, assume that we can find λ1, λ2 ∈ ℝ such that ![ $${\\lambda }_{1}{x}_{1}\\left \(t\\right \) + {\\lambda }_{2}{x}_{2}\\left \(t\\right \) = 0\\text{ for all}\\mathrm{\\ }t \\in\\mathbb{R}.$$ ](A81414_1_En_1_Chapter_Equim.gif) If t ≥ 1, this implies that ![ $$\\begin{array}{rcl} 0& =& {\\lambda }_{1}{x}_{1}\\left \(t\\right \) + {\\lambda }_{2}{x}_{2}\\left \(t\\right \) \\\\ & =& {\\lambda }_{1}\\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \] + {\\lambda }_{2}\\left \[\\begin{array}{c} 0\\\\ 0 \\end{array} \\right \] \\\\ & =& {\\lambda }_{1}\\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ109.gif) Thus, λ1 = 0. Similarly, we have for t ≤ − 1 ![ $$\\begin{array}{rcl} 0& =& {\\lambda }_{1}{x}_{1}\\left \(t\\right \) + {\\lambda }_{2}{x}_{2}\\left \(t\\right \) \\\\ & =& {\\lambda }_{1}\\left \[\\begin{array}{c} 0\\\\ 0 \\end{array} \\right \] + {\\lambda }_{2}\\left \[\\begin{array}{c} 0\\\\ 1 \\end{array} \\right \] \\\\ & =& {\\lambda }_{2}\\left \[\\begin{array}{c} 0\\\\ 1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ110.gif) So λ2 = 0. This shows that the two functions x 1 and x 2 are linearly independent as functions on ℝ even though the vectors x 1 t, x 2 t are linearly dependent for each t ∈ ℝ. Next, we want to study what happens in the special case where n = 1, i.e., we have functions ![ $${x}_{1}\\left \(t\\right \),\\ldots ,{x}_{k}\\left \(t\\right \) : I \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq471.gif). In this case, the above strategy for determining linear independence at a point completely fails as the values lie in a one-dimensional vector space. We can, however, construct auxiliary vector-valued functions by taking derivatives.
In order to be able to take derivatives, we have to assume either that ![ $$I = \\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq472.gif) and ![ $${x}_{i} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq473.gif) are polynomials with the formal derivatives defined as in Exercise 2 in Sect. 1.6 or that I ⊂ ℝ is an interval, ![ $$\\mathbb{F} = \\mathbb{C},$$ ](A81414_1_En_1_Chapter_IEq474.gif) and x i ∈ C ∞ I, ℂ. In either case, we can then construct new vector-valued functions ![ $${z}_{1},\\ldots ,{z}_{k} : I \\rightarrow{\\mathbb{F}}^{k}$$ ](A81414_1_En_1_Chapter_IEq475.gif) by listing x i and its first k − 1 derivatives in column form ![ $${z}_{i}\\left \(t\\right \) = \\left \[\\begin{array}{c} {x}_{i}\\left \(t\\right \) \\\\ \\left \(D{x}_{i}\\right \)\\left \(t\\right \)\\\\ \\\\ \\left \({D}^{k-1}{x}_{i}\\right \)\\left \(t\\right \) \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equin.gif) First, we claim that x 1,..., x k are linearly dependent if and only if z 1,..., z k are linearly dependent. This is quite simple and depends on the fact that D n is linear. 
We only need to observe that ![ $$\\begin{array}{rcl}{ \\alpha }_{1}{z}_{1} + \\cdots+ {\\alpha }_{k}{z}_{k}& =& {\\alpha }_{1}\\left \[\\begin{array}{c} {x}_{1} \\\\ D{x}_{1}\\\\ \\vdots \\\\ {D}^{k-1}{x}_{1} \\end{array} \\right \] + \\cdots+ {\\alpha }_{k}\\left \[\\begin{array}{c} {x}_{k} \\\\ D{x}_{k}\\\\ \\vdots \\\\ {D}^{k-1}{x}_{k} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\alpha }_{1}{x}_{1} \\\\ {\\alpha }_{1}D{x}_{1}\\\\ \\vdots \\\\ {\\alpha }_{1}{D}^{k-1}{x}_{1} \\end{array} \\right \] + \\cdots+ \\left \[\\begin{array}{c} {\\alpha }_{k}{x}_{k} \\\\ {\\alpha }_{k}D{x}_{k}\\\\ \\vdots \\\\ {\\alpha }_{k}{D}^{k-1}{x}_{k} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k} \\\\ {\\alpha }_{1}D{x}_{1} + \\cdots+ {\\alpha }_{k}D{x}_{k}\\\\ \\vdots \\\\ {\\alpha }_{1}{D}^{k-1}{x}_{1} + \\cdots+ {\\alpha }_{k}{D}^{k-1}{x}_{k} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k} \\\\ D\\left \({\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k}\\right \)\\\\ \\vdots \\\\ {D}^{k-1}\\left \({\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k}\\right \) \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ111.gif) Thus, ![ $${\\alpha }_{1}{z}_{1} + \\cdots+ {\\alpha }_{k}{z}_{k} = 0$$ ](A81414_1_En_1_Chapter_IEq476.gif) if and only if ![ $${\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k} = 0$$ ](A81414_1_En_1_Chapter_IEq477.gif). This shows the claim. Let us now see how this works in action. Example 1.12.15. Let x i t = expλ i t, where λ i ∈ ℂ are distinct. 
Then, ![ $${z}_{i}\\left \(t\\right \) = \\left \[\\begin{array}{c} \\exp \\left \({\\lambda }_{i}t\\right \) \\\\ {\\lambda }_{i}\\exp \\left \({\\lambda }_{i}t\\right \)\\\\ \\vdots \\\\ {\\lambda }_{i}^{k-1}\\exp \\left \({\\lambda }_{i}t\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ i}\\\\ \\vdots \\\\ {\\lambda }_{i}^{k-1} \\end{array} \\right \]\\exp \\left \({\\lambda }_{i}t\\right \).$$ ](A81414_1_En_1_Chapter_Equio.gif) Thus, expλ1 t,..., expλ k t are linearly independent, since we saw in Example 1.12.12 that the vectors ![ $$\\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ 1}\\\\ \\vdots \\\\ {\\lambda }_{1}^{k-1} \\end{array} \\right \],\\ldots ,\\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ k}\\\\ \\vdots \\\\ {\\lambda }_{k}^{k-1} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equip.gif) are linearly independent. Example 1.12.16. Let x k t = coskt, k = 0, 1, 2,..., n. In this case, a direct check will involve a matrix that has both cosines and sines in alternating rows. Instead, we can use Euler's formula that ![ $${x}_{k}\\left \(t\\right \) =\\cos \\left \(kt\\right \) = \\frac{1} {2}{\\mathrm{e}}^{ikt} + \\frac{1} {2}{\\mathrm{e}}^{-ikt}.$$ ](A81414_1_En_1_Chapter_Equiq.gif) We know from the previous example that the 2n + 1 functions expikt, k = 0, ± 1,..., ± n are linearly independent. Thus, the original n + 1 cosine functions are also linearly independent. Note that if we add the n sine functions y k t = sinkt, k = 1,..., n, we obtain 2n + 1 cosine and sine functions that are also linearly independent. ### 1.12.1 Exercises 1. (Characterization of Linear Independence) Show that x 1,..., x n ∈ V − 0 are linearly independent if and only if ![ $$\\mathrm{span}\\left \\{{x}_{1},\\ldots ,\\hat{{x}}_{i},\\ldots ,{x}_{n}\\right \\}\\neq \\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\}$$ ](A81414_1_En_1_Chapter_Equir.gif) for all i = 1,..., n.
Here the "hat" ![ $$\\hat{{x}}_{i}$$ ](A81414_1_En_1_Chapter_IEq478.gif) over a vector means that it has been deleted from the set. 2. (Characterization of Linear Independence) Show that x 1,..., x n ∈ V − 0 are linearly independent if and only if ![ $$\\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{n}\\right \\} =\\mathrm{ span}\\left \\{{x}_{1}\\right \\} \\oplus \\cdots\\oplus \\mathrm{ span}\\left \\{{x}_{n}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equis.gif) 3. Assume that we have nonzero vectors x 1,..., x k ∈ V and a direct sum of subspaces ![ $${M}_{1} + \\cdots+ {M}_{k} = {M}_{1} \\oplus \\cdots\\oplus{M}_{k}.$$ ](A81414_1_En_1_Chapter_Equit.gif) Show that if x i ∈ M i , then x 1,..., x k are linearly independent. 4. Show that ![ $${t}^{3} + {t}^{2} + 1,{t}^{3} + {t}^{2} + t,{t}^{3} + t + 2$$ ](A81414_1_En_1_Chapter_IEq479.gif) are linearly independent in P 3. Which of the standard basis vectors 1, t, t 2, t 3 can be added to this collection to create a basis for P 3 ? 5. Show that, if ![ $${p}_{0}\\left \(t\\right \),\\ldots ,{p}_{n}\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq480.gif) all have degree ≤ n and all vanish at t 0, then they are linearly dependent. 6. Assume that we have two fields ![ $$\\mathbb{F} \\subset\\mathbb{L},$$ ](A81414_1_En_1_Chapter_IEq481.gif) such as ![ $$\\mathbb{R} \\subset\\mathbb{C}$$ ](A81414_1_En_1_Chapter_IEq482.gif). Show that (a) If x 1,..., x m form a basis for ![ $${\\mathbb{F}}^{m},$$ ](A81414_1_En_1_Chapter_IEq483.gif) then they also form a basis for ![ $${\\mathbb{L}}^{m}$$ ](A81414_1_En_1_Chapter_IEq484.gif). (b) If x 1,..., x k are linearly independent in ![ $${\\mathbb{F}}^{m},$$ ](A81414_1_En_1_Chapter_IEq485.gif) then they are also linearly independent in ![ $${\\mathbb{L}}^{m}$$ ](A81414_1_En_1_Chapter_IEq486.gif). 
(c) If x 1,..., x k are linearly dependent in ![ $${\\mathbb{F}}^{m},$$ ](A81414_1_En_1_Chapter_IEq487.gif) then they are also linearly dependent in ![ $${\\mathbb{L}}^{m}$$ ](A81414_1_En_1_Chapter_IEq488.gif). (d) If ![ $${x}_{1},\\ldots ,{x}_{k} \\in{\\mathbb{F}}^{m},$$ ](A81414_1_En_1_Chapter_IEq489.gif) then ![ $${\\dim }_{\\mathbb{F}}\\mathrm{{span}}_{\\mathbb{F}}\\left \\{{x}_{1},\\ldots ,{x}_{k}\\right \\} {=\\dim }_{\\mathbb{L}}\\mathrm{{span}}_{\\mathbb{L}}\\left \\{{x}_{1},\\ldots ,{x}_{k}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equiu.gif) (e) If ![ $$M \\subset{\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq490.gif) is a subspace, then ![ $$M =\\mathrm{{ span}}_{\\mathbb{L}}\\left \(M\\right \) \\cap{\\mathbb{F}}^{m}.$$ ](A81414_1_En_1_Chapter_Equiv.gif) (f) Let A ∈ Mat n ×m ![ $$\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq491.gif). Then, ![ $$A : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq492.gif) is one-to-one (resp. onto) if and only if ![ $$A : {\\mathbb{L}}^{m} \\rightarrow{\\mathbb{L}}^{n}$$ ](A81414_1_En_1_Chapter_IEq493.gif) is one-to-one (resp. onto). 7. Show that ![ $${\\dim }_{\\mathbb{F}}V \\leq n$$ ](A81414_1_En_1_Chapter_IEq494.gif) if and only if every collection of n + 1 vectors is linearly dependent. 8. Let L : V -> W be a linear map. (a) Show that if x 1,..., x k span V and L is not one-to-one, then Lx 1,..., Lx k are linearly dependent. (b) Show that if x 1,..., x k are linearly dependent, then Lx 1,..., Lx k are linearly dependent. (c) Show that if Lx 1,..., Lx k are linearly independent, then x 1,..., x k are linearly independent. 9. 
Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq495.gif) and assume that y 1,..., y m ∈ V satisfy ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m}\\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \]A,$$ ](A81414_1_En_1_Chapter_Equiw.gif) where x 1,..., x n form a basis for V. (a) Show that y 1,..., y m span V if and only if A has rank n. Conclude that m ≥ n. (b) Show that y 1,..., y m are linearly independent if and only if kerA = 0. Conclude that m ≤ n. (c) Show that y 1,..., y m form a basis for V if and only if A is invertible. Conclude that m = n. ## 1.13 Row Reduction In this section, we give a brief and rigorous outline of the standard procedures involved in solving systems of linear equations. The goal in the context of what we have already learned is to find a way of computing the image and kernel of a linear map that is represented by a matrix. Along the way, we shall reprove that the dimension is well defined as well as the dimension formula for linear maps. The usual way of writing n equations with m variables is ![ $$\\begin{array}{ccc} {a}_{11}{x}_{1} + \\cdots+ {a}_{1m}{x}_{m} & =& {b}_{1}\\\\ \\vdots & \\vdots & \\vdots \\\\ {a}_{n1}{x}_{1} + \\cdots+ {a}_{nm}{x}_{m}& =&{b}_{n}, \\end{array}$$ ](A81414_1_En_1_Chapter_Equix.gif) where the variables are x 1,..., x m . The goal is to understand for which choices of constants a ij and b i such systems can be solved and then list all the solutions.
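Before developing the general machinery, a toy case already shows how solvability can depend on the right-hand side. The system below is our own illustrative example (not one from the text): its second equation is twice the first, so it is solvable exactly when b 2 = 2b 1, and then the solutions form a one-parameter family.

```python
from fractions import Fraction

def solve(b1, b2):
    """Hypothetical system (illustrative, not from the text):
         x1 +   x2 = b1
       2*x1 + 2*x2 = b2
    The second equation is twice the first, so a solution exists
    iff b2 == 2*b1; then every (x1, x2) = (b1 - t, t) works."""
    b1, b2 = Fraction(b1), Fraction(b2)
    if b2 != 2 * b1:
        return None            # b is not in the image: no solution
    t = Fraction(0)            # pick one particular solution
    return (b1 - t, t)

print(solve(1, 2))  # a particular solution (x1, x2) = (1, 0)
print(solve(1, 3))  # None: the system is inconsistent
```

This is exactly the pattern formalized next: one particular solution plus the kernel of the map describes all solutions.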
To conform to our already specified notation, we change the system so that it looks like ![ $$\\begin{array}{ccc} {\\alpha }_{11}{\\xi }_{1} + \\cdots+ {\\alpha }_{1m}{\\xi }_{m} & =& {\\beta }_{1}\\\\ \\vdots&\\vdots&\\vdots \\\\ {\\alpha }_{n1}{\\xi }_{1} + \\cdots+ {\\alpha }_{nm}{\\xi }_{m}& =&{\\beta }_{n}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equiy.gif) In matrix form, this becomes ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{n1} & \\cdots &{\\alpha }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{m} \\end{array} \\right \] = \\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equiz.gif) and can be abbreviated to ![ $$Ax = b.$$ ](A81414_1_En_1_Chapter_Equja.gif) As such, we can easily use the more abstract language of linear algebra to address some general points. Proposition 1.13.1. Let L : V -> W be a linear map. (1) Lx = b can be solved if and only if b ∈ im L. (2) If Lx 0 = b and x ∈ ker L, then L(x + x 0 ) = b. (3) If Lx 0 = b and Lx 1 = b, then x 0 − x 1 ∈ ker L. Therefore, we can find all solutions to Lx = b provided we can find the kernel kerL and just one solution x 0. Note that the kernel consists of the solutions to what we call the homogeneous system: Lx = 0. Definition 1.13.2.
With this behind us, we are now ready to address the issue of how to make the necessary calculations that allow us to find a solution to ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{n1} & \\cdots &{\\alpha }_{nm} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{m} \\end{array} \\right \] = \\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjb.gif) The usual method is through elementary row operations. To keep things more conceptual, think of the actual linear equations ![ $$\\begin{array}{ccc} {\\alpha }_{11}{\\xi }_{1} + \\cdots+ {\\alpha }_{1m}{\\xi }_{m} & =& {\\beta }_{1}\\\\ \\vdots&\\vdots&\\vdots \\\\ {\\alpha }_{n1}{\\xi }_{1} + \\cdots+ {\\alpha }_{nm}{\\xi }_{m}& =&{\\beta }_{n}\\end{array}$$ ](A81414_1_En_1_Chapter_Equjc.gif) and observe that we can perform the following three operations without changing the solutions to the equations: (1) Interchanging equations (or rows). (2) Adding a multiple of an equation (or row) to a different equation (or row). (3) Multiplying an equation (or row) by a nonzero number. Using these operations, one can put the system in row echelon form. This is most easily done by considering the augmented matrix, where the variables have disappeared ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1m}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{n1} & \\cdots &{\\alpha }_{nm} \\end{array} \\right.\\left \\vert \\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equjd.gif) and then performing the above operations, now on rows, until it takes the special form where 1. The first nonzero entry in each row is normalized to be 1. This is also called the leading 1 for the row. 2.
The leading 1s appear in echelon form, i.e., as we move down along the rows the leading 1s will appear farther to the right. The method by which we put a matrix into row echelon form is called Gauss elimination. Having put the system into this simple form, one can then solve it by starting from the last row or equation. When doing the process on A itself, we denote the resulting row echelon matrix by A ref. There are many ways of doing row reductions so as to come up with a row echelon form for A, and it is quite likely that one ends up with different echelon forms. To see why, consider ![ $$A = \\left \[\\begin{array}{lll} 1&1&0\\\\ 0 &1 &1 \\\\ 0&0&1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equje.gif) This matrix is clearly in row echelon form. However, we can subtract the second row from the first row to obtain a new matrix which is still in row echelon form: ![ $$\\left \[\\begin{array}{lll} 1&0& - 1\\\\ 0 &1 &1 \\\\ 0&0&1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjf.gif) It is now possible to use two more elementary row operations to arrive at ![ $$\\left \[\\begin{array}{lll} 1&0&0\\\\ 0 &1 &0 \\\\ 0&0&1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjg.gif) The important information about A ref is the placement of the leading 1 in each row, and this placement will always be the same for any row echelon form. To get a unique row echelon form, we need to reduce the matrix using Gauss-Jordan elimination. This process is what we just performed on the above matrix A to get it into final form. The idea is to first arrive at some row echelon form A ref and then, starting with the second row, eliminate all entries above the leading 1; this is then repeated with row three and so on. In this way, we end up with a matrix that is still in row echelon form, but also has the property that all entries below and above the leading 1 in each row are zero. We say that such a matrix is in reduced row echelon form.
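The Gauss-Jordan procedure just described can be sketched in code. This is a minimal implementation over the rationals (the function name `rref` and the use of `Fraction` are our own choices), using only the three elementary row operations:

```python
from fractions import Fraction

F = Fraction

def rref(A):
    """Gauss-Jordan elimination using only the three elementary row
    operations: swap rows, add a multiple of one row to another, and
    scale a row by a nonzero number.  Returns the reduced row echelon
    form of A (entries assumed to be Fractions)."""
    A = [row[:] for row in A]          # work on a copy
    n, m = len(A), len(A[0])
    r = 0                              # current pivot row
    for c in range(m):                 # scan columns left to right
        pivot = next((i for i in range(r, n) if A[i][c] != 0), None)
        if pivot is None:
            continue                   # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]          # operation 1: swap rows
        A[r] = [x / A[r][c] for x in A[r]]       # operation 3: scale to leading 1
        for i in range(n):                       # operation 2: clear above and below
            if i != r and A[i][c] != 0:
                factor = A[i][c]
                A[i] = [x - factor * y for x, y in zip(A[i], A[r])]
        r += 1
        if r == n:
            break
    return A

A = [[F(1), F(1), F(0)],
     [F(0), F(1), F(1)],
     [F(0), F(0), F(1)]]
print(rref(A))   # the identity matrix, as in the text's 3x3 example
```

Running it on the 3 × 3 example above reproduces the identity matrix, in agreement with the reduction carried out in the text; applying `rref` twice changes nothing, reflecting the uniqueness of the reduced form.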
If we start with a matrix A, then the resulting reduced row echelon form is denoted A rref. For example, if we have ![ $${A}_{\\mathrm{ref}} = \\left \[\\begin{array}{ccccccc} 0&1&4&1& 0 &3& - 1\\\\ 0 &0 &0 &1 & - 2 &5 & - 4 \\\\ 0&0&0&0& 0 &0& 1\\\\ 0 &0 &0 &0 & 0 &0 & 0 \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equjh.gif) then we can reduce further to get a new reduced row echelon form ![ $${A}_{\\mathrm{rref}} = \\left \[\\begin{array}{ccccccc} 0&1&4&0& 2 & - 2&0\\\\ 0 &0 &0 &1 & - 2 & 5 &0 \\\\ 0&0&0&0& 0 & 0 &1\\\\ 0 &0 &0 &0 & 0 & 0 &0 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equji.gif) The row echelon form and reduced row echelon form of a matrix can more abstractly be characterized as follows. Suppose that we have an n ×m matrix ![ $$A = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_IEq496.gif)where ![ $${x}_{1},\\ldots ,{x}_{m} \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq497.gif) correspond to the columns of A. Let ![ $${e}_{1},\\ldots ,{e}_{n} \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq498.gif) be the canonical basis. The matrix is in row echelon form if we can find 1 ≤ j 1 < ⋯ < j k ≤ m, where k ≤ n, such that ![ $${x}_{{j}_{s}} = {e}_{s} +{ \\sum\\nolimits }_{i<s}{\\alpha }_{i{j}_{s}}{e}_{i}$$ ](A81414_1_En_1_Chapter_Equjj.gif) for s = 1,..., k. 
For all other indices j, we have ![ $$\\begin{array}{rcl}{ x}_{j}& =& 0,\\text{ if }j < {j}_{1}, \\\\ {x}_{j}& \\in & \\mathrm{span}\\left \\{{e}_{1},\\ldots ,{e}_{s}\\right \\},\\mathrm{ if }{j}_{s} < j < {j}_{s+1}, \\\\ {x}_{j}& \\in & \\mathrm{span}\\left \\{{e}_{1},\\ldots ,{e}_{k}\\right \\},\\mathrm{ if }{j}_{k} < j.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ112.gif) Moreover, the matrix is in reduced row echelon form if in addition we assume that ![ $${x}_{{j}_{s}} = {e}_{s}.$$ ](A81414_1_En_1_Chapter_Equjk.gif) Below, we shall prove that the reduced row echelon form of a matrix is unique, but before doing so, it is convenient to reinterpret the row operations as matrix multiplication. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq499.gif) be the matrix we wish to row reduce. The row operations we have described can be accomplished by multiplying A by certain invertible n ×n matrices on the left. These matrices are called elementary matrices. Definition 1.13.3. To define these matrices, we use the standard basis matrices E kl where the kl entry is 1 while all other entries are 0. The matrix product E kl A is a matrix whose kth row is the lth row of A and all other rows vanish. 1. Interchanging rows k and l: This can be accomplished by the matrix multiplication I kl A, where ![ $$\\begin{array}{rcl}{ I}_{kl}& =& {E}_{kl} + {E}_{lk} +{ \\sum\\nolimits }_{i\\neq k,l}{E}_{ii} \\\\ & =& {E}_{kl} + {E}_{lk} + {1}_{{\\mathbb{F}}^{n}} - {E}_{kk} - {E}_{ll}, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ113.gif) or in other words, the ij entries α ij in I kl satisfy α kl = α lk = 1, α ii = 1 if i≠k, l, and α ij = 0 otherwise. Note that I kl = I lk and ![ $${I}_{kl}{I}_{lk} = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq500.gif). Thus I kl is invertible. 2. Multiplying row l by ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq501.gif) and adding it to row k≠l.
This can be accomplished via R kl (α)A, where ![ $${R}_{kl}\\left \(\\alpha \\right \) = {1}_{{\\mathbb{F}}^{n}} + \\alpha {E}_{kl},$$ ](A81414_1_En_1_Chapter_Equjl.gif) or in other words, the ij entries α ij in R kl (α) look like α ii = 1, α kl = α, and α ij = 0 otherwise. This time, we note that ![ $${R}_{kl}\\left \(\\alpha \\right \){R}_{kl}\\left \(-\\alpha \\right \) = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq502.gif). 3. Multiplying row k by ![ $$\\alpha\\in\\mathbb{F}-\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq503.gif). This can be accomplished by M k (α)A, where ![ $$\\begin{array}{rcl}{ M}_{k}\\left \(\\alpha \\right \)& =& \\alpha {E}_{kk} +{ \\sum\\nolimits }_{i\\neq k}{E}_{ii} \\\\ & =& {1}_{{\\mathbb{F}}^{n}} + \\left \(\\alpha- 1\\right \){E}_{kk}, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ114.gif) or in other words, the ij entries α ij of M k (α) are α kk = α, α ii = 1 if i≠k, and α ij = 0 otherwise. Clearly, ![ $${M}_{k}\\left \(\\alpha \\right \){M}_{k}\\left \({\\alpha }^{-1}\\right \) = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq504.gif). Performing row reductions on A is now the same as doing a matrix multiplication PA, where ![ $$P \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq505.gif) is a product of the elementary matrices. Note that such P are invertible and that P − 1 is also a product of elementary matrices. The elementary 2 ×2 matrices look like:
![ $$\\begin{array}{rcl} {I}_{12}& =& \\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \], \\\\ {R}_{12}\\left \(\\alpha \\right \)& =& \\left \[\\begin{array}{cc} 1&\\alpha \\\\ 0 & 1 \\end{array} \\right \], \\\\ {R}_{21}\\left \(\\alpha \\right \)& =& \\left \[\\begin{array}{cc} 1 &0\\\\ \\alpha&1 \\end{array} \\right \], \\\\ {M}_{1}\\left \(\\alpha \\right \)& =& \\left \[\\begin{array}{cc} \\alpha &0\\\\ 0 &1 \\end{array} \\right \], \\\\ {M}_{2}\\left \(\\alpha \\right \)& =& \\left \[\\begin{array}{cc} 1& 0\\\\ 0 &\\alpha\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ115.gif) If we multiply these matrices onto A from the left, we obtain the desired operations: ![ $$\\begin{array}{rcl} {I}_{12}A& = \\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \]\\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{21} & {\\alpha }_{22} \\\\ {\\alpha }_{11} & {\\alpha }_{12} \\end{array} \\right \]& \\\\ {R}_{12}\\left \(\\alpha \\right \)A& = \\left \[\\begin{array}{cc} 1&\\alpha \\\\ 0 & 1 \\end{array} \\right \]\\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} + \\alpha {\\alpha }_{21} & {\\alpha }_{12} + \\alpha {\\alpha }_{22} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]& \\\\ {R}_{21}\\left \(\\alpha \\right \)A& = \\left \[\\begin{array}{cc} 1 &0\\\\ \\alpha&1 \\end{array} \\right \]\\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ \\alpha {\\alpha }_{11} + {\\alpha }_{21} & \\alpha {\\alpha }_{12} + {\\alpha }_{22} \\end{array} \\right \]& \\\\ {M}_{1}\\left \(\\alpha \\right \)A& = \\left \[\\begin{array}{cc} \\alpha &0\\\\ 0 &1 
\\end{array} \\right \]\\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \] = \\left \[\\begin{array}{cc} \\alpha {\\alpha }_{11} & \\alpha {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]& \\\\ {M}_{2}\\left \(\\alpha \\right \)A& = \\left \[\\begin{array}{cc} 1& 0\\\\ 0 &\\alpha\\end{array} \\right \]\\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ \\alpha {\\alpha }_{21} & \\alpha {\\alpha }_{22} \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ116.gif) We can now move on to the important result mentioned above. Theorem 1.13.4. (Uniqueness of Reduced Row Echelon Form) The reduced row echelon form of an n × m matrix is unique. Proof. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq506.gif) and assume that we have two reduced row echelon forms ![ $$\\begin{array}{rcl} PA& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \], \\\\ QA& =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \],\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ117.gif) where ![ $$P,Q \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq507.gif) are invertible. In particular, we have that ![ $$R\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equjm.gif) where ![ $$R \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq508.gif) is invertible. We shall show that x i = y i , i = 1,..., m by induction on n. First, observe that if A = 0, then there is nothing to prove. 
If A≠0, then both of the reduced row echelon forms have to be nontrivial. Then, we have that ![ $$\\begin{array}{rcl}{ x}_{{i}_{1}}& =& {e}_{1}, \\\\ {x}_{i}& =& 0\\quad \\text{ for }i < {i}_{1} \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ118.gif) and ![ $$\\begin{array}{rcl}{ y}_{{j}_{1}}& =& {e}_{1}, \\\\ {y}_{i}& =& 0\\quad \\text{ for }i < {j}_{1}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ119.gif) The relationship Rx i = y i shows that y i = 0 if x i = 0. Thus, j 1 ≥ i 1. Similarly, the relationship y i = R − 1 x i shows that x i = 0 if y i = 0. Hence, also j 1 ≤ i 1. Thus, i 1 = j 1 and ![ $${x}_{{i}_{1}} = {e}_{1} = {y}_{{j}_{1}}$$ ](A81414_1_En_1_Chapter_IEq509.gif). This implies that Re 1 = e 1 and R − 1 e 1 = e 1. In other words, ![ $$R = \\left \[\\begin{array}{cc} 1& 0\\\\ 0 &{R}^{{\\prime}} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equjn.gif) where ![ $${R}^{{\\prime}}\\in \\mathrm{{ Mat}}_{\\left \(n-1\\right \)\\times \\left \(n-1\\right \)}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq510.gif) is invertible. In the special case where n = 1, we are finished as we have shown that R = 1. This anchors our induction. We can now assume the induction hypothesis: All ![ $$\\left \(n - 1\\right \) \\times m$$ ](A81414_1_En_1_Chapter_IEq511.gif) matrices have unique reduced row echelon forms. 
If we define x i ′ , ![ $${y}_{i}^{{\\prime}}\\in{\\mathbb{F}}^{n-1}$$ ](A81414_1_En_1_Chapter_IEq512.gif) as the last n − 1 entries in x i and y i , i.e., ![ $$\\begin{array}{rcl}{ x}_{i}& =& \\left \[\\begin{array}{c} {\\xi }_{1i} \\\\ {x}_{i}^{{\\prime}} \\end{array} \\right \], \\\\ {y}_{i}& =& \\left \[\\begin{array}{c} {\\upsilon }_{1i} \\\\ {y}_{i}^{{\\prime}} \\end{array} \\right \],\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ120.gif) then we see that ![ $$\\left \[\\begin{array}{ccc} {x}_{1}^{{\\prime}}&\\cdots &{x}_{m}^{{\\prime}} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_IEq513.gif) and ![ $$\\left \[\\begin{array}{ccc} {y}_{1}^{{\\prime}}&\\cdots &{y}_{m}^{{\\prime}} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_IEq514.gif) are still in reduced row echelon form. Moreover, the relationship ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \] = R\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equjo.gif) now implies that ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {\\upsilon }_{11} & \\cdots &{\\upsilon }_{1m} \\\\ {y}_{1}^{{\\prime}}&\\cdots & {y}_{m}^{{\\prime}} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \] \\\\ & =& R\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} 1& 0\\\\ 0 &{R}^{{\\prime}} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\xi }_{11} & \\cdots &{\\xi }_{1m} \\\\ {x}_{1}^{{\\prime}}&\\cdots &{x}_{m}^{{\\prime}} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\xi }_{11} & \\cdots & {\\xi }_{1m} \\\\ {R}^{{\\prime}}{x}_{1}^{{\\prime}}&\\cdots &{R}^{{\\prime}}{x}_{m}^{{\\prime}} \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ121.gif) Thus, ![ $${R}^{{\\prime}}\\left \[\\begin{array}{ccc} {x}_{ 1}^{{\\prime}}&\\cdots &{x}_{ m}^{{\\prime}} \\end{array} 
\\right \] = \\left \[\\begin{array}{ccc} {y}_{1}^{{\\prime}}&\\cdots &{y}_{ m}^{{\\prime}} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjp.gif) The induction hypothesis now implies that x i ′ = y i ′ . This combined with ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} {\\upsilon }_{11} & \\cdots &{\\upsilon }_{1m} \\\\ {y}_{1}^{{\\prime}}&\\cdots & {y}_{m}^{{\\prime}} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {\\xi }_{11} & \\cdots & {\\xi }_{1m} \\\\ {R}^{{\\prime}}{x}_{1}^{{\\prime}}&\\cdots &{R}^{{\\prime}}{x}_{m}^{{\\prime}} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ122.gif) shows that x i = y i for all i = 1,..., m. We are now ready to explain how the reduced row echelon form can be used to identify the kernel and image of a matrix. Along the way, we shall reprove some of our earlier results. Suppose that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq515.gif) and ![ $$\\begin{array}{rcl} PA& =& {A}_{\\mathrm{rref}} \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \],\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ123.gif) where we can find 1 ≤ j 1 < ⋯ < j k ≤ m, such that ![ $$\\begin{array}{rcl}{ x}_{{j}_{s}}& =& {e}_{s}\\text{ for }s = 1,\\ldots ,k \\\\ {x}_{j}& =& 0,\\text{ if }j < {j}_{1}, \\\\ {x}_{j}& \\in & \\mathrm{span}\\left \\{{e}_{1},\\ldots ,{e}_{s}\\right \\}\\!,\\mathrm{ if }{j}_{s} < j < {j}_{s+1}, \\\\ {x}_{j}& \\in & \\mathrm{span}\\left \\{{e}_{1},\\ldots ,{e}_{k}\\right \\}\\!,\\mathrm{ if }{j}_{k} < j.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ124.gif) Finally, let i 1 < ⋯ < i m − k be the indices complementary to j 1, . .
, j k , i.e., ![ $$\\left \\{1,\\ldots ,m\\right \\} = \\left \\{{j}_{1},..,{j}_{k}\\right \\} \\cup \\left \\{{i}_{1},\\ldots ,{i}_{m-k}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equjq.gif) We are first going to study the kernel of A. Since P is invertible, we see that Ax = 0 if and only if A rref x = 0. Thus we only need to study the equation A rref x = 0. If we let x = (ξ 1 ,..., ξ m ), then the nature of the equations A rref x = 0 will tell us that ![ $$\\left \({\\xi }_{1},\\ldots ,{\\xi }_{m}\\right \)$$ ](A81414_1_En_1_Chapter_IEq516.gif) are uniquely determined by ![ $${\\xi }_{{i}_{1}},\\ldots ,{\\xi }_{{i}_{m-k}}$$ ](A81414_1_En_1_Chapter_IEq517.gif). To see why this is, we note that if we write A rref = (α ij ), then the reduced row echelon form tells us that ![ $$\\begin{array}{rcl} {\\xi }_{{j}_{1}} + {\\alpha }_{1{i}_{1}}{\\xi }_{{i}_{1}} + \\cdots+ {\\alpha }_{1{i}_{m-k}}{\\xi }_{{i}_{m-k}}& =& 0, \\\\ \\vdots& \\vdots & \\vdots \\\\ {\\xi }_{{j}_{k}} + {\\alpha }_{k{i}_{1}}{\\xi }_{{i}_{1}} + \\cdots+ {\\alpha }_{k{i}_{m-k}}{\\xi }_{{i}_{m-k}}& =& 0.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ125.gif) Thus, ![ $${\\xi }_{{j}_{1}},\\ldots ,{\\xi }_{{j}_{k}}$$ ](A81414_1_En_1_Chapter_IEq518.gif) have explicit formulas in terms of ![ $${\\xi }_{{i}_{1}},\\ldots ,{\\xi }_{{i}_{m-k}}$$ ](A81414_1_En_1_Chapter_IEq519.gif).
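Reading the kernel off a reduced row echelon form can be sketched as follows, using the 4 × 7 reduced matrix from the earlier example (function names are our own):

```python
from fractions import Fraction

F = Fraction

# The reduced row echelon form A_rref from the earlier 4x7 example.
A_rref = [[F(0), F(1), F(4), F(0), F(2), F(-2), F(0)],
          [F(0), F(0), F(0), F(1), F(-2), F(5), F(0)],
          [F(0), F(0), F(0), F(0), F(0), F(0), F(1)],
          [F(0), F(0), F(0), F(0), F(0), F(0), F(0)]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def kernel_basis(R):
    """For R in reduced row echelon form, return a basis of ker R: one
    vector per free (non-pivot) column, obtained by setting that free
    variable to 1, the other free variables to 0, and solving for the
    pivot variables."""
    m = len(R[0])
    pivots = []                       # (row, column) of each leading 1
    for i, row in enumerate(R):
        for j, x in enumerate(row):
            if x != 0:
                pivots.append((i, j))
                break
    pivot_cols = {j for _, j in pivots}
    basis = []
    for jf in range(m):               # jf runs over the free columns
        if jf in pivot_cols:
            continue
        v = [F(0)] * m
        v[jf] = F(1)
        for i, j in pivots:           # pivot variables are determined
            v[j] = -R[i][jf]
        basis.append(v)
    return basis

B = kernel_basis(A_rref)
print(len(B))                                              # 4 free variables
print(all(matvec(A_rref, v) == [F(0)] * 4 for v in B))     # True
```

The count of basis vectors, 4 = 7 − 3, is exactly m − k, in line with the dimension formula m = dim ker + dim im.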
We actually get a bit more information: If we take ![ $$\\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{m-k}\\right \) \\in{\\mathbb{F}}^{m-k}$$ ](A81414_1_En_1_Chapter_IEq520.gif) and construct the unique solution x = (ξ 1 ,..., ξ m ) such that ![ $${\\xi }_{{i}_{1}} = {\\alpha }_{1},\\ldots ,{\\xi }_{{i}_{m-k}} = {\\alpha }_{m-k}$$ ](A81414_1_En_1_Chapter_IEq521.gif), then we have actually constructed a map ![ $$\\begin{array}{rcl} {\\mathbb{F}}^{m-k}& \\rightarrow & \\ker \\left \({A}_{\\mathrm{ rref}}\\right \) \\\\ \\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{m-k}\\right \)& \\rightarrow & \\left \({\\xi }_{1},\\ldots ,{\\xi }_{m}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ126.gif) We have just seen that this map is onto. The construction also gives us explicit formulas for ![ $${\\xi }_{{j}_{1}},\\ldots ,{\\xi }_{{j}_{k}}$$ ](A81414_1_En_1_Chapter_IEq522.gif) that are linear in ![ $${\\xi }_{{i}_{1}} = {\\alpha }_{1},\\ldots ,{\\xi }_{{i}_{m-k}} = {\\alpha }_{m-k}$$ ](A81414_1_En_1_Chapter_IEq523.gif). Thus, the map is linear. Finally, if ![ $$\\left \({\\xi }_{1},\\ldots ,{\\xi }_{m}\\right \) = 0,$$ ](A81414_1_En_1_Chapter_IEq524.gif) then we clearly also have ![ $$\\left \({\\alpha }_{1},\\ldots ,{\\alpha }_{m-k}\\right \) = 0,$$ ](A81414_1_En_1_Chapter_IEq525.gif) so the map is one-to-one. All in all, it is a linear isomorphism. This leads us to the following result. Theorem 1.13.5. (Uniqueness of Dimension) Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq526.gif) if n < m, then ker A≠0. Consequently, ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq527.gif) and ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq528.gif) are not isomorphic. Proof. Using the above notation, we have k ≤ n < m. Thus, m − k > 0. From what we just saw, this implies ker A = ker A rref ≠ 0. In particular, it is not possible for A to be invertible.
This shows that ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq529.gif) and ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq530.gif) cannot be isomorphic. Having now shown that the dimension of a vector space is well defined, we can then establish the dimension formula. Part of the proof of this theorem is to identify a basis for the image of a matrix. Note that this proof does not depend on the result that subspaces of finite-dimensional vector spaces are finite-dimensional. In fact, for the subspaces under consideration, namely, the kernel and image, it is part of the proof to show that they are finite-dimensional. Theorem 1.13.6. (The Dimension Formula) Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq531.gif) then ![ $$m =\\dim \\left \(\\ker \\left \(A\\right \)\\right \) +\\dim \\left \(\\mathrm{im}\\left \(A\\right \)\\right \).$$ ](A81414_1_En_1_Chapter_Equjr.gif) Proof. We use the above notation. We just saw that dim ker A = m − k, so it remains to check why dim im A = k. If ![ $$A = \\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{m} \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equjs.gif) then we have y i = P − 1 x i , where ![ $${A}_{\\mathrm{rref}} = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjt.gif) We know that each ![ $${x}_{j} \\in \\mathrm{ span}\\left \\{{e}_{1},\\ldots ,{e}_{k}\\right \\} =\\mathrm{ span}\\left \\{{x}_{{j}_{1}},\\ldots ,{x}_{{j}_{k}}\\right \\};$$ ](A81414_1_En_1_Chapter_Equju.gif) thus, we have that ![ $${y}_{j} \\in \\mathrm{ span}\\left \\{{y}_{{j}_{1}},\\ldots ,{y}_{{j}_{k}}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equjv.gif) Moreover, as P is invertible, we see that ![ $${y}_{{j}_{1}},\\ldots ,{y}_{{j}_{k}}$$ ](A81414_1_En_1_Chapter_IEq532.gif) must be linearly independent as e 1,..., e k are linearly independent.
This proves that ![ $${y}_{{j}_{1}},\\ldots ,{y}_{{j}_{k}}$$ ](A81414_1_En_1_Chapter_IEq533.gif) form a basis for imA. Corollary 1.13.7. (Subspace Theorem) Let ![ $$M \\subset{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq534.gif) be a subspace. Then, M is finite-dimensional and dim M ≤ n. Proof. Recall from Sect. 1.10 that every subspace ![ $$M \\subset{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq535.gif) has a complement. This means that we can construct a projection as in Sect. 1.11 that has M as kernel. This means that M is the kernel for some ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq536.gif). Thus, the previous theorem implies the claim. It might help to see an example of how the above constructions work. Example 1.13.8. Suppose that we have a 4 ×7 matrix ![ $$A = \\left \[\\begin{array}{ccccccc} 0&1&4&1& 0 &3& - 1\\\\ 0 &0 &0 &1 & - 2 &5 & - 4 \\\\ 0&0&0&0& 0 &0& 1\\\\ 0 &0 &0 &0 & 0 &0 & 1 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjw.gif) Then ![ $${A}_{\\mathrm{rref}} = \\left \[\\begin{array}{ccccccc} 0&1&4&0& 2 & - 2&0\\\\ 0 &0 &0 &1 & - 2 & 5 &0 \\\\ 0&0&0&0& 0 & 0 &1\\\\ 0 &0 &0 &0 & 0 & 0 &0 \\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equjx.gif) Thus, j 1 = 2, j 2 = 4, and j 3 = 7. The complementary indices are i 1 = 1, i 2 = 3, i 3 = 5, and i 4 = 6. 
Hence, ![ $$\\mathrm{im}\\left \(A\\right \) =\\mathrm{ span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ 0 \\\\ 0\\\\ 0 \\end{array} \\right \],\\left \[\\begin{array}{c} 1\\\\ 1 \\\\ 0\\\\ 0 \\end{array} \\right \],\\left \[\\begin{array}{c} - 1\\\\ - 4 \\\\ 1\\\\ 1 \\end{array} \\right \]\\right \\}$$ ](A81414_1_En_1_Chapter_Equjy.gif) and ![ $$\\ker \\left \(A\\right \) = \\left \\{\\left \[\\begin{array}{c} {\\xi }_{1} \\\\ - 4{\\xi }_{3} - 2{\\xi }_{5} + 2{\\xi }_{6} \\\\ {\\xi }_{3} \\\\ 2{\\xi }_{5} - 5{\\xi }_{6} \\\\ {\\xi }_{5} \\\\ {\\xi }_{6} \\\\ 0 \\end{array} \\right \] : {\\xi }_{1},{\\xi }_{3},{\\xi }_{5},{\\xi }_{6} \\in\\mathbb{F}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equjz.gif) Our method for finding a basis for the image of a matrix leads us to a different proof of the rank theorem. The column rank of a matrix is simply the dimension of the image, in other words, the maximal number of linearly independent column vectors. Similarly, the row rank is the maximal number of linearly independent rows. In other words, the row rank is the dimension of the image of the transposed matrix. Theorem 1.13.9. (The Rank Theorem) Any n × m matrix has the property that the row rank is equal to the column rank. Proof. We just saw that the column rank for A and A rref is the same and equal to k with the above notation. Because of the row operations we use, it is clear that the rows of A rref are linear combinations of the rows of A. As the process can be reversed, the rows of A are also linear combinations of the rows of A rref. Hence, A and A rref also have the same row rank. Now, A rref has k linearly independent rows and must therefore have row rank k. Using the rank theorem together with the dimension formula leads to an interesting corollary. Corollary 1.13.10. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq537.gif) .
Then, ![ $$\\dim \\left \(\\ker \\left \(A\\right \)\\right \) =\\dim \\left \(\\ker \\left \({A}^{t}\\right \)\\right \),$$ ](A81414_1_En_1_Chapter_Equka.gif) where ![ $${A}^{t} \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq538.gif) is the transpose of A. We are now going to clarify what type of matrices P occur when we do the row reduction to obtain PA = A rref. If we have an n ×n matrix A with trivial kernel, then it must follow that ![ $${A}_{\\mathrm{rref}} = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq539.gif). Therefore, if we perform Gauss-Jordan elimination on the augmented matrix ![ $$A\\vert {1}_{{\\mathbb{F}}^{n}},$$ ](A81414_1_En_1_Chapter_Equkb.gif) then we end up with an answer that looks like ![ $${1}_{{\\mathbb{F}}^{n}}\\vert B.$$ ](A81414_1_En_1_Chapter_Equkc.gif) The matrix B evidently satisfies ![ $$BA = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq540.gif). To be sure that this is the inverse we must also check that ![ $$AB = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq541.gif). However, we know that A has an inverse A − 1. If we multiply the equation ![ $$BA = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq542.gif) by A − 1 on the right we obtain ![ $$B = {A}^{-1}$$ ](A81414_1_En_1_Chapter_IEq543.gif). This settles the uncertainty. Definition 1.13.11. The space of all invertible n ×n matrices is called the general linear group and is denoted by ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \) = \\left \\{A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)\\vert \\,\\exists \\mathrm{ }{A}^{-1} \\in \\mathrm{{ Mat}}_{ n\\times n}\\left \(\\mathbb{F}\\right \) : \\mathrm{ }A{A}^{-1} = {A}^{-1}A = {1}_{{ \\mathbb{F}}^{n}}\\right \\}.$$ ](A81414_1_En_1_Chapter_Equkd.gif) This space is a so-called group. Definition 1.13.12.
This means that we have a set G and a product operation G ×G -> G denoted by ![ $$\\left \(g,h\\right \) \\rightarrow gh$$ ](A81414_1_En_1_Chapter_IEq544.gif). This product operation must satisfy: 1. Associativity: ![ $$\\left \({g}_{1}{g}_{2}\\right \){g}_{3} = {g}_{1}\\left \({g}_{2}{g}_{3}\\right \)$$ ](A81414_1_En_1_Chapter_IEq545.gif). 2. Existence of a unit e ∈ G such that ![ $$eg = ge = g$$ ](A81414_1_En_1_Chapter_IEq546.gif). 3. Existence of inverses: For each g ∈ G, there is g − 1 ∈ G such that ![ $$g{g}^{-1} = {g}^{-1}g = e$$ ](A81414_1_En_1_Chapter_IEq547.gif). If we use matrix multiplication in ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq548.gif) and ![ $${1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq549.gif) as the unit, then it is clear that ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq550.gif) is a group. Note that we do not assume that the product operation in a group is commutative, and indeed, it is not commutative in ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq551.gif) unless n = 1. Definition 1.13.13. If a possibly infinite subset S ⊂ G of a group has the property that any element in G can be written as a product of elements in S, then we say that S generates G. We can now prove: Theorem 1.13.14. The general linear group ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq552.gif) is generated by the elementary matrices I kl , R kl (α), and M k (α). Proof. We already observed that I kl , R kl (α), and M k (α) are invertible and hence form a subset in ![ $$G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq553.gif). Let ![ $$A \\in G{l}_{n}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq554.gif) then we know that also ![ $${A}^{-1} \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq555.gif).
Now, observe that we can find ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq556.gif) as a product of elementary matrices such that ![ $$P{A}^{-1} = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq557.gif). This was the content of the Gauss-Jordan elimination process for finding the inverse of a matrix. This means that P = A and hence A is a product of elementary matrices. The row echelon representation of a matrix tells us: Corollary 1.13.15. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq558.gif) then it is possible to find ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq559.gif) such that PA is upper triangular: ![ $$PA = \\left \[\\begin{array}{llll} {\\beta }_{11} & {\\beta }_{12} & \\cdots &{\\beta }_{1n} \\\\ 0 &{\\beta }_{22} & \\cdots &{\\beta }_{2n}\\\\ \\vdots &\\vdots &\\ddots &\\vdots \\\\ 0 &0 &\\cdots &{\\beta }_{nn} \\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equke.gif) Moreover, ![ $$\\ker \\left \(A\\right \) =\\ker \\left \(PA\\right \)$$ ](A81414_1_En_1_Chapter_Equkf.gif) and ker A≠0 if and only if the product of the diagonal elements in PA is zero: ![ $${\\beta }_{11}{\\beta }_{22}\\cdots {\\beta }_{nn} = 0.$$ ](A81414_1_En_1_Chapter_Equkg.gif) We are now ready to see how the process of calculating A rref using row operations can be interpreted as a change of basis in the image space. Definition 1.13.16. Two matrices ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq560.gif) are said to be row equivalent if we can find ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq561.gif) such that A = PB. Thus, row equivalent matrices are the matrices that can be obtained from each other via row operations.
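The inversion procedure via the augmented matrix [A | 1] described above can be sketched as follows (a minimal rational-arithmetic version; function names are our own):

```python
from fractions import Fraction

F = Fraction

def inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | 1].
    Raises ValueError if A is singular."""
    n = len(A)
    # Build the augmented matrix [A | 1].
    M = [[F(x) for x in row] + [F(1) if i == j else F(0) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if M[i][c] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[c], M[pivot] = M[pivot], M[c]          # swap rows
        M[c] = [x / M[c][c] for x in M[c]]       # scale to leading 1
        for i in range(n):                       # clear the rest of the column
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]                # right half is the inverse

A = [[2, 1],
     [1, 1]]
print(inverse(A) == [[F(1), F(-1)], [F(-1), F(2)]])   # True
```

Once the left half has been reduced to the identity, the right half records exactly the product of elementary matrices applied, which is the matrix B with BA = 1 discussed above.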
We can also think of row equivalent matrices as being different matrix representations of the same linear map with respect to different bases in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq562.gif). To see this, consider a linear map ![ $$L : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq563.gif) that has matrix representation A with respect to the standard bases. If we perform a change of basis in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq564.gif) from the standard basis f 1,..., f n to a basis y 1,..., y n such that ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]P,$$ ](A81414_1_En_1_Chapter_Equkh.gif) i.e., the columns of P are regarded as a new basis for ![ $${\\mathbb{F}}^{n},$$ ](A81414_1_En_1_Chapter_IEq565.gif) then ![ $$B = {P}^{-1}A$$ ](A81414_1_En_1_Chapter_IEq566.gif) is simply the matrix representation for ![ $$L : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq567.gif) when we have changed the basis in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq568.gif) according to P. This information can be encoded in the diagram ![ $$\\begin{array}{rll} {\\mathbb{F}}^{m}& \\rightarrow ^{A} &{\\mathbb{F}}^{n} \\\\ {1}_{{\\mathbb{F}}^{m}} \\downarrow & &\\downarrow{1}_{{\\mathbb{F}}^{n}} \\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{L} &{\\mathbb{F}}^{n} \\\\ {1}_{{\\mathbb{F}}^{m}} \\uparrow & &\\uparrow P \\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{B}&{\\mathbb{F}}^{n}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equki.gif) When we consider abstract matrices rather than systems of equations, we could equally well have performed column operations. This is accomplished by multiplying the elementary matrices on the right rather than the left. 
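The right-multiplication rule is easy to check numerically. A short numpy sketch (the matrix entries and the scalar a are arbitrary choices) for the 2 ×2 elementary matrices:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
a = 10.0

I12 = np.array([[0., 1.], [1., 0.]])   # I_12: swaps columns when on the right
R12 = np.array([[1., a], [0., 1.]])    # R_12(a): adds a*(column 1) to column 2
M2 = np.array([[1., 0.], [0., a]])     # M_2(a): scales column 2 by a

print(A @ I12)   # [[2, 1], [4, 3]]
print(A @ R12)   # [[1, 12], [3, 34]]
print(A @ M2)    # [[1, 20], [3, 40]]
```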
We can see explicitly what happens in the 2 ×2 case: ![ $$\\begin{array}{rcl} A{I}_{12}& = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{12} & {\\alpha }_{11} \\\\ {\\alpha }_{22} & {\\alpha }_{21} \\end{array} \\right \]& \\\\ A{R}_{12}\\left \(\\alpha \\right \)& = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&\\alpha \\\\ 0 & 1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} & \\alpha {\\alpha }_{11} + {\\alpha }_{12} \\\\ {\\alpha }_{21} & \\alpha {\\alpha }_{21} + {\\alpha }_{22} \\end{array} \\right \]& \\\\ A{R}_{21}\\left \(\\alpha \\right \)& = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 &0\\\\ \\alpha&1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} + \\alpha {\\alpha }_{12} & {\\alpha }_{12} \\\\ {\\alpha }_{21} + \\alpha {\\alpha }_{22} & {\\alpha }_{22} \\end{array} \\right \]& \\\\ A{M}_{1}\\left \(\\alpha \\right \)& = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\alpha &0\\\\ 0 &1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} \\alpha {\\alpha }_{11} & {\\alpha }_{12} \\\\ \\alpha {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]& \\\\ A{M}_{2}\\left \(\\alpha \\right \)& = \\left \[\\begin{array}{cc} {\\alpha }_{11} & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} 1& 0\\\\ 0 &\\alpha\\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\alpha }_{11} & \\alpha {\\alpha }_{12} \\\\ {\\alpha }_{21} & \\alpha {\\alpha }_{22} 
\\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ127.gif) The only important and slightly confusing thing to be aware of is that, while R kl α as a row operation multiplies row l by α and then adds it to row k, it now multiplies column k by α and adds it to column l as a column operation. This is because AE kl is the matrix whose lth column is the kth column of A and whose other columns vanish. Definition 1.13.17. Two matrices ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq569.gif) are said to be column equivalent if A = BQ for some ![ $$Q \\in G{l}_{m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq570.gif). According to the above interpretation, this corresponds to a change of basis in the domain space ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_1_Chapter_IEq571.gif). Definition 1.13.18. More generally, we say that ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq572.gif) are equivalent if A = PBQ, where ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq573.gif) and ![ $$Q \\in G{l}_{m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq574.gif). The diagram for the change of basis then looks like ![ $$\\begin{array}{rll} {\\mathbb{F}}^{m}& \\rightarrow ^{A} &{\\mathbb{F}}^{n} \\\\ {1}_{{\\mathbb{F}}^{m}} \\downarrow & &\\downarrow{1}_{{\\mathbb{F}}^{n}} \\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{L} &{\\mathbb{F}}^{n} \\\\ {Q}^{-1} \\uparrow & &\\uparrow P \\\\ {\\mathbb{F}}^{m}& \\rightarrow ^{B}&{\\mathbb{F}}^{n}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equkj.gif) In this way, we see that two matrices are equivalent if and only if they are matrix representations for the same linear map. Recall from Sect. 
1.12 that any linear map between finite-dimensional spaces always has a matrix representation of the form ![ $$\\left \[\\begin{array}{llllll} 1&\\cdots & &0& &0\\\\ \\vdots &\\ddots & &\\vdots & &\\vdots\\\\ 0 &\\cdots&1 &\\vdots & &\\vdots \\\\ \\vdots & &\\vdots &0& &0\\\\ \\vdots & &\\vdots &\\vdots &\\ddots &\\vdots\\\\ 0 &\\cdots&0 &0 &\\cdots&0 \\end{array} \\right \],$$ ](A81414_1_En_1_Chapter_Equkk.gif) where there are k ones in the diagonal if the linear map has rank k. This implies Corollary 1.13.19. (Characterization of Equivalent Matrices) ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq575.gif) are equivalent if and only if they have the same rank. Moreover, any matrix of rank k is equivalent to a matrix that has k ones on the diagonal and zeros elsewhere. ### 1.13.1 Exercises 1. Find bases for kernel and image for the following matrices: (a) ![ $$\\left \[\\begin{array}{llll} 1&3&5&1\\\\ 2 &0 &6 &0 \\\\ 0&1&7&2\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equkl.gif) (b) ![ $$\\left \[\\begin{array}{ll} 1&2\\\\ 0 &3 \\\\ 1&4\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equkm.gif) (c) ![ $$\\left \[\\begin{array}{lll} 1&0&1\\\\ 0 &1 &0 \\\\ 1&0&1\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equkn.gif) (d) ![ $$\\left \[\\begin{array}{llll} {\\alpha }_{11} & 0 &\\cdots &0 \\\\ {\\alpha }_{21} & {\\alpha }_{22} & \\cdots &0\\\\ \\vdots &\\vdots &\\ddots &\\vdots\\\\ {\\alpha }_{ n1} & {\\alpha }_{n2} & \\cdots &{\\alpha }_{nn}\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equko.gif) In this case, it will be necessary to discuss whether or not α ii = 0 for each i = 1,..., n. 2. 
Find A − 1 for each of the following matrices: (a) ![ $$\\left \[\\begin{array}{cccc} 0&0&0&1\\\\ 0 &0 &1 &0 \\\\ 0&1&0&0\\\\ 1 &0 &0 &0\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equkp.gif) (b) ![ $$\\left \[\\begin{array}{cccc} 0&0&0&1\\\\ 1 &0 &0 &0 \\\\ 0&1&0&0\\\\ 0 &0 &1 &0\\end{array} \\right \]$$ ](A81414_1_En_1_Chapter_Equkq.gif) (c) ![ $$\\left \[\\begin{array}{cccc} 0&1&0&1\\\\ 1 &0 &0 &0 \\\\ 0&0&1&0\\\\ 0 &0 &0 &1\\end{array} \\right \].$$ ](A81414_1_En_1_Chapter_Equkr.gif) 3. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq576.gif). Show that we can find ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq577.gif) that is only a product of matrices of the types I ij and R ij α such that PA is upper triangular. 4. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq578.gif). We say that A has an LU decomposition if A = LU, where L is lower triangular with 1s on the diagonal and U is upper triangular. Show that A has an LU decomposition if all the leading principal minors are invertible. The leading principal k ×k minor is the k ×k submatrix gotten from A by eliminating the last n − k rows and columns. Hint: Do Gauss elimination using only R ij α. 5. Assume that A = PB, where ![ $$P \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq579.gif). (a) Show that kerA = kerB. (b) Show that if the column vectors ![ $${y}_{{i}_{1}},\\ldots ,{y}_{{i}_{k}}$$ ](A81414_1_En_1_Chapter_IEq580.gif) of B form a basis for imB, then the corresponding column vectors ![ $${x}_{{i}_{1}},\\ldots ,{x}_{{i}_{k}}$$ ](A81414_1_En_1_Chapter_IEq581.gif) for A form a basis for imA. 6. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq582.gif). (a) Show that the m ×m elementary matrices I ij , R ij α, M i α when multiplied on the right correspond to column operations. 
(b) Show that we can find Q ∈ ![ $$G{l}_{m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq583.gif) such that AQ is lower triangular. (c) Use this to conclude that imA = imAQ and describe a basis for imA. (d) Use Q to find a basis for kerA given a basis for kerAQ and describe how you select a basis for kerAQ. 7. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq584.gif) be upper triangular. (a) Show that dimkerA ≤ number of zero entries on the diagonal. (b) Give an example where dimkerA < number of zero entries on the diagonal. 8. In this exercise, you are asked to show some relationships between the elementary matrices. (a) Show that M i α = I ij M j αI ji . (b) Show that ![ $${R}_{ij}\\left \(\\alpha \\right \) = {M}_{j}\\left \({\\alpha }^{-1}\\right \){R}_{ij}\\left \(1\\right \){M}_{j}\\left \(\\alpha \\right \)$$ ](A81414_1_En_1_Chapter_IEq585.gif). (c) Show that ![ $${I}_{ij} = {R}_{ij}\\left \(-1\\right \){R}_{ji}\\left \(1\\right \){R}_{ij}\\left \(-1\\right \){M}_{j}\\left \(-1\\right \)$$ ](A81414_1_En_1_Chapter_IEq586.gif). (d) Show that R kl α = I ki I lj R ij αI jl I ik , where in case i = k or j = l we interpret ![ $${I}_{kk} = {I}_{ll} = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_1_Chapter_IEq587.gif). 9. A matrix ![ $$A \\in G{l}_{n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq588.gif) is a permutation matrix (see also Example 1.7.7) if Ae i = e σ(i) for each i = 1,..., n, for some bijective map (permutation) ![ $$\\sigma: \\left \\{1,\\ldots ,n\\right \\} \\rightarrow \\left \\{1,\\ldots ,n\\right \\}.$$ ](A81414_1_En_1_Chapter_Equks.gif) (a) Show that ![ $$A ={ \\sum\\nolimits }_{i=1}^{n}{E}_{ \\sigma \\left \(i\\right \)i}.$$ ](A81414_1_En_1_Chapter_Equkt.gif) (b) Show that A is a permutation matrix if and only if A has exactly one entry in each row and column which is 1 and all other entries are zero. 
(c) Show that A is a permutation matrix if and only if it is a product of the elementary matrices I ij . 10. Assume that we have two fields ![ $$\\mathbb{F} \\subset\\mathbb{L},$$ ](A81414_1_En_1_Chapter_IEq589.gif) such as ℝ ⊂ ℂ, and consider ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq590.gif). Let ![ $${A}_{\\mathbb{L}} \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{L}\\right \)$$ ](A81414_1_En_1_Chapter_IEq591.gif) be the matrix A thought of as an element of ![ $$\\mathrm{{Mat}}_{n\\times m}\\left \(\\mathbb{L}\\right \)$$ ](A81414_1_En_1_Chapter_IEq592.gif). Show that ![ $${\\dim }_{\\mathbb{F}}\\left \(\\ker \\left \(A\\right \)\\right \) {=\\dim }_{\\mathbb{L}}\\left \(\\ker \\left \({A}_{\\mathbb{L}}\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_IEq593.gif) and ![ $${\\dim }_{\\mathbb{F}}\\left \(\\mathrm{im}\\left \(A\\right \)\\right \) {=\\dim }_{\\mathbb{L}}\\left \(\\mathrm{im}\\left \({A}_{\\mathbb{L}}\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_IEq594.gif). Hint: Show that A and ![ $${A}_{\\mathbb{L}}$$ ](A81414_1_En_1_Chapter_IEq595.gif) have the same reduced row echelon form. 11. Given ![ $${\\alpha }_{ij} \\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq596.gif) for i < j and i, j = 1,..., n, we wish to solve ![ $$\\frac{{\\xi }_{i}} {{\\xi }_{j}} = {\\alpha }_{ij}.$$ ](A81414_1_En_1_Chapter_Equku.gif) (a) Show that this system either has no solutions or infinitely many solutions. Hint: Try n = 2, 3 first. (b) Give conditions on α ij that guarantee an infinite number of solutions. (c) Rearrange this system into a linear system and explain the above results. ## 1.14 Dual Spaces* Definition 1.14.1. For a vector space V over ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq597.gif), we define the dual vector space ![ $${V }^{{\\prime}} =\\mathrm{ Hom}\\left \(V, \\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq598.gif) as the set of linear functions on V. 
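On ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_1_Chapter_IEq597.gif) Definition 1.14.1 is very concrete: every linear function is x -> ax for a unique row vector a. A minimal numpy sketch (the particular vectors are arbitrary choices) illustrating this and the linearity of the pairing:

```python
import numpy as np

a = np.array([2., -1., 3.])     # the functional f(x) = 2*x1 - x2 + 3*x3
f = lambda v: a @ v             # as a row vector acting on column vectors

x = np.array([1., 4., 0.])
y = np.array([0., 1., 5.])

# linearity of f, i.e. linearity of the pairing (x, f) in the first variable
print(np.isclose(f(3*x + 2*y), 3*f(x) + 2*f(y)))
# the components of a are recovered as the values on the standard basis
print(np.allclose([f(e) for e in np.eye(3)], a))
```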
One often sees the notation V ∗ for V ′ . However, we have reserved V ∗ for the conjugate vector space to a complex vector space (see Exercise 6 in Sect. 1.4). When V is finite-dimensional we know that V and V ′ have the same dimension. In this section, we shall see how the dual vector space can be used as a substitute for an inner product on V in case V does not come with a natural inner product (see Chap. 3 for the theory on inner product spaces). We have a natural dual pairing ![ $$V \\times{V }^{{\\prime}}\\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq599.gif) defined by ![ $$\\left \(x,f\\right \) = f\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq600.gif) for x ∈ V and f ∈ V ′ . We are going to think of ![ $$\\left \(x,f\\right \)$$ ](A81414_1_En_1_Chapter_IEq601.gif) as a sort of inner product between x and f. Using this notation will enable us to make the theory virtually the same as for inner product spaces. Observe that this pairing is linear in both variables. Linearity in the first variable is a consequence of using linear functions in the second variable. Linearity in the second variable is completely trivial: ![ $$\\begin{array}{rcl} \\left \(\\alpha x + \\beta y,f\\right \)& =& f\\left \(\\alpha x + \\beta y\\right \) \\\\ & =& \\alpha f\\left \(x\\right \) + \\beta f\\left \(y\\right \) \\\\ & =& \\alpha \\left \(x,f\\right \) + \\beta \\left \(y,f\\right \), \\\\ \\left \(x,\\alpha f + \\beta g\\right \)& =& \\left \(\\alpha f + \\beta g\\right \)\\left \(x\\right \) \\\\ & =& \\alpha f\\left \(x\\right \) + \\beta g\\left \(x\\right \) \\\\ & =& \\alpha \\left \(x,f\\right \) + \\beta \\left \(x,g\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ128.gif) We start with our construction of a dual basis; these are similar to orthonormal bases. Let V have a basis x 1,..., x n , and define linear functions f i by f i x j = δ ij . 
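In coordinates this construction is explicit: if the basis vectors x 1 ,...,x n are the columns of an invertible matrix B, then the dual basis functionals f i are the rows of B − 1 . A numpy sketch (the basis B is an arbitrary invertible choice, not from the text):

```python
import numpy as np

# basis x_1, x_2, x_3 of R^3 as the columns of B
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# dual basis functionals f_i as the rows of B^{-1}
F = np.linalg.inv(B)
print(np.allclose(F @ B, np.eye(3)))    # f_i(x_j) = delta_ij

# the numbers f_i(x) are exactly the coordinates of x in the basis x_1,...,x_n
x = np.array([2., -1., 5.])
coords = F @ x                          # coords[i] = f_i(x)
print(np.allclose(B @ coords, x))       # x = f_1(x) x_1 + ... + f_n(x) x_n
```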
Thus, ![ $$\\left \({x}_{i},{f}_{j}\\right \) = {f}_{j}\\left \({x}_{i}\\right \) = {\\delta }_{ij}$$ ](A81414_1_En_1_Chapter_IEq602.gif). Example 1.14.2. Recall that we defined dx i : ℝ n -> ℝ as the linear function such that dx i e j = δ ij , where e 1,..., e n is the canonical basis for ℝ n . Thus, dx i is the dual basis to the canonical basis. Proposition 1.14.3. The vectors f 1 ,...,f n ∈ V ′ form a basis called the dual basis of x 1 ,...,x n . Moreover, for x ∈ V and f ∈ V ′ , we have the expansions ![ $$\\begin{array}{rcl} x& =& \\left \(x,{f}_{1}\\right \){x}_{1} + \\cdots+ \\left \(x,{f}_{n}\\right \){x}_{n}, \\\\ f& =& \\left \({x}_{1},f\\right \){f}_{1} + \\cdots+ \\left \({x}_{n},f\\right \){f}_{n}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ129.gif) Proof. Consider a linear combination ![ $${\\alpha }_{1}{f}_{1} + \\cdots+ {\\alpha }_{n}{f}_{n}$$ ](A81414_1_En_1_Chapter_IEq603.gif). Then, ![ $$\\begin{array}{rcl} \\left \({x}_{i},{\\alpha }_{1}{f}_{1} + \\cdots+ {\\alpha }_{n}{f}_{n}\\right \)& =& {\\alpha }_{1}\\left \({x}_{i},{f}_{1}\\right \) + \\cdots+ {\\alpha }_{n}\\left \({x}_{i},{f}_{n}\\right \) \\\\ & =& {\\alpha }_{i}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ130.gif) Thus, α i = 0 if ![ $${\\alpha }_{1}{f}_{1} + \\cdots+ {\\alpha }_{n}{f}_{n} = 0$$ ](A81414_1_En_1_Chapter_IEq604.gif). Since V and V ′ have the same dimension, this shows that f 1 ,..., f n form a basis for V ′ . Moreover, if we have an expansion ![ $$f = {\\alpha }_{1}{f}_{1} + \\cdots+ {\\alpha }_{n}{f}_{n},$$ ](A81414_1_En_1_Chapter_IEq605.gif) then it follows that ![ $${\\alpha }_{i} = \\left \({x}_{i},f\\right \) = f\\left \({x}_{i}\\right \)$$ ](A81414_1_En_1_Chapter_IEq606.gif). Finally, assume that ![ $$x = {\\beta }_{1}{x}_{1} + \\cdots+ {\\beta }_{n}{x}_{n}$$ ](A81414_1_En_1_Chapter_IEq607.gif). 
Then, ![ $$\\begin{array}{rcl} \\left \(x,{f}_{i}\\right \)& =& \\left \({\\beta }_{1}{x}_{1} + \\cdots+ {\\beta }_{n}{x}_{n},{f}_{i}\\right \) \\\\ & =& {\\beta }_{1}\\left \({x}_{1},{f}_{i}\\right \) + \\cdots+ {\\beta }_{n}\\left \({x}_{n},{f}_{i}\\right \) \\\\ & =& {\\beta }_{i}, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ131.gif) which is what we wanted to prove. Next, we define annihilators; these are counterparts to orthogonal complements. Definition 1.14.4. Let M ⊂ V be a subspace and define the annihilator to M in V as the subspace M o ⊂ V ′ given by ![ $$\\begin{array}{rcl}{ M}^{o}& =& \\left \\{f \\in{V }^{{\\prime}} : \\left \(x,f\\right \) = 0\\text{ for all }x \\in M\\right \\} \\\\ & =& \\left \\{f \\in{V }^{{\\prime}} : f\\left \(x\\right \) = 0\\text{ for all }x \\in M\\right \\} \\\\ & =& \\left \\{f \\in{V }^{{\\prime}} : f\\left \(M\\right \) = \\left \\{0\\right \\}\\right \\} \\\\ & =& \\left \\{f \\in{V }^{{\\prime}} : f{\\vert }_{ M} = 0\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ132.gif) Using dual bases, we can get a slightly better grip on these annihilators. Proposition 1.14.5. If M ⊂ V is a subspace of a finite-dimensional space and x 1 ,...,x n is a basis for V such that ![ $$M =\\mathrm{ span}\\left \\{{x}_{1},\\ldots ,{x}_{m}\\right \\},$$ ](A81414_1_En_1_Chapter_Equkv.gif) then ![ $${M}^{o} =\\mathrm{ span}\\left \\{{f}_{ m+1},\\ldots ,{f}_{n}\\right \\},$$ ](A81414_1_En_1_Chapter_Equkw.gif) where f 1 ,...,f n is the dual basis. In particular, we have ![ $$\\dim \\left \(M\\right \) +\\dim \\left \({M}^{o}\\right \) =\\dim \\left \(V \\right \) =\\dim \\left \({V }^{{\\prime}}\\right \).$$ ](A81414_1_En_1_Chapter_Equkx.gif) Proof. If M = span{x 1 ,..., x m }, then f m + 1,..., f n ∈ M o by definition of the annihilator as each of f m + 1,..., f n vanishes on the vectors x 1,..., x m . 
Conversely, take f ∈ M o and expand it ![ $$f = {\\alpha }_{1}{f}_{1} + \\cdots+ {\\alpha }_{n}{f}_{n}$$ ](A81414_1_En_1_Chapter_IEq608.gif). If 1 ≤ i ≤ m, then ![ $$0 = \\left \({x}_{i},f\\right \) = {\\alpha }_{i}.$$ ](A81414_1_En_1_Chapter_Equky.gif) So ![ $$f = {\\alpha }_{m+1}{f}_{m+1} + \\cdots+ {\\alpha }_{n}{f}_{n}$$ ](A81414_1_En_1_Chapter_IEq609.gif) as desired. We are now ready to discuss the reflexive property. This will allow us to go from V ′ back to V itself rather than to ![ $${\\left \({V }^{{\\prime}}\\right \)}^{{\\prime}} = {V }^{{\\prime\\prime}}$$ ](A81414_1_En_1_Chapter_IEq610.gif). Thus, we have to find a natural identification V -> V ′′ . There is, indeed, a natural linear map ![ $$\\mathrm{ev} : V \\rightarrow{V }^{{\\prime\\prime}}$$ ](A81414_1_En_1_Chapter_Equkz.gif) that takes each x ∈ V to a linear function on V ′ defined by ![ $$\\mathrm{{ev}}_{x}\\left \(f\\right \) = \\left \(x,f\\right \) = f\\left \(x\\right \)$$ ](A81414_1_En_1_Chapter_IEq611.gif). To see that it is linear, observe that ![ $$\\begin{array}{rcl} \\left \(\\alpha x + \\beta y,f\\right \)& =& f\\left \(\\alpha x + \\beta y\\right \) \\\\ & =& \\alpha f\\left \(x\\right \) + \\beta f\\left \(y\\right \) \\\\ & =& \\alpha \\left \(x,f\\right \) + \\beta \\left \(y,f\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ133.gif) Evidently, we have defined ev x in such a way that ![ $$\\left \(x,f\\right \) = \\left \(f,\\mathrm{{ev}}_{x}\\right \).$$ ](A81414_1_En_1_Chapter_Equla.gif) The map x -> ev x always has trivial kernel. To prove this in the finite-dimensional case, select a dual basis f 1,..., f n for V ′ and observe that since ev x f i = ( x,f i ) records the coordinates of x, it is not possible for x to be in the kernel unless it is zero. 
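The dimension count dim M + dim M o = dim V from Proposition 1.14.5 can also be checked numerically: identifying functionals with row vectors, a functional annihilates M = span of the columns of B exactly when it lies in the null space of B T . A sketch (the subspace M and the SVD-based null-space helper are illustrative choices):

```python
import numpy as np

def null_space(A, tol=1e-10):
    # basis of ker A as the columns of the returned matrix, via the SVD
    u, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

# M = span of the columns of B inside V = R^4
B = np.array([[1., 0.],
              [2., 1.],
              [0., 1.],
              [1., 1.]])

# as row vectors, M^o consists of the f with f B = 0, i.e. ker(B^T)
Mo = null_space(B.T)
dim_M = np.linalg.matrix_rank(B)
dim_Mo = Mo.shape[1]
print(dim_M + dim_Mo)            # dim M + dim M^o = dim V = 4
print(np.allclose(Mo.T @ B, 0))  # every functional found really kills M
```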
Finally, we use that ![ $$\\dim \\left \(V \\right \) =\\dim \\left \({V }^{{\\prime}}\\right \) =\\dim \\left \({V }^{{\\prime\\prime}}\\right \)$$ ](A81414_1_En_1_Chapter_IEq612.gif) to conclude that this map is an isomorphism. Thus, any element of V ′′ is of the form ev x for a unique x ∈ V. The first interesting observation we make is that if f 1,..., f n is dual to x 1,..., x n , then ![ $$\\mathrm{{ev}}_{{x}_{1}},\\ldots ,\\mathrm{{ev}}_{{x}_{n}}$$ ](A81414_1_En_1_Chapter_IEq613.gif) is dual to f 1,..., f n as ![ $$\\mathrm{{ev}}_{{x}_{i}}\\left \({f}_{j}\\right \) = \\left \({x}_{i},{f}_{j}\\right \) = {\\delta }_{ij}.$$ ](A81414_1_En_1_Chapter_Equlb.gif) If we agree to identify V ′′ with V , i.e., we think of x as identified with ev x , then we can define the annihilator of a subspace N ⊂ V ′ by ![ $$\\begin{array}{rcl}{ N}^{o}& =& \\left \\{x \\in V : \\left \(x,f\\right \) = 0\\text{ for all }f \\in N\\right \\} \\\\ & =& \\left \\{x \\in V : f\\left \(x\\right \) = 0\\text{ for all }f \\in N\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ134.gif) The claim then is that for M ⊂ V and N ⊂ V ′ , we have M oo = M and N oo = N. Both identities follow directly from the above proposition about the construction of a basis for the annihilator. We now come to an interesting relationship between annihilators and the dual spaces of subspaces. Proposition 1.14.6. Assume that V is finite-dimensional. If V = M ⊕ N, then V ′ = M o ⊕ N o and the restriction maps V ′ -> M ′ and V ′ -> N ′ give isomorphisms ![ $$\\begin{array}{rcl}{ M}^{o}& \\approx & {N}^{{\\prime}}, \\\\ {N}^{o}& \\approx & {M}^{{\\prime}}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ135.gif) Proof. 
Select a basis x 1,..., x n for V such that ![ $$\\begin{array}{rcl} M& =& \\mathrm{span}\\left \\{{x}_{1},\\ldots ,{x}_{m}\\right \\}, \\\\ N& =& \\mathrm{span}\\left \\{{x}_{m+1},\\ldots ,{x}_{n}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ136.gif) Let f 1,..., f n be the dual basis and observe that ![ $$\\begin{array}{rcl}{ M}^{o}& =& \\mathrm{span}\\left \\{{f}_{ m+1},\\ldots ,{f}_{n}\\right \\}, \\\\ {N}^{o}& =& \\mathrm{span}\\left \\{{f}_{ 1},\\ldots ,{f}_{m}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ137.gif) This proves that V ′ = M o ⊕ N o . Next, we note that ![ $$\\begin{array}{rcl} \\dim \\left \({M}^{o}\\right \)& =& \\dim \\left \(V \\right \) -\\dim \\left \(M\\right \) \\\\ & =& \\dim \\left \(N\\right \) \\\\ & =& \\dim \\left \({N}^{{\\prime}}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ138.gif) So at least M o and N ′ have the same dimension. What is more, if we restrict f m + 1,..., f n to N, then we still have that ![ $$\\left \({x}_{j},{f}_{i}\\right \) = {\\delta }_{ij}$$ ](A81414_1_En_1_Chapter_IEq614.gif) for ![ $$j = m + 1,\\ldots ,n$$ ](A81414_1_En_1_Chapter_IEq615.gif). As ![ $$N =\\mathrm{ span}\\left \\{{x}_{m+1},\\ldots ,{x}_{n}\\right \\},$$ ](A81414_1_En_1_Chapter_IEq616.gif) this means that f m + 1 | N ,..., f n | N form a basis for N ′ . The proof that N o ≈ M ′ is similar. The main problem with using dual spaces rather than inner products is that while we usually have a good picture of what V is, we rarely get a good independent description of the dual space. Thus, the constructions mentioned here should be thought of as being theoretical and strictly auxiliary to the developments of the theory of linear operators on a fixed vector space V. Below, we consider a few examples of constructions of dual spaces. Example 1.14.7. 
Let ![ $$V =\\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq617.gif) then we can identify ![ $${V }^{{\\prime}} =\\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_1_Chapter_IEq618.gif). For each ![ $$A \\in \\mathrm{{ Mat}}_{m\\times n}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_1_Chapter_IEq619.gif) the corresponding linear function is ![ $${f}_{A}\\left \(X\\right \) =\\mathrm{ tr}\\left \(AX\\right \) =\\mathrm{ tr}\\left \(XA\\right \).$$ ](A81414_1_En_1_Chapter_Equlc.gif) Example 1.14.8. If V is a finite-dimensional inner product space, then f y ( x ) = ( x | y ) defines a linear function, and we know that all linear functions are of that form. Thus, we can identify V ′ with V. Note, however, that in the complex case, y -> f y is not complex linear. It is in fact conjugate linear, i.e., ![ $${f}_{\\lambda y} =\\bar{ \\lambda }{f}_{y}$$ ](A81414_1_En_1_Chapter_IEq620.gif). Thus, V ′ is identified with V ∗ (see Exercise 6 in Sect. 1.4). This conforms with the idea that the inner product defines a bilinear pairing on V ×V ∗ via ![ $$\\left \(x,y\\right \) \\rightarrow \\left \(x\\vert y\\right \)$$ ](A81414_1_En_1_Chapter_IEq621.gif) that is linear in both variables! Example 1.14.9. If we think of V as ℝ with ℚ as scalar multiplication, then it is not at all clear that we have any linear functions ![ $$f : \\mathbb{R} \\rightarrow\\mathbb{Q}$$ ](A81414_1_En_1_Chapter_IEq622.gif). In fact, the axiom of choice has to be invoked in order to show that they exist. Example 1.14.10. Finally, we have an exceedingly interesting infinite-dimensional example where the dual gets quite a bit bigger. Let ![ $$V = \\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_1_Chapter_IEq623.gif) be the vector space of polynomials. We have a natural basis 1, t, t 2,.... 
Thus, a linear map ![ $$f : \\mathbb{F}\\left \[t\\right \] \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq624.gif) is determined by its values on this basis: α n = f ( t n ). Conversely, given an infinite sequence ![ $${\\alpha }_{0},{\\alpha }_{1},{\\alpha }_{2},\\ldots\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq625.gif), we have a linear map such that f ( t n ) = α n . So while V consists of finite sequences of elements from ![ $$\\mathbb{F},$$ ](A81414_1_En_1_Chapter_IEq626.gif) the dual consists of infinite sequences of elements from ![ $$\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq627.gif). We can evidently identify ![ $${V }^{{\\prime}} = \\mathbb{F}\\left \[\\left \[t\\right \]\\right \]$$ ](A81414_1_En_1_Chapter_IEq628.gif) with power series by recording the values on the basis as coefficients: ![ $${\\sum\\nolimits }_{n=0}^{\\infty }{\\alpha }_{ n}{t}^{n} ={ \\sum\\nolimits }_{n=0}^{\\infty }f\\left \({t}^{n}\\right \){t}^{n}.$$ ](A81414_1_En_1_Chapter_Equld.gif) This means that V ′ inherits a product structure through taking products of power series. There is a large literature on this whole setup under the title Umbral Calculus. For more on this, see [Roman]. Definition 1.14.11. The dual space construction leads to a dual map L ′ : W ′ -> V ′ for a linear map L : V -> W. This dual map is a generalization of the transpose of a matrix. The definition is quite simple: ![ $${L}^{{\\prime}}\\left \(g\\right \) = g \\circ L.$$ ](A81414_1_En_1_Chapter_Equle.gif) Thus, if g ∈ W ′ , then we get a linear function ![ $$g \\circ L : V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq629.gif) since L : V -> W. The dual to L is often denoted L ′ = L t as with matrices. This will be justified in the exercises to this section. 
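In coordinates, the dual map of a matrix is computed by its transpose, which is what the notation L t suggests: identifying functionals on ℝ n with row vectors g, the functional L ′ g = g ∘ L corresponds to the row vector gA. A numpy sketch (random data, seeded only for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # a linear map L : R^4 -> R^3 in coordinates

x = rng.standard_normal(4)        # a vector in the domain
g = rng.standard_normal(3)        # a functional on R^3, as a row vector

# (L(x), g) = (x, L'(g)): g applied to L(x) equals the row vector gA applied to x
print(np.isclose(g @ (A @ x), (g @ A) @ x))
# writing the functional g as a column vector instead, L' acts as A^T
print(np.allclose(g @ A, A.T @ g))
```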
Note that if we use the pairing ![ $$\\left \(x,f\\right \)$$ ](A81414_1_En_1_Chapter_IEq630.gif) between V and V ′ , then the dual map satisfies ![ $$\\left \(L\\left \(x\\right \),g\\right \) = \\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \)$$ ](A81414_1_En_1_Chapter_Equlf.gif) for all x ∈ V and g ∈ W ′ . Thus, the dual map really is defined in a manner analogous to the adjoint. The following properties follow almost immediately from the definition. Proposition 1.14.12. Let ![ $$L,\\tilde{L} : V \\rightarrow W$$ ](A81414_1_En_1_Chapter_IEq631.gif) and K : W -> U be linear maps between finite-dimensional vector spaces; then: (1) ![ $${\\left \(\\alpha L + \\beta \\tilde{L}\\right \)}^{{\\prime}} = \\alpha {L}^{{\\prime}} + \\beta \\tilde{{L}}^{{\\prime}}.$$ ](A81414_1_En_1_Chapter_IEq632.gif) (2) ![ $${\\left \(K \\circ L\\right \)}^{{\\prime}} = {L}^{{\\prime}}\\circ{K}^{{\\prime}}.$$ ](A81414_1_En_1_Chapter_IEq633.gif) (3) ![ $${L}^{{\\prime\\prime}} ={ \\left \({L}^{{\\prime}}\\right \)}^{{\\prime}} = L$$ ](A81414_1_En_1_Chapter_IEq634.gif) if we identify V ″ = V and W ″ = W. (4) If M ⊂ V and N ⊂ W are subspaces with LM ⊂ N, then L ′ N o ⊂ M o . Proof. 1. Just note that ![ $$\\begin{array}{rcl}{ \\left \(\\alpha L + \\beta \\tilde{L}\\right \)}^{{\\prime}}\\left \(g\\right \)& =& g \\circ \\left \(\\alpha L + \\beta \\tilde{L}\\right \) \\\\ & =& \\alpha g \\circ L + \\beta g \\circ \\tilde{ L} \\\\ & =& \\alpha {L}^{{\\prime}}\\left \(g\\right \) + \\beta \\tilde{{L}}^{{\\prime}}\\left \(g\\right \) \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ139.gif) as g is linear. 2. This comes from ![ $$\\begin{array}{rcl}{ \\left \(K \\circ L\\right \)}^{{\\prime}}\\left \(h\\right \)& =& h \\circ \\left \(K \\circ L\\right \) \\\\ & =& \\left \(h \\circ K\\right \) \\circ L \\\\ & =& {K}^{{\\prime}}\\left \(h\\right \) \\circ L \\\\ & =& {L}^{{\\prime}}\\left \({K}^{{\\prime}}\\left \(h\\right \)\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ140.gif) 3. 
Note that L ′′ : V ′′ -> W ′′ . If we take ev x ∈ V ′′ and use ![ $$\\left \(x,f\\right \) = \\left \(f,\\mathrm{{ev}}_{x}\\right \)$$ ](A81414_1_En_1_Chapter_IEq635.gif), then ![ $$\\begin{array}{rcl} \\left \(g,{L}^{{\\prime\\prime}}\\left \(\\mathrm{{ev}}_{ x}\\right \)\\right \)& =& \\left \({L}^{{\\prime}}\\left \(g\\right \),\\mathrm{{ev}}_{ x}\\right \) \\\\ & =& \\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \) \\\\ & =& \\left \(L\\left \(x\\right \),g\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ141.gif) This shows that L ′′ ev x is identified with Lx as desired. 4. If g ∈ W ′ , then we have that ![ $$\\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \) = \\left \(L\\left \(x\\right \),g\\right \)$$ ](A81414_1_En_1_Chapter_IEq636.gif). So if x ∈ M, then we have Lx ∈ N, and hence, gLx = 0 for g ∈ N o . This means that L ′ g ∈ M o . Just like for adjoint maps, we have a type of Fredholm alternative for dual maps. Theorem 1.14.13. (The Generalized Fredholm Alternative) Let L : V -> W be a linear map between finite-dimensional vector spaces. Then, ![ $$\\begin{array}{rcl} \\ker \\left \(L\\right \)& =& \\mathrm{im}{\\left \({L}^{{\\prime}}\\right \)}^{o}, \\\\ \\ker \\left \({L}^{{\\prime}}\\right \)& =& \\mathrm{im}{\\left \(L\\right \)}^{o}, \\\\ \\ker {\\left \(L\\right \)}^{o}& =& \\mathrm{im}\\left \({L}^{{\\prime}}\\right \), \\\\ \\ker {\\left \({L}^{{\\prime}}\\right \)}^{o}& =& \\mathrm{im}\\left \(L\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ142.gif) Proof. 
We only need to prove the first statement since L ′′ = L and M oo = M: ![ $$\\begin{array}{rcl} \\ker \\left \(L\\right \)& =& \\left \\{x \\in V : Lx = 0\\right \\}, \\\\ \\mathrm{im}{\\left \({L}^{{\\prime}}\\right \)}^{o}& =& \\left \\{x \\in V : \\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \) = 0\\text{ for all }g \\in{W}^{{\\prime}}\\right \\}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ143.gif) Using that ![ $$\\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \) = \\left \(L\\left \(x\\right \),g\\right \)$$ ](A81414_1_En_1_Chapter_IEq637.gif), we note first that if x ∈ kerL, then it must also belong to imL ′ o . Conversely, if ![ $$0 = \\left \(x,{L}^{{\\prime}}\\left \(g\\right \)\\right \) = \\left \(L\\left \(x\\right \),g\\right \)$$ ](A81414_1_En_1_Chapter_IEq638.gif) for all g ∈ W ′ , it must follow that Lx = 0 and hence x ∈ kerL. As a corollary, we obtain a new version of the rank theorem (Theorem 1.12.11). Corollary 1.14.14. (The Rank Theorem) Let L : V -> W be a linear map between finite-dimensional vector spaces. Then, ![ $$\\mathrm{rank}\\left \(L\\right \) =\\mathrm{ rank}\\left \({L}^{{\\prime}}\\right \).$$ ](A81414_1_En_1_Chapter_Equlg.gif) Proof. The Fredholm alternative together with the dimension formula (Theorem 1.11.7) immediately shows: ![ $$\\begin{array}{rcl} \\mathrm{rank}\\left \(L\\right \)& =& \\dim V -\\dim \\ker \\left \(L\\right \) \\\\ & =& \\dim V -\\dim \\mathrm{ im}{\\left \({L}^{{\\prime}}\\right \)}^{o} \\\\ & =& \\dim V -\\dim V +\\dim \\mathrm{ im}\\left \({L}^{{\\prime}}\\right \) \\\\ & =& \\mathrm{rank}\\left \({L}^{{\\prime}}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ144.gif) ### 1.14.1 Exercises 1. Let x 1,..., x n be a basis for V and f 1,..., f n a dual basis for V ′ . 
Show that the inverses to the isomorphisms ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \]& :& {\\mathbb{F}}^{n} \\rightarrow V, \\\\ \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n}\\end{array} \\right \]& :& {\\mathbb{F}}^{n} \\rightarrow{V }^{{\\prime}}\\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ145.gif) are given by ![ $$\\begin{array}{rcl}{ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \]}^{-1}\\left \(x\\right \)& =& \\left \[\\begin{array}{c} {f}_{1}\\left \(x\\right \)\\\\ \\vdots \\\\ {f}_{n}\\left \(x\\right \)\\end{array} \\right \], \\\\ {\\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n}\\end{array} \\right \]}^{-1}\\left \(f\\right \)& =& \\left \[\\begin{array}{c} f\\left \({x}_{1}\\right \)\\\\ \\vdots \\\\ f\\left \({x}_{n}\\right \)\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_1_Chapter_Equ146.gif) 2. Let L : V -> W with basis x 1,..., x m for V, y 1,..., y n for W and dual basis g 1,..., g n for W ′ . Show that we have ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m}\\end{array} \\right \]\\left \[L\\right \]{\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n}\\end{array} \\right \]}^{-1} \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m}\\end{array} \\right \]\\left \[L\\right \]\\left \[\\begin{array}{c} {g}_{1}\\\\ \\vdots \\\\ {g}_{n}\\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ147.gif) where ![ $$\\left \[L\\right \]$$ ](A81414_1_En_1_Chapter_IEq639.gif) is the matrix representation for L with respect to the given bases. 3. Given the basis 1, t, t 2 for P 2, identify P 2 with ℂ 3 (column vectors) and ![ $${\\left \({P}_{2}\\right \)}^{{\\prime}}$$ ](A81414_1_En_1_Chapter_IEq640.gif) with Mat1 ×3 ℂ (row vectors). 
(a) Using these identifications, find a dual basis to ![ $$1,1 + t,1 + t + {t}^{2}$$ ](A81414_1_En_1_Chapter_IEq641.gif) in ![ $${\\left \({P}_{2}\\right \)}^{{\\prime}}$$ ](A81414_1_En_1_Chapter_IEq642.gif). (b) Using these identifications, find the matrix representation for f ∈ P 2 ′ defined by ![ $$f\\left \(p\\right \) = p\\left \({t}_{0}\\right \).$$ ](A81414_1_En_1_Chapter_Equlh.gif) (c) Using these identifications, find the matrix representation for f ∈ P 2 ′ defined by ![ $$f\\left \(p\\right \) ={ \\int\\nolimits \\nolimits }_{a}^{b}p\\left \(t\\right \)\\mathrm{d}t.$$ ](A81414_1_En_1_Chapter_Equli.gif) (d) Are all elements in ![ $${\\left \({P}_{2}\\right \)}^{{\\prime}}$$ ](A81414_1_En_1_Chapter_IEq643.gif) represented by the types of linear functions described in either (b) or (c)? 4. (Lagrange Multiplier Construction) Let f, g ∈ V ′ and assume that g≠0. Show that f = λg for some ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq644.gif) if and only if kerf ⊃ kerg. 5. Let M ⊂ V be a subspace. Show that we have linear maps ![ $${M}^{o} \\frac{i} {\\rightarrow }{V }^{{\\prime}} \\frac{\\pi } {\\rightarrow }{M}^{{\\prime}},$$ ](A81414_1_En_1_Chapter_Equlj.gif) where ι is one-to-one, π is onto, and imi = kerπ. Conclude that V ′ is isomorphic to M o ×M ′ . 6. Let V and W be finite-dimensional vector spaces. Exhibit an isomorphism between V ′ ×W ′ and ![ $${\\left \(V \\times W\\right \)}^{{\\prime}}$$ ](A81414_1_En_1_Chapter_IEq645.gif) that does not depend on choosing bases for V and W. 7. Let M, N ⊂ V be subspaces of a finite-dimensional vector space. Show that ![ $$\\begin{array}{rcl} {M}^{o} + {N}^{o}& =&{ \\left \(M \\cap N\\right \)}^{o}, \\\\ {\\left \(M + N\\right \)}^{o}& =& {M}^{o} \\cap{N}^{o}.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ148.gif) 8. Let L : V -> W and assume that we have bases x 1,..., x m for V , y 1,..., y n for W and corresponding dual bases f 1,..., f m , for V ′ and g 1,..., g n for W ′ . 
Show that if ![ $$\\left \[L\\right \]$$ ](A81414_1_En_1_Chapter_IEq646.gif) is the matrix representation for L with respect to the given bases, then ![ $${\\left \[L\\right \]}^{t} = \\left \[{L}^{{\\prime}}\\right \]$$ ](A81414_1_En_1_Chapter_IEq647.gif) with respect to the dual bases. 9. Assume that L : V -> W is a linear map and that LM ⊂ N for subspaces M ⊂ V and N ⊂ W. Is there a relationship between ![ $${\\left \(L{\\vert }_{M}\\right \)}^{{\\prime}} : {N}^{{\\prime}}\\rightarrow{M}^{{\\prime}}$$ ](A81414_1_En_1_Chapter_IEq648.gif) and ![ $${L}^{{\\prime}}{\\vert }_{{N}^{o}} : {N}^{o} \\rightarrow $$ ](A81414_1_En_1_Chapter_IEq649.gif) M o ? 10. (The Rank Theorem) This exercise is an abstract version of what happened in the proof of the Rank Theorem 1.12.11 in Sect. 1.12. Let L : V -> W and x 1,..., x k a basis for imL. (a) Show that ![ $$L\\left \(x\\right \) = \\left \(x,{f}_{1}\\right \){x}_{1} + \\cdots+ \\left \(x,{f}_{k}\\right \){x}_{k}$$ ](A81414_1_En_1_Chapter_Equlk.gif) for suitable f 1,..., f k ∈ V ′ . (b) Show that ![ $${L}^{{\\prime}}\\left \(f\\right \) = \\left \({x}_{ 1},f\\right \){f}_{1} + \\cdots+ \\left \({x}_{k},f\\right \){f}_{k}$$ ](A81414_1_En_1_Chapter_Equll.gif) for f ∈ W ′ . (c) Conclude that rankL ′ ≤ rankL. (d) Show that rankL ′ = rankL. 11. Let M ⊂ V be a finite-dimensional subspace of V and x 1,..., x k a basis for M. Let ![ $$L\\left \(x\\right \) = \\left \(x,{f}_{1}\\right \){x}_{1} + \\cdots+ \\left \(x,{f}_{k}\\right \){x}_{k}$$ ](A81414_1_En_1_Chapter_Equlm.gif) for f 1,..., f k ∈ V ′ . (a) If ![ $$\\left \({x}_{j},{f}_{i}\\right \) = {\\delta }_{ij},$$ ](A81414_1_En_1_Chapter_IEq650.gif) then L is a projection onto M, i.e., L 2 = L and imL = M. (b) If E is a projection onto M, then ![ $$E = \\left \(x,{f}_{1}\\right \){x}_{1} + \\cdots+ \\left \(x,{f}_{k}\\right \){x}_{k},$$ ](A81414_1_En_1_Chapter_Equln.gif) with ![ $$\\left \({x}_{j},{f}_{i}\\right \) = {\\delta }_{ij}$$ ](A81414_1_En_1_Chapter_IEq651.gif). 12. 
Let M, N ⊂ V be subspaces of a finite-dimensional vector space and consider L : M ×N -> V defined by ![ $$L\\left \(x,y\\right \) = x - y$$ ](A81414_1_En_1_Chapter_IEq652.gif). (a) Show that ![ $${L}^{{\\prime}}\\left \(f\\right \)\\left \(x,y\\right \) = f\\left \(x\\right \) - f\\left \(y\\right \)$$ ](A81414_1_En_1_Chapter_IEq653.gif). (b) Show that kerL ′ can be identified with both M o ∩ N o and ![ $${\\left \(M + N\\right \)}^{o}$$ ](A81414_1_En_1_Chapter_IEq654.gif). ## 1.15 Quotient Spaces* In Sect. 1.14, we saw that if M ⊂ V is a subspace of a general vector space, then the annihilator subspace M o ⊂ V ′ can play the role of a canonical complement of M. One thing missing from this setup, however, is the projection whose kernel is M. In this section, we shall construct a different type of vector space that can substitute as a complement to M. It is called the quotient space of V over M and is denoted V ∕ M. In this case, there is an onto linear map P : V -> V ∕ M whose kernel is M. The quotient space construction is somewhat abstract, but it is also quite general and can be developed with a minimum of information as we shall see. It is in fact quite fundamental and can be used to prove several of the important results mentioned in Sect. 1.11. Similar to addition for subspaces in Sect. 1.10, we can in fact define addition for any subsets of a vector space. Definition 1.15.1. If S, T ⊂ V are subsets, then we define ![ $$S + T = \\left \\{x + y : x \\in S\\text{ and }y \\in T\\right \\}.$$ ](A81414_1_En_1_Chapter_Equlo.gif) It is immediately clear that this addition on subsets is associative and commutative. In case one of the sets contains only one element, we simplify the notation by writing ![ $$S + \\left \\{{x}_{0}\\right \\} = S + {x}_{0} = \\left \\{x + {x}_{0} : x \\in S\\right \\},$$ ](A81414_1_En_1_Chapter_Equlp.gif) and we call S + x 0 a translate of S. Geometrically, all of the sets S + x 0 appear to be parallel pictures of S (see Fig. 
1.4) that are translated in V as we change x 0. We also say that S and T are parallel and denote it S ∥ T if ![ $$T = S + {x}_{0}$$ ](A81414_1_En_1_Chapter_IEq655.gif) for some x 0 ∈ V. Fig. 1.4 Parallel subspaces It is also possible to scale subsets ![ $$\\alpha S = \\left \\{\\alpha x : x \\in S\\right \\}.$$ ](A81414_1_En_1_Chapter_Equlq.gif) This scalar multiplication satisfies some of the usual properties of scalar multiplication ![ $$\\begin{array}{rcl} \\left \(\\alpha \\beta \\right \)S& =& \\alpha \\left \(\\beta S\\right \), \\\\ 1S& =& S, \\\\ \\alpha \\left \(S + T\\right \)& =& \\alpha S + \\alpha T.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ149.gif) However, the other distributive law can fail ![ $$\\left \(\\alpha+ \\beta \\right \)S\\mathop{ =}\\limits^{?}\\alpha S + \\beta S$$ ](A81414_1_En_1_Chapter_Equlr.gif) since it may not be true that ![ $$2S\\mathop{ =}\\limits^{?}S + S.$$ ](A81414_1_En_1_Chapter_Equls.gif) Certainly, 2S ⊂ S + S, but elements x + y do not have to belong to 2S if x, y ∈ S are distinct. Take, e.g., ![ $$S = \\left \\{x,-x\\right \\},$$ ](A81414_1_En_1_Chapter_IEq656.gif) where x≠0. Then, ![ $$2S = \\left \\{2x,-2x\\right \\},$$ ](A81414_1_En_1_Chapter_IEq657.gif) while ![ $$S + S = \\left \\{2x,0,-2x\\right \\}$$ ](A81414_1_En_1_Chapter_IEq658.gif). Definition 1.15.2. Our picture of the quotient space V ∕ M, when M ⊂ V is a subspace, is the set of all translates M + x 0 for x 0 ∈ V ![ $$V/M = \\left \\{M + {x}_{0} : {x}_{0} \\in V \\right \\}$$ ](A81414_1_En_1_Chapter_Eqult.gif) Several of these translates are in fact equal as ![ $${x}_{1} + M = {x}_{2} + M$$ ](A81414_1_En_1_Chapter_Equlu.gif) precisely when x 1 − x 2 ∈ M. To see why this is, note that if z ∈ M, then ![ $$z + M = M$$ ](A81414_1_En_1_Chapter_IEq659.gif) since M is a subspace. 
Thus, x 1 − x 2 ∈ M implies that ![ $$\\begin{array}{rcl}{ x}_{1} + M& =& {x}_{2} + \\left \({x}_{1} - {x}_{2}\\right \) + M \\\\ & =& {x}_{2} + M.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ150.gif) Conversely, if ![ $${x}_{1} + M = {x}_{2} + M,$$ ](A81414_1_En_1_Chapter_IEq660.gif) then ![ $${x}_{1} = {x}_{2} + x$$ ](A81414_1_En_1_Chapter_IEq661.gif) for some x ∈ M implying that x 1 − x 2 ∈ M. We see that in the trivial case where M = 0, the translates of ![ $$\\left \\{0\\right \\}$$ ](A81414_1_En_1_Chapter_IEq662.gif) can be identified with V itself. Thus, V ∕ 0 ≈ V. In the other trivial case where M = V , all the translates are simply V itself. So V ∕ V is the one element set ![ $$\\left \\{V \\right \\}$$ ](A81414_1_En_1_Chapter_IEq663.gif) whose element is the vector space V. We now need to see how addition and scalar multiplication works on V ∕ M. The important property that simplifies calculations and will turn V ∕ M into a vector space is the fact that M is a subspace, i.e., for all scalars ![ $$\\alpha ,\\beta\\in\\mathbb{F}$$ ](A81414_1_En_1_Chapter_IEq664.gif), ![ $$\\alpha M + \\beta M = M.$$ ](A81414_1_En_1_Chapter_Equlv.gif) This implies that addition and scalar multiplication is considerably simplified. ![ $$\\begin{array}{rcl} \\alpha \\left \(M + x\\right \) + \\beta \\left \(M + y\\right \)& =& \\alpha M + \\beta M + \\alpha x + \\beta y \\\\ & =& M + \\alpha x + \\beta y.\\end{array}$$ ](A81414_1_En_1_Chapter_Equ151.gif) With this in mind, we can show that V ∕ M is a vector space. The zero element is M since ![ $$M + \\left \(M + {x}_{0}\\right \) = M + {x}_{0}$$ ](A81414_1_En_1_Chapter_IEq665.gif). The negative of M + x 0 is the translate M − x 0. 
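The coset arithmetic just described can be sketched concretely in code. The following is a minimal Python model (the encoding and all names are our own, purely illustrative): take V = ℚ², M = span{(1,1)}, and encode the coset x + M by the unique representative whose second coordinate is 0, namely x − x₂(1,1).

```python
from fractions import Fraction

# V = Q^2, M = span{(1, 1)}.  The coset x + M is encoded by the unique
# representative with second coordinate 0: x - x2*(1, 1) = (x1 - x2, 0).
def rep(x):
    x1, x2 = map(Fraction, x)
    return (x1 - x2, Fraction(0))

def same_coset(x, y):
    # x + M = y + M precisely when x - y lies in M
    return rep(x) == rep(y)

def coset_add(x, y):
    # (M + x) + (M + y) = M + (x + y): computed on representatives
    return rep((rep(x)[0] + rep(y)[0], 0))

# (3,1) + M = (2,0) + M since (3,1) - (2,0) = (1,1) lies in M
print(same_coset((3, 1), (2, 0)))                              # True
# The sum of cosets does not depend on the chosen representatives
print(coset_add((3, 1), (5, 5)) == coset_add((2, 0), (0, 0)))  # True
```

Equality of cosets reduces to equality of representatives, and the simplification αM + βM = M is what makes `coset_add` independent of the chosen representatives.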
Finally, the important distributive law that was not true in general also holds because ![ $$\\begin{array}{rcl} \\left \(\\alpha+ \\beta \\right \)\\left \(M + {x}_{0}\\right \)& =& M + \\left \(\\alpha+ \\beta \\right \){x}_{0} \\\\ & =& M + \\alpha {x}_{0} + \\beta {x}_{0} \\\\ & =& \\left \(M + \\alpha {x}_{0}\\right \) + \\left \(M + \\beta {x}_{0}\\right \) \\\\ & =& \\alpha \\left \(M + {x}_{0}\\right \) + \\beta \\left \(M + {x}_{0}\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ152.gif) The "projection" P : V -> V ∕ M is now defined by ![ $$P\\left \(x\\right \) = M + x.$$ ](A81414_1_En_1_Chapter_Equlw.gif) Clearly, P is onto and Px = 0 if and only if x ∈ M. The fact that P is linear follows from the way we add elements in V ∕ M, ![ $$\\begin{array}{rcl} P\\left \(\\alpha x + \\beta y\\right \)& =& M + \\alpha x + \\beta y \\\\ & =& \\alpha \\left \(M + x\\right \) + \\beta \\left \(M + y\\right \) \\\\ & =& \\alpha P\\left \(x\\right \) + \\beta P\\left \(y\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ153.gif) This projection can be generalized to the setting where M ⊂ N ⊂ V. Here we get ![ $$V/M \\rightarrow V/N$$ ](A81414_1_En_1_Chapter_IEq666.gif) by mapping x + M to x + N. If L : V -> W and M ⊂ V , LM ⊂ N ⊂ W, then we get an induced map ![ $$L : V/M \\rightarrow W/N$$ ](A81414_1_En_1_Chapter_IEq667.gif) by sending x + M to Lx + N. We need to check that this indeed gives a well-defined map. Assuming that ![ $${x}_{1} + M = {x}_{2} + M$$ ](A81414_1_En_1_Chapter_IEq668.gif), we have to show that ![ $$L\\left \({x}_{1}\\right \) + N = L\\left \({x}_{2}\\right \) + N$$ ](A81414_1_En_1_Chapter_IEq669.gif). 
The first condition is equivalent to x 1 − x 2 ∈ M; thus, ![ $$\\begin{array}{rcl} L\\left \({x}_{1}\\right \) - L\\left \({x}_{2}\\right \)& =& L\\left \({x}_{1} - {x}_{2}\\right \) \\\\ & \\in & L\\left \(M\\right \) \\subset N, \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ154.gif) implying that ![ $$L\\left \({x}_{1}\\right \) + N = L\\left \({x}_{2}\\right \) + N$$ ](A81414_1_En_1_Chapter_IEq670.gif). We are now going to investigate how the quotient space can be used to understand some of the developments from Sect. 1.11. For any linear map, we have that LkerL = 0. Thus, L induces a linear map ![ $$V/\\left \(\\ker \\left \(L\\right \)\\right \) \\rightarrow W/\\left \\{0\\right \\} \\approx W.$$ ](A81414_1_En_1_Chapter_Equlx.gif) Since the image of kerL + x is ![ $$\\left \\{0\\right \\} + L\\left \(x\\right \) \\approx L\\left \(x\\right \),$$ ](A81414_1_En_1_Chapter_IEq671.gif) we see that the induced map has trivial kernel. This implies that we have an isomorphism ![ $$V/\\left \(\\ker \\left \(L\\right \)\\right \) \\rightarrow \\mathrm{ im}\\left \(L\\right \).$$ ](A81414_1_En_1_Chapter_Equly.gif) We can put all of this into a commutative diagram: ![ $$\\begin{array}{ccc} V & \\rightarrow ^{L} & W\\\\ P \\downarrow &&\\uparrow\\\\ V/\\left \(\\ker \\left \(L\\right \)\\right \)& \\rightarrow ^{ \\approx }&\\mathrm{im}\\left \(L\\right \)\\end{array}$$ ](A81414_1_En_1_Chapter_Equlz.gif) Note that, as yet, we have not used any of the facts we know about finite-dimensional spaces. The two facts we shall use are that the dimension of a vector space is well defined (Theorem 1.8.4) and that any subspace in a finite-dimensional vector space has a finite-dimensional complement (Theorem 1.10.20 and Corollary 1.12.6). We start by considering subspaces. Theorem 1.15.3. (The Subspace Theorem) Let V be a finite-dimensional vector space. 
If M ⊂ V is a subspace, then both M and V∕M are finite-dimensional and ![ $$\\dim V =\\dim M +\\dim \\left \(V/M\\right \).$$ ](A81414_1_En_1_Chapter_Equma.gif) Proof. We start by selecting a finite-dimensional subspace N ⊂ V that is complementary to M (see Corollary 1.12.6). If we restrict the projection P : V -> V ∕ M to P | N : N -> V ∕ M, then it has trivial kernel as M ∩ N = 0, so P | N is one-to-one. P | N is also onto since any z ∈ V can be written as ![ $$z = x + y$$ ](A81414_1_En_1_Chapter_IEq672.gif) where x ∈ M and y ∈ N, so it follows that ![ $$\\begin{array}{rcl} M + z& =& M + x + y \\\\ & =& M + y \\\\ & =& P\\left \(y\\right \).\\end{array}$$ ](A81414_1_En_1_Chapter_Equ155.gif) Thus, P | N : N -> V ∕ M is an isomorphism. This shows that V ∕ M is finite-dimensional. In the same way, we see that the projection Q : V -> V ∕ N restricts to an isomorphism Q | M : M -> V ∕ N. By selecting a finite-dimensional complement for N ⊂ V , we also get that V ∕ N is finite-dimensional. This in turn shows that M is finite-dimensional. We can now use that V = M ⊕ N to show that ![ $$\\begin{array}{rcl} \\dim V & =& \\dim M +\\dim N \\\\ & =& \\dim M +\\dim \\left \(V/M\\right \). \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_1_Chapter_Equ156.gif) The dimension formula now follows from our observations above. Corollary 1.15.4. (The Dimension Formula) Let V be a finite-dimensional vector space. If L : V -> W is a linear map, then ![ $$\\dim V =\\dim \\left \(\\ker \\left \(L\\right \)\\right \) +\\dim \\left \(\\mathrm{im}\\left \(L\\right \)\\right \).$$ ](A81414_1_En_1_Chapter_Equmb.gif) Proof. 
We just saw that ![ $$\\dim V =\\dim \\left \(\\ker \\left \(L\\right \)\\right \) +\\dim \\left \(V/\\left \(\\ker \\left \(L\\right \)\\right \)\\right \).$$ ](A81414_1_En_1_Chapter_Equmc.gif) In addition, we have an isomorphism ![ $$V/\\left \(\\ker \\left \(L\\right \)\\right \)\\rightarrow \\mathrm{im}\\left \(L\\right \).$$ ](A81414_1_En_1_Chapter_Equmd.gif) This proves the claim. ### 1.15.1 Exercises 1. An affine subspace A ⊂ V is a subset such that if x 1,..., x k ∈ A, ![ $${\\alpha }_{1},\\ldots ,{\\alpha }_{k} \\in\\mathbb{F},$$ ](A81414_1_En_1_Chapter_IEq673.gif) and ![ $${\\alpha }_{1} + \\cdots+ {\\alpha }_{k} = 1,$$ ](A81414_1_En_1_Chapter_IEq674.gif) then ![ $${\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{k}{x}_{k} \\in A$$ ](A81414_1_En_1_Chapter_IEq675.gif). Show that V ∕ M consists of all of the affine subspaces parallel to M. 2. Find an example of a nontrivial linear operator L : V -> V and a subspace M ⊂ V such that L | M = 0 and the induced map ![ $$L : V/M \\rightarrow V/M$$ ](A81414_1_En_1_Chapter_IEq676.gif) is also zero. 3. This exercise requires knowledge of the characteristic polynomial (see Sects. 2.3, 2.7, or 5.7). Let L : V -> V be a linear operator with an invariant subspace M ⊂ V. Show that χ L t is the product of the characteristic polynomials of L | M and the induced map ![ $$L : V/M \\rightarrow V/M$$ ](A81414_1_En_1_Chapter_IEq677.gif). 4. Let M ⊂ V be a subspace and assume that we have x 1,..., x n ∈ V such that x 1,..., x k form a basis for M and ![ $${x}_{k+1} + M,\\ldots ,{x}_{n} + M$$ ](A81414_1_En_1_Chapter_IEq678.gif) form a basis for V ∕ M. Show that x 1,..., x n is a basis for V. 5. Let L : V -> W be a linear map and assume that LM ⊂ N. How does the induced map ![ $$L : V/M \\rightarrow W/N$$ ](A81414_1_En_1_Chapter_IEq679.gif) compare to the dual maps constructed in Exercise 2 in Sect. 1.14? 6. Let M ⊂ V be a subspace. 
Show that there is a natural or canonical isomorphism M o -> (V ∕ M) ′ , i.e., an isomorphism that does not depend on a choice of basis for the spaces. References Axler. Axler, S.: Linear Algebra Done Right. Springer-Verlag, New York (1997) Bretscher. Bretscher, O.: Linear Algebra with Applications, 2nd edn. Prentice-Hall, Upper Saddle River (2001) Curtis. Curtis, C.W.: Linear Algebra: An Introductory Approach. Springer-Verlag, New York (1984) Greub. Greub, W.: Linear Algebra, 4th edn. Springer-Verlag, New York (1981) Halmos. Halmos, P.R.: Finite-Dimensional Vector Spaces. Springer-Verlag, New York (1987) Hoffman-Kunze. Hoffman, K., Kunze, R.: Linear Algebra. Prentice-Hall, Upper Saddle River (1961) Lang. Lang, S.: Linear Algebra, 3rd edn. Springer-Verlag, New York (1987) Roman. Roman, S.: Advanced Linear Algebra, 2nd edn. Springer-Verlag, New York (2005) Serre. Serre, D.: Matrices, Theory and Applications. Springer-Verlag, New York (2002) Peter Petersen, Undergraduate Texts in Mathematics: Linear Algebra (2012). DOI: 10.1007/978-1-4614-3612-6_2. © Springer Science+Business Media New York 2012 # 2. Linear Operators Peter Petersen1 (1) Department of Mathematics, University of California, Los Angeles, CA, USA Abstract In this chapter, we are going to present all of the results that relate to linear operators on abstract finite-dimensional vector spaces. Aside from a section on polynomials, we start with a section on linear differential equations in order to motivate both some material from Chap. 1 and also give a reason for why it is desirable to study matrix representations. Eigenvectors and eigenvalues are first introduced in the context of differential equations where they are used to solve such equations. It is, however, possible to start with Sect. 2.3 and ignore the discussion on differential equations. In this chapter, we are going to present all of the results that relate to linear operators on abstract finite-dimensional vector spaces. 
The material developed in Chap. 1 on Gauss elimination is used to calculate eigenvalues and eigenvectors and to give a "weak" definition of the characteristic polynomial. We also introduce the minimal polynomial and use it to characterize diagonalizable maps. We then move on to cyclic subspaces leading us to fairly simple proofs of the Cayley-Hamilton theorem and the cyclic subspace decomposition. This in turn gives us a nice proof of the Frobenius canonical form as well as the Jordan canonical form. We finish the chapter with the Smith normal form. This result gives a direct method for calculating the Frobenius canonical form as well as a complete set of similarity invariants for a matrix. It also shows how a system of higher order differential equations (or recurrence equations) can be decoupled and solved as independent higher order equations. Various properties of polynomials are used quite a bit in this chapter. Most of these properties are probably already known to the student and in any case are certainly well known from arithmetic of integers; nevertheless, we have chosen to collect some of these facts in an optional section at the beginning of this chapter. It is possible to simply cover Sects. 2.3 and 2.5 and then move on to the chapters on inner product spaces. In fact, it is possible to skip this chapter entirely as it is not absolutely necessary in the theory of inner product spaces. 
## 2.1 Polynomials* The space of polynomials with coefficients in the field ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq1.gif) is denoted ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq2.gif). This space consists of expressions of the form ![ $${\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{k}{t}^{k},$$ ](A81414_1_En_2_Chapter_Equa.gif) where ![ $${\\alpha }_{0},\\ldots,{\\alpha }_{k} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq3.gif) and k is a nonnegative integer. One can think of these expressions as functions on ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq4.gif), but in this section, we shall only use the formal algebraic structure that comes from writing polynomials in the above fashion. Recall that integers are written in a similar way if we use the standard positional base 10 system (or any other base for that matter): ![ $${a}_{k}\\cdots {a}_{0} = {a}_{k}1{0}^{k} + {a}_{ k-1}1{0}^{k-1} + \\cdots+ {a}_{ 1}10 + {a}_{0}.$$ ](A81414_1_En_2_Chapter_Equb.gif) Indeed, there are many basic number theoretic similarities between integers and polynomials as we shall see below. 
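The positional-notation analogy can be made literal in code: store a polynomial as its list of coefficients α 0 , α 1 ,..., α k , just as an integer is a list of digits. A short Python sketch (the lowest-degree-first list convention is our own choice); evaluating at t = 10 recovers the integer whose digits are the coefficients:

```python
def poly_eval(p, t):
    # Horner's rule: ((a_k * t + a_{k-1}) * t + ...) * t + a_0
    acc = 0
    for coeff in reversed(p):
        acc = acc * t + coeff
    return acc

p = [1, 2, 3]            # represents 1 + 2t + 3t^2
print(poly_eval(p, 10))  # 321, the integer with digits 3, 2, 1
```

This is only a representation sketch; in this section, polynomials are treated as formal expressions, not as the functions that `poly_eval` computes.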
Addition is defined by adding term by term: ![ $$\\begin{array}{rcl} & \\left \({\\alpha }_{0} + {\\alpha }_{1}t + {\\alpha }_{2}{t}^{2} + \\cdots \\,\\right \) + \\left \({\\beta }_{0} + {\\beta }_{1}t + {\\beta }_{2}{t}^{2} + \\cdots \\,\\right \)& \\\\ & \\quad = \\left \({\\alpha }_{0} + {\\beta }_{0}\\right \) + \\left \({\\alpha }_{1} + {\\beta }_{1}\\right \)t + \\left \({\\alpha }_{2} + {\\beta }_{2}\\right \){t}^{2} + \\cdots& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ1.gif) Multiplication is a bit more complicated but still completely naturally defined by multiplying all the different terms and then collecting according to the powers of t: ![ $$\\begin{array}{rcl} & \\left \({\\alpha }_{0} + {\\alpha }_{1}t + {\\alpha }_{2}{t}^{2} + \\cdots \\,\\right \) \\cdot \\left \({\\beta }_{0} + {\\beta }_{1}t + {\\beta }_{2}{t}^{2} + \\cdots \\,\\right \) & \\\\ & \\quad = {\\alpha }_{0} \\cdot{\\beta }_{0} + \\left \({\\alpha }_{0}{\\beta }_{1} + {\\alpha }_{1}{\\beta }_{0}\\right \)t + \\left \({\\alpha }_{0}{\\beta }_{2} + {\\alpha }_{1}{\\beta }_{1} + {\\alpha }_{2}{\\beta }_{0}\\right \){t}^{2} + \\cdots & \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ2.gif) Note that in "addition," the indices match the power of t, while in "multiplication," each term has the property that the sum of the indices matches the power of t. The degree of a polynomial ![ $${\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{n}{t}^{n}$$ ](A81414_1_En_2_Chapter_IEq5.gif) is the largest k such that α k ≠0. In particular, ![ $${\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{k}{t}^{k} + \\cdots+ {\\alpha }_{ n}{t}^{n} = {\\alpha }_{ 0} + {\\alpha }_{1}t + \\cdots+ {\\alpha }_{k}{t}^{k},$$ ](A81414_1_En_2_Chapter_Equc.gif) where k is the degree of the polynomial. We also write degp = k. 
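Term-by-term addition and the collect-by-powers rule for multiplication translate directly into code. A minimal Python sketch under the same coefficient-list convention (lowest degree first; function names are illustrative), which also exhibits deg(pq) = deg p + deg q:

```python
def poly_add(p, q):
    # pad to a common length, then add matching coefficients
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # the coefficient of t^k is the sum of a_i * b_j over i + j = k
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def degree(p):
    # degree of a nonzero polynomial: index of the last nonzero coefficient
    return max(i for i, c in enumerate(p) if c != 0)

p = [1, 1]    # 1 + t
q = [-1, 1]   # -1 + t
print(poly_mul(p, q))                                   # [-1, 0, 1], i.e. -1 + t^2
print(degree(poly_mul(p, q)) == degree(p) + degree(q))  # True
```

Over a field, leading coefficients cannot multiply to zero, which is why the degree of a product is exactly the sum of the degrees.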
The degree satisfies the following elementary properties: ![ $$\\begin{array}{rcl} \\deg \\left \(p + q\\right \)& \\leq & \\max \\left \\{\\deg \(p\),\\deg \\left \(q\\right \)\\right \\}, \\\\ \\deg \\left \(pq\\right \)& =& \\deg \\left \(p\\right \) +\\deg \\left \(q\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ3.gif) Note that if degp = 0, then p(t) = α 0 is simply a scalar. It is often convenient to work with monic polynomials. These are the polynomials of the form ![ $${\\alpha }_{0} + {\\alpha }_{1}t + \\cdots+ 1 \\cdot{t}^{k}.$$ ](A81414_1_En_2_Chapter_Equd.gif) Note that any polynomial can be made into a monic polynomial by dividing by the scalar that appears in front of the term of highest degree. Working with monic polynomials is similar to working with positive integers rather than all integers. If ![ $$p,q \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq6.gif), then we say that p divides q if q = pd for some ![ $$d \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq7.gif). Note that if p divides q, then it must follow that degp ≤ degq. The converse is of course not true, but polynomial long division gives us a very useful partial answer to what might happen. Theorem 2.1.1. (The Euclidean Algorithm) If ![ $$p,q \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq8.gif) and deg p ≤ deg q, then ![ $$q = pd + r$$ ](A81414_1_En_2_Chapter_IEq9.gif) , where deg r < deg p. Proof. The proof is along the same lines as how we do long division with remainder. 
The idea of the Euclidean algorithm is that whenever degp ≤ degq, it is possible to find d 1 and r 1 such that ![ $$\\begin{array}{rcl} q& =& p{d}_{1} + {r}_{1}, \\\\ \\deg \\left \({r}_{1}\\right \)& <& \\deg \\left \(q\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ4.gif) To establish this, assume ![ $$\\begin{array}{rcl} q& =& {\\alpha }_{n}{t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 0}, \\\\ p& =& {\\beta }_{m}{t}^{m} + {\\beta }_{ m-1}{t}^{m-1} + \\cdots+ {\\beta }_{ 0}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ5.gif) where α n , β m ≠0. Then, define ![ $${d}_{1} = \\frac{{\\alpha }_{n}} {{\\beta }_{m}}{t}^{n-m}$$ ](A81414_1_En_2_Chapter_IEq10.gif) and ![ $$\\begin{array}{rcl}{ r}_{1}& =& q - p{d}_{1} \\\\ & =& \\left \({\\alpha }_{n}{t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 0}\\right \) \\\\ & & -\\left \({\\beta }_{m}{t}^{m} + {\\beta }_{ m-1}{t}^{m-1} + \\cdots+ {\\beta }_{ 0}\\right \) \\frac{{\\alpha }_{n}} {{\\beta }_{m}}{t}^{n-m} \\\\ & =& \\left \({\\alpha }_{n}{t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 0}\\right \) \\\\ & & -\\left \({\\alpha }_{n}{t}^{n} + {\\beta }_{ m-1} \\frac{{\\alpha }_{n}} {{\\beta }_{m}}{t}^{n-1} + \\cdots+ {\\beta }_{ 0} \\frac{{\\alpha }_{n}} {{\\beta }_{m}}{t}^{n-m}\\right \) \\\\ & =& 0 \\cdot{t}^{n} + \\left \({\\alpha }_{ n-1} - {\\beta }_{m-1} \\frac{{\\alpha }_{n}} {{\\beta }_{m}}\\right \){t}^{n-1} + \\cdots \\,.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ6.gif) Thus, degr 1 < n = degq. 
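The reduction step just computed, iterated until the remainder has degree less than deg p, is ordinary polynomial long division. A hypothetical Python sketch (our own coefficient-list encoding, lowest degree first, exact rational arithmetic; p is assumed nonzero) producing d and r with q = pd + r and deg r < deg p:

```python
from fractions import Fraction

def trim(p):
    # drop trailing zero coefficients in place
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(q, p):
    # Euclidean algorithm: q = p*d + r with deg r < deg p (Theorem 2.1.1).
    # Assumes p is nonzero with its leading coefficient last in the list.
    p = [Fraction(c) for c in p]
    r = trim([Fraction(c) for c in q])
    d = [Fraction(0)] * max(len(r) - len(p) + 1, 0)
    while len(r) >= len(p):
        shift = len(r) - len(p)
        c = r[-1] / p[-1]           # alpha_n / beta_m, as in the proof
        d[shift] = c
        for i, pc in enumerate(p):
            r[i + shift] -= c * pc  # subtract p * c * t^shift
        trim(r)                     # the leading term cancels exactly
    return d, r

# (t^2 - 1) = (t - 1)(t + 1) + 0
d, r = poly_divmod([-1, 0, 1], [-1, 1])
print(d == [1, 1], r == [])  # True True
```

Each pass through the loop is one application of the displayed reduction, and termination follows because the degree of the remainder strictly drops.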
If degr 1 < degp, we are finished; otherwise, we use the same construction to get ![ $$\\begin{array}{rcl} {r}_{1}& =& p{d}_{2} + {r}_{2}, \\\\ \\deg \\left \({r}_{2}\\right \)& <& \\deg \\left \({r}_{1}\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ7.gif) We then continue this process and construct ![ $$\\begin{array}{rcl} {r}_{k}& =& p{d}_{k+1} + {r}_{k+1}, \\\\ \\deg \\left \({r}_{k+1}\\right \)& <& \\deg \\left \({r}_{k}\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ8.gif) Eventually, we must arrive at a situation where degr k ≥ degp while degr k + 1​ < degp. Collecting each step in this process, we see that ![ $$\\begin{array}{rcl} q& =& p{d}_{1} + {r}_{1} \\\\ & =& p{d}_{1} + p{d}_{2} + {r}_{2} \\\\ & =& p\\left \({d}_{1} + {d}_{2}\\right \) + {r}_{2} \\\\ & & \\vdots \\\\ & =& p\\left \({d}_{1} + {d}_{2} + \\cdots+ {d}_{k+1}\\right \) + {r}_{k+1}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ9.gif) This proves the theorem. □ The Euclidean algorithm is the central construction that makes all of the following results work. Proposition 2.1.2. Let ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq11.gif) and ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq12.gif). ![ $$\\left \(t - \\lambda \\right \)$$ ](A81414_1_En_2_Chapter_IEq13.gif) divides p if and only if λ is a root of p, i.e., pλ = 0. Proof. If ![ $$\\left \(t - \\lambda \\right \)$$ ](A81414_1_En_2_Chapter_IEq14.gif) divides p, then ![ $$p = \\left \(t - \\lambda \\right \)q$$ ](A81414_1_En_2_Chapter_IEq15.gif). Hence, ![ $$p\\left \(\\lambda \\right \) = 0 \\cdot q\\left \(\\lambda \\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq16.gif). Conversely, use the Euclidean algorithm to write ![ $$\\begin{array}{rcl} p& =& \\left \(t - \\lambda \\right \)q + r, \\\\ \\deg \\left \(r\\right \)& <& \\deg \\left \(t - \\lambda \\right \) = 1.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ10.gif) This means that ![ $$r = \\beta\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq17.gif). 
Now, evaluate this at λ ![ $$\\begin{array}{rcl} 0& =& p\\left \(\\lambda \\right \) \\\\ & =& \\left \(\\lambda- \\lambda \\right \)q\\left \(\\lambda \\right \) + r \\\\ & =& r \\\\ & =& \\beta.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ11.gif) Thus, r = 0 and ![ $$p = \\left \(t - \\lambda \\right \)q$$ ](A81414_1_En_2_Chapter_IEq18.gif). □ This gives us an important corollary. Corollary 2.1.3. Let ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq19.gif) . If deg p = k, then p has no more than k roots. Proof. We prove this by induction. When k = 0 or 1, there is nothing to prove. If p has a root ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq20.gif), then ![ $$p = \\left \(t - \\lambda \\right \)q,$$ ](A81414_1_En_2_Chapter_IEq21.gif) where degq < degp. Thus, q has no more than degq roots. In addition, we have that μ≠λ is a root of p if and only if it is a root of q. Thus, p cannot have more than 1 + degq ≤ degp roots. □ In the next proposition, we show that two polynomials always have a greatest common divisor. Proposition 2.1.4. Let ![ $$p,q \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_2_Chapter_IEq22.gif) then there is a unique monic polynomial d = gcd p,q with the property that if d 1 divides both p and q, then d 1 divides d. Moreover, there exist ![ $$r,s \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq23.gif) such that ![ $$d = pr + qs$$ ](A81414_1_En_2_Chapter_IEq24.gif). Proof. Let d be a monic polynomial of smallest degree such that ![ $$d = p{s}_{1} + q{s}_{2}$$ ](A81414_1_En_2_Chapter_IEq25.gif). It is clear that any polynomial d 1 that divides p and q must also divide d. So we must show that d divides p and q. We show more generally that d divides all polynomials of the form ![ $${d}^{{\\prime}} = p{s}_{1}^{{\\prime}} + q{s}_{2}^{{\\prime}}$$ ](A81414_1_En_2_Chapter_IEq26.gif). For such a polynomial, we have ![ $${d}^{{\\prime}} = du + r$$ ](A81414_1_En_2_Chapter_IEq27.gif) where degr < degd. 
This implies
$$r = d' - du = p(s_1' - us_1) + q(s_2' - us_2).$$
It must follow that r = 0, as we could otherwise find a monic polynomial of the form $ps_1'' + qs_2''$ of degree $< \deg d$. Thus, d divides $d'$. In particular, d must divide $p = p \cdot 1 + q \cdot 0$ and $q = p \cdot 0 + q \cdot 1$.

To check uniqueness, assume $d_1$ is a monic polynomial with the property that any polynomial that divides p and q also divides $d_1$. This means that d divides $d_1$ and also that $d_1$ divides d. Since both polynomials are monic, this shows that $d = d_1$. □

We can more generally show that for any finite collection $p_1, \ldots, p_n$ of polynomials, there is a greatest common divisor
$$d = \gcd\{p_1, \ldots, p_n\}.$$
As in the above proposition, the polynomial d is a monic polynomial of smallest degree such that
$$d = p_1 s_1 + \cdots + p_n s_n.$$
Moreover, it has the property that any polynomial that divides $p_1, \ldots, p_n$ also divides d. The polynomials $p_1, \ldots, p_n \in \mathbb{F}[t]$ are said to be relatively prime or have no common factors if the only monic polynomial that divides $p_1, \ldots, p_n$ is 1. In other words, $\gcd\{p_1, \ldots, p_n\} = 1$.

We can also show that two polynomials have a least common multiple.

Proposition 2.1.5. Let $p, q \in \mathbb{F}[t]$. Then there is a unique monic polynomial $m = \operatorname{lcm}\{p, q\}$ with the property that if p and q divide $m_1$, then m divides $m_1$.

Proof. Let m be the monic polynomial of smallest degree that is divisible by both p and q.
Note that such polynomials exist, as pq is divisible by both p and q. Next, suppose that p and q divide $m_1$. Since $\deg m_1 \geq \deg m$, we have that $m_1 = sm + r$ with $\deg r < \deg m$. Since p and q divide $m_1$ and m, they must also divide $m_1 - sm = r$. As m has the smallest degree with this property, it must follow that r = 0. Hence, m divides $m_1$. □

A monic polynomial $p \in \mathbb{F}[t]$ of degree ≥ 1 is said to be prime or irreducible if the only monic polynomials from $\mathbb{F}[t]$ that divide p are 1 and p. The simplest irreducible polynomials are the linear ones $t - \alpha$. If the field $\mathbb{F} = \mathbb{C}$, then all irreducible polynomials are linear. While if the field $\mathbb{F} = \mathbb{R}$, then the only other irreducible polynomials are the quadratic ones $t^2 + \alpha t + \beta$ with negative discriminant $D = \alpha^2 - 4\beta < 0$. These two facts are not easy to prove and depend on the Fundamental Theorem of Algebra, which we discuss below.

In analogy with the prime factorization of integers, we also have a prime factorization of polynomials. Before establishing this decomposition, we need to prove a very useful property for irreducible polynomials.

Lemma 2.1.6. Let $p \in \mathbb{F}[t]$ be irreducible. If p divides $q_1 \cdot q_2$, then p divides either $q_1$ or $q_2$.

Proof. Let $d_1 = \gcd\{p, q_1\}$. Since $d_1$ divides p, it follows that $d_1 = 1$ or $d_1 = p$. In the latter case, $d_1 = p$ divides $q_1$, so we are finished. If $d_1 = 1$, then we can write $1 = pr + q_1 s$.
In particular,
$$q_2 = q_2 pr + q_2 q_1 s.$$
Here we have that p divides $q_2 q_1$ and p. Thus, it also divides $q_2 = q_2 pr + q_2 q_1 s$. □

Theorem 2.1.7. (Unique Factorization of Polynomials) Let $p \in \mathbb{F}[t]$ be a monic polynomial. Then $p = p_1 \cdots p_k$ is a product of irreducible polynomials. Moreover, except for rearranging these polynomials, this factorization is unique.

Proof. We can prove this result by induction on $\deg p$. If p is only divisible by 1 and p, then p is irreducible and we are finished. Otherwise, $p = q_1 \cdot q_2$, where $q_1$ and $q_2$ are monic polynomials with $\deg q_1, \deg q_2 < \deg p$. By assumption, each of these two factors can be decomposed into irreducible polynomials; hence, we also get such a decomposition for p.

For uniqueness, assume that $p = p_1 \cdots p_k = q_1 \cdots q_l$ are two decompositions of p into irreducible factors. Using induction again, we see that it suffices to show that $p_1 = q_i$ for some i. The previous lemma now shows that $p_1$ must divide $q_1$ or $q_2 \cdots q_l$. In the former case, it follows that $p_1 = q_1$ as $q_1$ is irreducible. In the latter case, we get again that $p_1$ must divide $q_2$ or $q_3 \cdots q_l$. Continuing in this fashion, it must follow that $p_1 = q_i$ for some i. □

If all the irreducible factors of a monic polynomial $p \in \mathbb{F}[t]$ are linear, then we say that p splits. Thus, p splits if and only if
$$p(t) = (t - \alpha_1) \cdots (t - \alpha_k)$$
for $\alpha_1, \ldots, \alpha_k \in \mathbb{F}$.

Finally, we show that all complex polynomials have a root.
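The proof of Proposition 2.1.4 is effectively the Euclidean algorithm: repeated division with remainder computes $\gcd\{p, q\}$. The following is a minimal sketch in Python, not from the text; the helper names are ours, coefficient lists run from lowest to highest degree, and exact rational arithmetic avoids rounding.

```python
from fractions import Fraction

def trim(p):
    """Drop zero leading coefficients (lists run from lowest to highest degree)."""
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(a, b):
    """Division with remainder: a = q*b + r with deg r < deg b."""
    r = trim([Fraction(x) for x in a])
    b = trim([Fraction(x) for x in b])
    q = [Fraction(0)] * max(len(r) - len(b) + 1, 1)
    while len(r) >= len(b) and any(r):
        s = len(r) - len(b)          # shift so the leading terms line up
        c = r[-1] / b[-1]
        q[s] = c
        for i, bi in enumerate(b):   # subtract c * t^s * b, killing r's top term
            r[s + i] -= c * bi
        trim(r)
    return trim(q), r

def poly_gcd(p, q):
    """Euclidean algorithm; the result is normalized to be monic."""
    p = trim([Fraction(x) for x in p])
    q = trim([Fraction(x) for x in q])
    while any(q):
        p, q = q, poly_divmod(p, q)[1]
    return [c / p[-1] for c in p]

# gcd of (t-1)(t-2) and (t-1)(t-3) is t - 1
assert poly_gcd([2, -3, 1], [3, -4, 1]) == [-1, 1]
```

The Bézout coefficients r, s with $d = pr + qs$ can be obtained the same way by carrying the quotients along (the extended Euclidean algorithm); we omit that bookkeeping here.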
It is curious that while this theorem is algebraic in nature, the proof is analytic. There are many completely different proofs of this theorem, including ones that are far more algebraic. The one presented here, however, seems to be the most elementary.

Theorem 2.1.8. (The Fundamental Theorem of Algebra) Any complex polynomial of degree ≥ 1 has a root.

Proof. Let $p(z) \in \mathbb{C}[z]$ have degree $n \geq 1$. Our first claim is that we can find $z_0 \in \mathbb{C}$ such that $|p(z)| \geq |p(z_0)|$ for all $z \in \mathbb{C}$. To see why $|p(z)|$ has to have a minimum, we first observe that
$$\frac{p(z)}{z^n} = \frac{a_n z^n + a_{n-1}z^{n-1} + \cdots + a_1 z + a_0}{z^n} = a_n + a_{n-1}\frac{1}{z} + \cdots + a_1\frac{1}{z^{n-1}} + a_0\frac{1}{z^n} \to a_n \ \text{as}\ z \to \infty.$$
Since $a_n \neq 0$, we can therefore choose $R > 0$ so that
$$|p(z)| \geq \frac{|a_n|}{2}|z|^n \quad\text{for } |z| \geq R.$$
By possibly increasing R further, we can also assume that
$$\frac{|a_n|}{2}R^n \geq |p(0)|.$$
On the compact set $\bar{B}(0, R) = \{z \in \mathbb{C} : |z| \leq R\}$, we can now find $z_0$ such that $|p(z)| \geq |p(z_0)|$ for all $z \in \bar{B}(0, R)$. By our assumptions, this also holds when $|z| \geq R$, since in that case
$$|p(z)| \geq \frac{|a_n|}{2}|z|^n \geq \frac{|a_n|}{2}R^n \geq |p(0)| \geq |p(z_0)|.$$
Thus, we have found our global minimum for $|p(z)|$. If $|p(z_0)| = 0$, then $z_0$ is a root and we are finished. Otherwise, we can define a new polynomial of degree $n \geq 1$:
$$q(z) = \frac{p(z + z_0)}{p(z_0)}.$$
This polynomial satisfies
$$q(0) = \frac{p(z_0)}{p(z_0)} = 1, \qquad |q(z)| = \left|\frac{p(z + z_0)}{p(z_0)}\right| \geq \left|\frac{p(z_0)}{p(z_0)}\right| = 1.$$
Thus,
$$q(z) = 1 + b_k z^k + \cdots + b_n z^n,$$
where $b_k \neq 0$. We can now investigate what happens to $q(z)$ for small z.
We first note that
$$q(z) = 1 + b_k z^k + b_{k+1}z^{k+1} + \cdots + b_n z^n = 1 + b_k z^k + (b_{k+1}z + \cdots + b_n z^{n-k})z^k,$$
where
$$(b_{k+1}z + \cdots + b_n z^{n-k}) \to 0 \ \text{as}\ z \to 0.$$
If we write $z = r\mathrm{e}^{i\theta}$ and fix θ so that
$$b_k \mathrm{e}^{ik\theta} = -|b_k|,$$
then
$$\begin{aligned} |q(z)| &= |1 + b_k z^k + (b_{k+1}z + \cdots + b_n z^{n-k})z^k| \\ &= |1 - |b_k| r^k + (b_{k+1}z + \cdots + b_n z^{n-k})r^k \mathrm{e}^{ik\theta}| \\ &\leq 1 - |b_k| r^k + |b_{k+1}z + \cdots + b_n z^{n-k}| r^k \\ &\leq 1 - \frac{|b_k|}{2} r^k \end{aligned}$$
as long as r is chosen so small that $1 - |b_k| r^k > 0$ and $|b_{k+1}z + \cdots + b_n z^{n-k}| \leq \frac{|b_k|}{2}$. This, however, implies that $|q(r\mathrm{e}^{i\theta})| < 1$ for small r. We have therefore arrived at a contradiction. □

## 2.2 Linear Differential Equations*

In this section, we shall study linear differential equations.
Everything we have learned about linear independence, bases, special matrix representations, etc. will be extremely useful when trying to solve such equations. In fact, we shall in several sections of this text see that virtually every development in linear algebra can be used to understand the structure of solutions to linear differential equations. It is, however, possible to skip this section if one does not want to be bothered by differential equations while learning linear algebra.

We start with systems of differential equations:
$$\begin{aligned} \dot{x}_1 &= a_{11}x_1 + \cdots + a_{1m}x_m + b_1 \\ &\ \ \vdots \\ \dot{x}_n &= a_{n1}x_1 + \cdots + a_{nm}x_m + b_n, \end{aligned}$$
where $a_{ij}, b_i \in C^\infty([a, b], \mathbb{C})$ (or just $C^\infty([a, b], \mathbb{R})$) and the functions $x_j : [a, b] \to \mathbb{C}$ are to be determined. We can write the system in matrix form and also rearrange it a bit to make it look like we are solving $Lx = b$. To do this, we use
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}, \quad A = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix}$$
and define
$$L : C^\infty([a, b], \mathbb{C}^m) \to C^\infty([a, b], \mathbb{C}^n), \quad L(x) = \dot{x} - Ax.$$
The equation $Lx = 0$ is called the homogeneous system. We note that the following three properties can be used as a general outline for what to do: 1. $Lx = b$ can be solved if and only if $b \in \operatorname{im} L$. 2.
If $Lx_0 = b$ and $x \in \ker L$, then $L(x + x_0) = b$. 3. If $Lx_0 = b$ and $Lx_1 = b$, then $x_0 - x_1 \in \ker L$.

The specific implementation of actually solving the equations, however, is quite different from what we did with systems of (algebraic) equations. First of all, we only consider the case where n = m. This implies that for given $t_0 \in [a, b]$ and $x_0 \in \mathbb{C}^n$, the initial value problem
$$L(x) = b, \quad x(t_0) = a_0$$
has a unique solution $x \in C^\infty([a, b], \mathbb{C}^n)$. We shall not prove this result in this generality, but we shall eventually see why this is true when the matrix A has entries that are constants rather than functions (see Sect. 3.7). As we learn more about linear algebra, we shall revisit this problem and slowly try to gain a better understanding of it. For now, let us just note an important consequence.

Theorem 2.2.1. The complete collection of solutions to
$$\begin{aligned} \dot{x}_1 &= a_{11}x_1 + \cdots + a_{1n}x_n + b_1 \\ &\ \ \vdots \\ \dot{x}_n &= a_{n1}x_1 + \cdots + a_{nn}x_n + b_n \end{aligned}$$
can be found by finding one solution $x_0$ and then adding it to the solutions of the homogeneous equation $Lz = 0$, i.e.,
$$x = z + x_0, \quad L(z) = 0;$$
moreover, $\dim \ker L = n$.

Some particularly interesting and important linear equations are the nth-order equations
$$D^n x + a_{n-1}D^{n-1}x + \cdots + a_1 Dx + a_0 x = b,$$
where $D^k x$ is the kth-order derivative of x.
If we assume that $a_{n-1}, \ldots, a_0, b \in C^\infty([a, b], \mathbb{C})$ and define
$$L : C^\infty([a, b], \mathbb{C}) \to C^\infty([a, b], \mathbb{C}), \quad L(x) = (D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0)(x),$$
then we have a nice linear problem just as in the previous cases of linear systems of differential or algebraic equations. The problem of solving $Lx = b$ can also be reinterpreted as a linear system of differential equations by defining
$$x_1 = x,\ x_2 = Dx,\ \ldots,\ x_n = D^{n-1}x$$
and then considering the system
$$\begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= x_3 \\ &\ \ \vdots \\ \dot{x}_n &= -a_{n-1}x_n - \cdots - a_1 x_2 - a_0 x_1 + b. \end{aligned}$$
This will not help us in solving the desired equation, but it does tell us that the initial value problem
$$L(x) = b, \quad x(t_0) = c_0,\ Dx(t_0) = c_1,\ \ldots,\ D^{n-1}x(t_0) = c_{n-1}$$
has a unique solution, and hence, the above theorem can be paraphrased.

Theorem 2.2.2. The complete collection of solutions to
$$D^n x + a_{n-1}D^{n-1}x + \cdots + a_1 Dx + a_0 x = b$$
can be found by finding one solution $x_0$ and then adding it to the solutions of the homogeneous equation $Lz = 0$, i.e.,
$$x = z + x_0, \quad L(z) = 0.$$
Moreover, $\dim \ker L = n$.
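The substitution $x_1 = x, x_2 = Dx, \ldots, x_n = D^{n-1}x$ used above amounts to forming the companion matrix of the characteristic polynomial. A small sketch of that construction (the function name `companion` is our own, not the text's):

```python
def companion(a):
    """Companion matrix of t^n + a[n-1]*t^(n-1) + ... + a[0].

    With x1 = x, x2 = Dx, ..., xn = D^(n-1)x, the homogeneous equation
    p(D)x = 0 becomes the first-order system x' = A x with this A.
    """
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0            # the chain x_i' = x_{i+1}
    A[n - 1] = [-c for c in a]       # last row: x_n' = -a0*x1 - ... - a_{n-1}*xn
    return A

# For D^2 x - 3 Dx + 2 x = 0 (a0 = 2, a1 = -3):
assert companion([2.0, -3.0]) == [[0.0, 1.0], [-2.0, 3.0]]
```

This is only the bookkeeping step; it does not by itself solve the equation, but it is how one passes between the two formulations.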
It is not hard to give a complete account of how to solve the homogeneous problem $Lx = 0$ when $a_0, \ldots, a_{n-1} \in \mathbb{C}$ are constants. Let us start with n = 1. Then we are trying to solve
$$Dx + a_0 x = \dot{x} + a_0 x = 0.$$
Clearly, $x = \exp(-a_0 t)$ is a solution, and the complete set of solutions is
$$x = c\exp(-a_0 t), \quad c \in \mathbb{C}.$$
The initial value problem
$$\dot{x} + a_0 x = 0, \quad x(t_0) = c_0$$
has the solution
$$x = c_0\exp(-a_0(t - t_0)).$$
The trick to solving the higher order case is to note that we can rewrite L as
$$L = D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0 = p(D).$$
This makes L look like a polynomial where D is the variable. The corresponding polynomial
$$p(t) = t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$$
is called the characteristic polynomial. The idea behind solving these equations comes from

Proposition 2.2.3. (The Reduction Principle) If $q(t) = t^m + b_{m-1}t^{m-1} + \cdots + b_0$ is a polynomial that divides $p(t) = t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$, then any solution to $q(D)x = 0$ is also a solution to $p(D)x = 0$.

Proof. This simply hinges on observing that if $p(t) = r(t)q(t)$, then $p(D) = r(D)q(D)$.
So by evaluating the latter on x, we get $p(D)(x) = r(D)(q(D)(x)) = 0$. □

The simplest factors are, of course, the linear factors $t - \lambda$, and we know that the solutions to
$$(D - \lambda)(x) = Dx - \lambda x = 0$$
are given by $x(t) = C\exp(\lambda t)$. This means that we should be looking for roots of $p(t)$. These roots are called eigenvalues or characteristic values. The Fundamental Theorem of Algebra asserts that any polynomial $p \in \mathbb{C}[t]$ can be factored over the complex numbers
$$p(t) = t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0 = (t - \lambda_1)^{k_1} \cdots (t - \lambda_m)^{k_m}.$$
Here the roots $\lambda_1, \ldots, \lambda_m$ are assumed to be distinct, each occurs with multiplicity $k_1, \ldots, k_m$, and $k_1 + \cdots + k_m = n$.
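As a quick numerical sanity check on the linear-factor case (our own illustration, not part of the text's development), one can verify with a finite-difference derivative that $x = C\exp(\lambda t)$ solves $(D - \lambda)x = 0$; the values of λ and C below are arbitrary:

```python
import math

lam, C = -0.8, 3.0                   # arbitrary sample eigenvalue and constant

def x(t):
    """Claimed solution of (D - lam) x = 0."""
    return C * math.exp(lam * t)

h = 1e-6
for t in (0.0, 1.0, 2.0):
    xdot = (x(t + h) - x(t - h)) / (2 * h)   # central-difference derivative
    assert abs(xdot - lam * x(t)) < 1e-6     # residual of Dx - lam*x is ~0
```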
The original equation
$$L = D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0$$
then factors as
$$L = D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0 = (D - \lambda_1)^{k_1} \cdots (D - \lambda_m)^{k_m}.$$
Thus, the original problem has been reduced to solving the equations
$$(D - \lambda_1)^{k_1}(x) = 0,\ \ldots,\ (D - \lambda_m)^{k_m}(x) = 0.$$
Note that if we had not insisted on using the more abstract and less natural complex numbers, we would not have been able to make the reduction so easily. If we are in a case where the differential equation is real and there is a good physical reason for keeping solutions real as well, then we can still solve it as if it were complex and then take real and imaginary parts of the complex solutions to get real ones. It would seem that the n complex solutions would then lead to 2n real ones. This is not really the case. First, observe that each real eigenvalue λ only gives rise to a one-parameter family of real solutions $c\exp(\lambda(t - t_0))$. As for complex eigenvalues, we know that real polynomials have the property that complex roots come in conjugate pairs.
Then, we note that $\exp(\lambda(t - t_0))$ and $\exp(\bar{\lambda}(t - t_0))$ up to sign have the same real and imaginary parts, and so these pairs of eigenvalues only lead to a two-parameter family of real solutions, which if $\lambda = \lambda_1 + i\lambda_2$ looks like
$$c\exp(\lambda_1(t - t_0))\cos(\lambda_2(t - t_0)) + d\exp(\lambda_1(t - t_0))\sin(\lambda_2(t - t_0)).$$
Let us return to the complex case again. If m = n and $k_1 = \cdots = k_m = 1$, we simply get n first-order equations, and we see that the complete set of solutions to $Lx = 0$ is given by
$$x = c_1\exp(\lambda_1 t) + \cdots + c_n\exp(\lambda_n t).$$
It should be noted that we need to show that $\exp(\lambda_1 t), \ldots, \exp(\lambda_n t)$ are linearly independent in order to show that we have found all solutions. This was discussed in Example 1.12.15 and will also be established in Sect. 2.5.
With a view towards solving the initial value problem, we rewrite the solution as
$$x = d_1\exp(\lambda_1(t - t_0)) + \cdots + d_n\exp(\lambda_n(t - t_0)).$$
To solve the initial value problem requires differentiating this expression several times and then solving
$$\begin{aligned} x(t_0) &= d_1 + \cdots + d_n, \\ Dx(t_0) &= \lambda_1 d_1 + \cdots + \lambda_n d_n, \\ &\ \ \vdots \\ D^{n-1}x(t_0) &= \lambda_1^{n-1}d_1 + \cdots + \lambda_n^{n-1}d_n \end{aligned}$$
for $d_1, \ldots, d_n$. In matrix form, this becomes
$$\begin{bmatrix} 1 & \cdots & 1 \\ \lambda_1 & \cdots & \lambda_n \\ \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \cdots & \lambda_n^{n-1} \end{bmatrix} \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} = \begin{bmatrix} x(t_0) \\ \dot{x}(t_0) \\ \vdots \\ x^{(n-1)}(t_0) \end{bmatrix}.$$
In Example 1.12.12, we saw that this matrix has rank n if $\lambda_1, \ldots, \lambda_n$ are distinct. Thus, we can solve for the ds in this case.

When roots have multiplicity, things get a little more complicated. We first need to solve the equation
$$(D - \lambda)^k(x) = 0.$$
One can check that the k functions $\exp(\lambda t), t\exp(\lambda t), \ldots, t^{k-1}\exp(\lambda t)$ are solutions to this equation. One can also prove that they are linearly independent using that $1, t, \ldots, t^{k-1}$ are linearly independent. This will lead us to a complete set of solutions to $Lx = 0$ even when we have multiple roots.
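Before going further with multiple roots, the distinct-root initial value computation can be illustrated concretely. For the example equation $D^2x - 3Dx + 2x = 0$ with $x(0) = 1$, $\dot{x}(0) = 0$ (an equation of our choosing), the characteristic roots are 1 and 2, the 2×2 Vandermonde system can be solved by hand, and the resulting solution can be checked with finite differences:

```python
import math

# D^2 x - 3 Dx + 2 x = 0, so p(t) = t^2 - 3t + 2 = (t - 1)(t - 2)
lam1, lam2 = 1.0, 2.0
x0, dx0 = 1.0, 0.0                   # initial values x(0) and Dx(0)

# Vandermonde system [1 1; lam1 lam2] d = [x0; dx0], solved directly
det = lam2 - lam1
d1 = (lam2 * x0 - dx0) / det
d2 = (dx0 - lam1 * x0) / det

def x(t):
    """Solution d1*exp(lam1*t) + d2*exp(lam2*t) of the initial value problem."""
    return d1 * math.exp(lam1 * t) + d2 * math.exp(lam2 * t)

# finite-difference check of the ODE and the initial condition
h = 1e-4
assert abs(x(0.0) - x0) < 1e-12
for t in (0.0, 0.5, 1.0):
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2   # ~ D^2 x
    x1 = (x(t + h) - x(t - h)) / (2 * h)             # ~ D x
    assert abs(x2 - 3 * x1 + 2 * x(t)) < 1e-4
```

Here $d_1 = 2$ and $d_2 = -1$, so the solution is $x = 2e^t - e^{2t}$.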
The issue of solving the initial value problem is somewhat more involved due to the problem of taking derivatives of $t^l\exp(\lambda t)$. This can be simplified a little by considering the solutions $\exp(\lambda(t - t_0)), (t - t_0)\exp(\lambda(t - t_0)), \ldots, (t - t_0)^{k-1}\exp(\lambda(t - t_0))$. For the sake of illustration, let us consider the simplest case of trying to solve $(D - \lambda)^2(x) = 0$. The complete set of solutions can be parametrized as
$$x = d_1\exp(\lambda(t - t_0)) + d_2(t - t_0)\exp(\lambda(t - t_0)).$$
Then,
$$Dx = \lambda d_1\exp(\lambda(t - t_0)) + (1 + \lambda(t - t_0))d_2\exp(\lambda(t - t_0)).$$
Thus, we have to solve
$$x(t_0) = d_1, \quad Dx(t_0) = \lambda d_1 + d_2.$$
This leads us to the system
$$\begin{bmatrix} 1 & 0 \\ \lambda & 1 \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} x(t_0) \\ Dx(t_0) \end{bmatrix}.$$
If λ = 0, we are finished.
Otherwise, we can multiply the first equation by λ and subtract it from the second to obtain
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} x(t_0) \\ Dx(t_0) - \lambda x(t_0) \end{bmatrix}.$$
Thus, the solution to the initial value problem is
$$x = x(t_0)\exp(\lambda(t - t_0)) + (Dx(t_0) - \lambda x(t_0))(t - t_0)\exp(\lambda(t - t_0)).$$
A similar method of finding a characteristic polynomial and its roots can also be employed in solving linear systems of equations as well as homogeneous systems of linear differential equations with constant coefficients. The problem lies in deciding what the characteristic polynomial should be and what its roots mean for the system. This will be studied in subsequent sections and chapters. In Sects. 2.6 and 2.7, we shall also see that systems of first-order differential equations can be solved using our knowledge of higher order equations.

For now, let us see how one can approach systems of linear differential equations from the point of view of first trying to define the eigenvalues. We are considering the homogeneous problem
$$L(x) = \dot{x} - Ax = 0,$$
where A is an n × n matrix with real or complex numbers as entries. If the system is decoupled, i.e., $\dot{x}_i$ depends only on $x_i$, then we have n first-order equations that can be solved as above. In this case, the entries that are not on the diagonal of A are zero. A particularly simple case occurs when $A = \lambda 1_{\mathbb{C}^n}$ for some λ.
In this case, the general solution is given by
$$x = a_0\exp(\lambda(t - t_0)).$$
We now observe that for fixed $a_0$, this is still a solution to the general equation $\dot{x} = Ax$ provided only that $Aa_0 = \lambda a_0$. Thus, we are led to seek pairs of scalars λ and vectors $a_0$ such that $Aa_0 = \lambda a_0$. If we can find such pairs where $a_0 \neq 0$, then we call λ an eigenvalue for A and $a_0$ an eigenvector for λ. Therefore, if we can find a basis $v_1, \ldots, v_n$ for $\mathbb{R}^n$ or $\mathbb{C}^n$ of eigenvectors with $Av_1 = \lambda_1 v_1, \ldots, Av_n = \lambda_n v_n$, then we have that the complete solution must be
$$x = v_1\exp(\lambda_1(t - t_0))c_1 + \cdots + v_n\exp(\lambda_n(t - t_0))c_n.$$
The initial value problem $Lx = 0$, $x(t_0) = x_0$ is then handled by solving
$$v_1 c_1 + \cdots + v_n c_n = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = x_0.$$
Since $v_1, \ldots, v_n$ was assumed to be a basis, we know that this system can be solved. Gauss elimination can then be used to find $c_1, \ldots, c_n$. What we accomplished by this change of basis was to decouple the system in a different coordinate system.

One of the goals in the study of linear operators is to find a basis that makes the matrix representation of the operator as simple as possible. As we have just seen, this can then be used to great effect in solving what might appear to be a rather complicated problem. Even so, it might not be possible to find the desired basis of eigenvectors.
This happens if we consider the second-order equation $(D - \lambda)^2(x) = 0$ and convert it to a system
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\lambda^2 & 2\lambda \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
Here the general solution to $(D - \lambda)^2(x) = 0$ is of the form
$$x = x_1 = c_1\exp(\lambda t) + c_2 t\exp(\lambda t),$$
so
$$x_2 = \dot{x}_1 = c_1\lambda\exp(\lambda t) + c_2(\lambda t + 1)\exp(\lambda t).$$
This means that
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = c_1\begin{bmatrix} 1 \\ \lambda \end{bmatrix}\exp(\lambda t) + c_2\begin{bmatrix} t \\ \lambda t + 1 \end{bmatrix}\exp(\lambda t).$$
Since we cannot write this in the form
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = c_1 v_1\exp(\lambda_1 t) + c_2 v_2\exp(\lambda_2 t),$$
there cannot be any reason to expect that a basis of eigenvectors can be found even for the simple matrix
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
In Sect. 2.3, we shall see that any square matrix, and indeed any linear operator on a finite-dimensional vector space, has a characteristic polynomial whose roots are the eigenvalues of the map.
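For the matrix $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ just mentioned, one can also confirm directly that every eigenvector is a multiple of $(1, 0)$, so no basis of eigenvectors exists. A small sketch of our own using the cross-product test for parallel vectors:

```python
import itertools

def A(v):
    """The matrix [[0, 1], [0, 0]] acting on v = (v1, v2): A v = (v2, 0)."""
    return (v[1], 0.0)

# v is an eigenvector iff v != 0 and A v is parallel to v.  The 2D cross
# product of A v = (v2, 0) with v = (v1, v2) equals v2^2, so parallelism
# forces v2 = 0: every eigenvector is a multiple of (1, 0).
for v in itertools.product([-2.0, -1.0, 0.0, 1.0, 2.0], repeat=2):
    if v == (0.0, 0.0):
        continue
    w = A(v)
    parallel = (w[0] * v[1] - w[1] * v[0]) == 0.0
    assert parallel == (v[1] == 0.0)
```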
Having done that, we shall in Sect. 2.4 and especially Sect. 2.5 try to determine exactly what properties of the linear map further guarantee that it admits a basis of eigenvectors. In Sects. 2.6-2.8, we shall show that any system of equations can be transformed into a new system that looks like several uncoupled higher order equations. There is another rather intriguing way of solving linear differential equations by reducing them to recurrences. We will emphasize higher order equations, but it works equally well with systems. The goal is to transform the differential equation: ![ $${D}^{n}x + {a}_{ n-1}{D}^{n-1}x + \\cdots+ {a}_{ 1}Dx + {a}_{0}x = p\\left \(D\\right \)\\left \(x\\right \) = 0$$ ](A81414_1_En_2_Chapter_Equaw.gif) into something that can be solved using combinatorial methods. Assume that x is given by its MacLaurin expansion ![ $$\\begin{array}{rcl} x\\left \(t\\right \)& =& {\\sum\\nolimits }_{k=0}^{\\infty }\\left \({D}^{k}x\\right \)\\left \(0\\right \)\\frac{{t}^{k}} {k!} \\\\ & =& {\\sum\\nolimits }_{k=0}^{\\infty }{c}_{ k}\\frac{{t}^{k}} {k!}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ31.gif) The derivative is then given by ![ $$\\begin{array}{rcl} Dx& =& {\\sum\\nolimits }_{k=1}^{\\infty }{c}_{ k} \\frac{{t}^{k-1}} {\\left \(k - 1\\right \)!} \\\\ & =& {\\sum\\nolimits }_{k=0}^{\\infty }{c}_{ k+1}\\frac{{t}^{k}} {k!}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ32.gif) and more generally ![ $${D}^{l}x ={ \\sum\\nolimits }_{k=0}^{\\infty }{c}_{ k+l}\\frac{{t}^{k}} {k!}.$$ ](A81414_1_En_2_Chapter_Equax.gif) Thus, the derivative of x is simply a shift in the index for the sequence ![ $$\\left \({c}_{k}\\right \)$$ ](A81414_1_En_2_Chapter_IEq71.gif). 
The differential equation now takes the form ![ $$\\begin{array}{rcl} & {D}^{n}x + {a}_{n-1}{D}^{n-1}x + \\cdots+ {a}_{1}Dx + {a}_{0}x & \\\\ & ={ \\sum\\nolimits }_{k=0}^{\\infty }\\left \({c}_{k+n} + {a}_{n-1}{c}_{k+n-1} + \\cdots+ {a}_{1}{c}_{k+1} + {a}_{0}{c}_{k}\\right \)\\frac{{t}^{k}} {k!}.& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ33.gif) From this we can conclude that x is a solution if and only if the sequence c k solves the linear nth-order recurrence ![ $${c}_{k+n} + {a}_{n-1}{c}_{k+n-1} + \\cdots+ {a}_{1}{c}_{k+1} + {a}_{0}{c}_{k} = 0$$ ](A81414_1_En_2_Chapter_Equay.gif) or ![ $${c}_{k+n} = -\\left \({a}_{n-1}{c}_{k+n-1} + \\cdots+ {a}_{1}{c}_{k+1} + {a}_{0}{c}_{k}\\right \).$$ ](A81414_1_En_2_Chapter_Equaz.gif) For such a sequence, it is clear that we need to know the initial values c 0,..., c n − 1 in order to find the whole sequence. This corresponds to the initial value problem for the corresponding differential equation, as c k = (D k x)(0). The correspondence between systems ![ $$\\dot{x} = Ax$$ ](A81414_1_En_2_Chapter_IEq72.gif) and recurrences of vectors ![ $${c}_{n+1} = A{c}_{n}$$ ](A81414_1_En_2_Chapter_IEq73.gif) comes about by assuming that the solution to the differential equation looks like ![ $$\\begin{array}{rcl} x\\left \(t\\right \)& =& {\\sum\\nolimits }_{n=0}^{\\infty }{c}_{ n}\\frac{{t}^{n}} {n!}, \\\\ {c}_{n}& \\in & {\\mathbb{C}}^{n}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ34.gif) Finally, we point out that in Sect. 2.9, we offer an explicit algorithm for reducing systems of possibly higher order equations to independent higher order equations.

### 2.2.1 Exercises

1. Find the solution to the differential equations with the general initial values: x(t 0 ) = x 0, ![ $$\\dot{x}\\left \({t}_{0}\\right \) =\\dot{ {x}}_{0},$$ ](A81414_1_En_2_Chapter_IEq74.gif) and ![ $$\\ddot{x}\\left \({t}_{0}\\right \) =\\ddot{ {x}}_{0}$$ ](A81414_1_En_2_Chapter_IEq75.gif).
(a) ![ $$\\mathop{x}\\limits^{\\hbox {...}} - 3\\ddot{x} + 3\\dot{x} - x = 0$$ ](A81414_1_En_2_Chapter_IEq76.gif). (b) ![ $$\\mathop{x}\\limits^{\\hbox {...}} - 5\\ddot{x} + 8\\dot{x} - 4x = 0$$ ](A81414_1_En_2_Chapter_IEq77.gif). (c) ![ $$\\mathop{x}\\limits^{\\hbox {...}} + 6\\ddot{x} + 11\\dot{x} + 6x = 0$$ ](A81414_1_En_2_Chapter_IEq78.gif). 2. Find the complete solution to the initial value problems. (a) ![ $$\\left \[\\begin{array}{c} \\dot{x}\\\\ \\dot{y}\\end{array} \\right \] = \\left \[\\begin{array}{cc} 0&2\\\\ 1 &3\\end{array} \\right \]\\left \[\\begin{array}{c} x\\\\ y\\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_IEq79.gif) where ![ $$\\left \[\\begin{array}{c} x\\left \({t}_{0}\\right \) \\\\ y\\left \({t}_{0}\\right \)\\end{array} \\right \] = \\left \[\\begin{array}{c} {x}_{0} \\\\ {y}_{0}\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_IEq80.gif) (b) ![ $$\\left \[\\begin{array}{c} \\dot{x}\\\\ \\dot{y}\\end{array} \\right \] = \\left \[\\begin{array}{cc} 0&1\\\\ 1 &2\\end{array} \\right \]\\left \[\\begin{array}{c} x\\\\ y\\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_IEq81.gif) where ![ $$\\left \[\\begin{array}{c} x\\left \({t}_{0}\\right \) \\\\ y\\left \({t}_{0}\\right \)\\end{array} \\right \] = \\left \[\\begin{array}{c} {x}_{0} \\\\ {y}_{0}\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_IEq82.gif) 3. Find the real solution to the differential equations with the general initial values: xt 0 = x 0, ![ $$\\dot{x}\\left \({t}_{0}\\right \) =\\dot{ {x}}_{0},$$ ](A81414_1_En_2_Chapter_IEq83.gif) and ![ $$\\ddot{x}\\left \({t}_{0}\\right \) =\\ddot{ {x}}_{0}$$ ](A81414_1_En_2_Chapter_IEq84.gif) in the third-order cases. (a) ![ $$\\ddot{x} + x = 0$$ ](A81414_1_En_2_Chapter_IEq85.gif). (b) ![ $$\\mathop{x}\\limits^{\\hbox {...}} +\\dot{ x} = 0$$ ](A81414_1_En_2_Chapter_IEq86.gif). (c) ![ $$\\ddot{x} - 6\\dot{x} + 25x = 0$$ ](A81414_1_En_2_Chapter_IEq87.gif). 
(d) ![ $$\\mathop{x}\\limits^{\\hbox {...}} - 5\\ddot{x} + 19\\dot{x} + 25 = 0$$ ](A81414_1_En_2_Chapter_IEq88.gif). 4. Consider the vector space C ∞ a, b, ℂ n of infinitely differentiable curves in ℂ n and let z 1,..., z n ∈ C ∞ a, b, ℂ n . (a) Show that if we can find t 0 ∈ a, b so that the vectors z 1 t 0,..., z n t 0 ∈ ℂ n are linearly independent, then the functions z 1,..., z n ∈ C ∞ a, b, ℂ n are also linearly independent. (b) Find a linearly independent pair z 1, z 2 ∈ C ∞ a, b, ℂ 2 so that z 1 t, z 2 t ∈ ℂ 2 are linearly dependent for all t ∈ a, b. (c) Assume now that each z 1,..., z n solves the linear differential equation ![ $$\\dot{x} = Ax$$ ](A81414_1_En_2_Chapter_IEq89.gif). Show that if z 1 t 0,...,z n t 0 ∈ ℂ n are linearly dependent for some t 0, then z 1,..., z n ∈ C ∞ a, b, ℂ n are linearly dependent as well. 5. Let ![ $$p\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)$$ ](A81414_1_En_2_Chapter_IEq90.gif), where we allow multiplicities among the roots. (a) Show that ![ $$\\left \(D - \\lambda \\right \)\\left \(x\\right \) = f\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_IEq91.gif) has ![ $$x =\\exp \\left \(\\lambda t\\right \){\\int\\nolimits \\nolimits }_{0}^{t}\\exp \\left \(-\\lambda s\\right \)f\\left \(s\\right \)\\mathrm{d}s$$ ](A81414_1_En_2_Chapter_Equba.gif) as a solution. (b) Show that a solution x to pDx = f can be found by successively solving ![ $$\\begin{array}{rcl} \\left \(D - {\\lambda }_{1}\\right \)\\left \({z}_{1}\\right \)& =& f, \\\\ \\left \(D - {\\lambda }_{2}\\right \)\\left \({z}_{2}\\right \)& =& {z}_{1}, \\\\ \\vdots& & \\vdots \\\\ \\left \(D - {\\lambda }_{n}\\right \)\\left \({z}_{n}\\right \)& =& {z}_{n-1}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ35.gif) 6. 
Show that the initial value problem ![ $$\\begin{array}{rcl} \\dot{x}& =& Ax, \\\\ x\\left \({t}_{0}\\right \)& =& {x}_{0} \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ36.gif) can be solved "explicitly" if A is upper (or lower) triangular. This holds even in the case where the entries of A and b are functions of t. 7. Assume that xt is a solution to ![ $$\\dot{x} = Ax,$$ ](A81414_1_En_2_Chapter_IEq92.gif) where A ∈ Mat n ×n ℂ. (a) Show that the phase shifts ![ $${x}_{\\omega }\\left \(t\\right \) = x\\left \(t + \\omega \\right \)$$ ](A81414_1_En_2_Chapter_IEq93.gif) are also solutions. (b) Show that if the vectors xω1,..., xω n form a basis for ℂ n , then all solutions to ![ $$\\dot{x} = Ax$$ ](A81414_1_En_2_Chapter_IEq94.gif) are linear combinations of the phase-shifted solutions ![ $${x}_{{\\omega }_{1}},\\ldots,{x}_{{\\omega }_{n}}$$ ](A81414_1_En_2_Chapter_IEq95.gif). 8. Assume that x is a solution to pDx = 0, where ![ $$p\\left \(D\\right \) = {D}^{n} + \\cdots+ {a}_{1}D + {a}_{0}$$ ](A81414_1_En_2_Chapter_IEq96.gif). (a) Show that the phase shifts ![ $${x}_{\\omega }\\left \(t\\right \) = x\\left \(t + \\omega \\right \)$$ ](A81414_1_En_2_Chapter_IEq97.gif) are also solutions. (b) Show that, if the vectors ![ $$\\left \[\\begin{array}{c} x\\left \({\\omega }_{1}\\right \) \\\\ Dx\\left \({\\omega }_{1}\\right \)\\\\ \\vdots \\\\ {D}^{n-1}x\\left \({\\omega }_{1}\\right \)\\end{array} \\right \],\\ldots,\\left \[\\begin{array}{c} x\\left \({\\omega }_{n}\\right \) \\\\ Dx\\left \({\\omega }_{n}\\right \)\\\\ \\vdots \\\\ {D}^{n-1}x\\left \({\\omega }_{n}\\right \)\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbb.gif) form a basis for ℂ n , then all solutions to pDx = 0 are linear combinations of the phase shifted solutions ![ $${x}_{{\\omega }_{1}},\\ldots,{x}_{{\\omega }_{n}}$$ ](A81414_1_En_2_Chapter_IEq98.gif). 9. 
Let ![ $$p\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)$$ ](A81414_1_En_2_Chapter_IEq99.gif). Show that the higher order equation ![ $$L\\left \(y\\right \) = p\\left \(D\\right \)\\left \(y\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq100.gif) can be made into a system of equations ![ $$\\dot{x} - Ax = 0,$$ ](A81414_1_En_2_Chapter_IEq101.gif) where ![ $$A = \\left \[\\begin{array}{cccc} {\\lambda }_{1} & 1 && 0 \\\\ 0 &{\\lambda }_{2} & \\ddots \\\\ & & \\ddots & 1\\\\ 0 & & &{\\lambda }_{ n}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbc.gif) by choosing ![ $$x = \\left \[\\begin{array}{c} y \\\\ \\left \(D - {\\lambda }_{1}\\right \)y\\\\ \\vdots \\\\ \\left \(D - {\\lambda }_{1}\\right \)\\cdots \\left \(D - {\\lambda }_{n-1}\\right \)y\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equbd.gif) 10. Show that ptexpλt solves ![ $${\\left \(D - \\lambda \\right \)}^{k}x = 0$$ ](A81414_1_En_2_Chapter_IEq102.gif) if pt ∈ ℂt and degp ≤ k − 1. Conclude that ker((D − λ) k ) contains a k-dimensional subspace. 11. Let V = spanexpλ1 t,..., expλ n t, where λ1,..., λ n ∈ ℂ are distinct. (a) Show that expλ1 t,..., expλ n t form a basis for V. Hint: One way of doing this is to construct a linear isomorphism ![ $$\\begin{array}{rcl} L : V & \\rightarrow & {\\mathbb{C}}^{n} \\\\ L\\left \(f\\right \)& =& \\left \(f\\left \({t}_{1}\\right \),\\ldots,f\\left \({t}_{n}\\right \)\\right \) \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ37.gif) by selecting suitable points t 1,..., t n ∈ ℝ depending on λ1,..., λ n ∈ ℂ such that Lexpλ i t, i = 1,..., n form a basis. (b) Show that if x ∈ V, then Dx ∈ V. (c) Compute the matrix representation for the linear operator D : V -> V with respect to expλ1 t,..., expλ n t. (d) More generally, show that pD : V -> V, where ![ $$p\\left \(D\\right \) = {a}_{k}{D}^{k} + \\cdots+ {a}_{1}D + {a}_{0}{1}_{V }$$ ](A81414_1_En_2_Chapter_IEq103.gif). 
(e) Show that pD = 0 if and only if λ1,..., λ n are all roots of pt. 12. Let p ∈ ℂt and consider ![ $$\\ker \\left \(p\\left \(D\\right \)\\right \) = \\left \\{x : p\\left \(D\\right \)\\left \(x\\right \) = 0\\right \\},$$ ](A81414_1_En_2_Chapter_IEq104.gif) i.e., it is the space of solutions to pDx = 0. (a) Assuming unique solutions to initial value problems, show that ![ $${\\dim }_{\\mathbb{C}}\\ker \\left \(p\\left \(D\\right \)\\right \) =\\deg p = n.$$ ](A81414_1_En_2_Chapter_Eqube.gif) (b) Show that D : kerpD -> kerpD (see also Exercise 3 in Sect. 1.11). (c) Show that qD : kerpD -> kerpD for any polynomial qt ∈ ℂt. (d) Show that kerpD has a basis of the form x, Dx,..., D n − 1 x. Hint: Let x be the solution to pDx = 0 with the initial values ![ $$x\\left \(0\\right \) = Dx\\left \(0\\right \) = \\cdots= {D}^{n-2}x\\left \(0\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq105.gif) and ![ $${D}^{n-1}x\\left \(0\\right \) = 1$$ ](A81414_1_En_2_Chapter_IEq106.gif). 13. Let p ∈ ℝt and consider ![ $$ \\begin{array}{rcl}{ \\ker }_{\\mathbb{R}}\\left \(p\\left \(D\\right \)\\right \)& =& \\left \\{x : \\mathbb{R} \\rightarrow\\mathbb{R} : p\\left \(D\\right \)\\left \(x\\right \) = 0\\right \\}, \\\\ {\\ker }_{\\mathbb{C}}\\left \(p\\left \(D\\right \)\\right \)& =& \\left \\{z : \\mathbb{R} \\rightarrow\\mathbb{C} : p\\left \(D\\right \)\\left \(z\\right \) = 0\\right \\} \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ38.gif) i.e., the real-valued, respectively, complex-valued solutions. (a) Show that ![ $$x {\\in \\ker }_{\\mathbb{R}}\\left \(p\\left \(D\\right \)\\right \)$$ ](A81414_1_En_2_Chapter_IEq107.gif) if and only if x = Rez where ![ $$z {\\in \\ker }_{\\mathbb{C}}\\left \(p\\left \(D\\right \)\\right \)$$ ](A81414_1_En_2_Chapter_IEq108.gif). (b) Show that ![ $${\\dim }_{\\mathbb{C}}\\ker \\left \(p\\left \(D\\right \)\\right \) =\\deg p {=\\dim }_{\\mathbb{R}}\\ker \\left \(p\\left \(D\\right \)\\right \)$$ ](A81414_1_En_2_Chapter_IEq109.gif).
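To close this section, the recurrence translation of a differential equation described before the exercises can be tried out directly. The sketch below (pure Python; the equation ẍ − 3ẋ + 2x = 0 is an assumed example with characteristic roots 1 and 2, not one taken from the text) builds the sequence c k from c k+2 = 3c k+1 − 2c k , sums the truncated Maclaurin series, and compares it with the closed-form solution:

```python
import math

# Recurrence translation of x'' - 3x' + 2x = 0 (assumed example; roots 1, 2):
# c_{k+2} = 3 c_{k+1} - 2 c_k with c_k = (D^k x)(0).
c0, c1 = 1.0, 0.0           # initial values x(0), x'(0)
N = 30                      # truncation order of the Maclaurin series

c = [c0, c1]
for k in range(N - 1):
    c.append(3 * c[-1] - 2 * c[-2])

def x_series(t):
    """Truncated Maclaurin series sum_k c_k t^k / k!."""
    return sum(ck * t**k / math.factorial(k) for k, ck in enumerate(c))

def x_exact(t):
    """Exact solution a*e^t + b*e^{2t} matching the same initial values."""
    a, b = 2 * c0 - c1, c1 - c0
    return a * math.exp(t) + b * math.exp(2 * t)

for t in (0.0, 0.5, 1.0):
    assert abs(x_series(t) - x_exact(t)) < 1e-8
```

For t of moderate size, 30 terms already match the exact solution to within roundoff; for larger t, more terms of the (everywhere convergent) series are needed.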
## 2.3 Eigenvalues We are now ready to give the abstract definitions for eigenvalues and eigenvectors. Definition 2.3.1. Consider a linear operator L : V -> V on a vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq110.gif). If we have a scalar ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq111.gif) and a vector x ∈ V − 0 such that Lx = λx, then we say that λ is an eigenvalue of L and x is an eigenvector for λ. If we add the zero vector to the space of eigenvectors for λ, then it can be identified with the subspace ![ $$\\ker \\left \(L - \\lambda {1}_{V }\\right \) = \\left \\{x \\in V : L\\left \(x\\right \) - \\lambda x = 0\\right \\} \\subset V.$$ ](A81414_1_En_2_Chapter_Equbf.gif) This is also called the eigenspace for λ. In many texts, this space is often denoted ![ $${E}_{\\lambda } =\\ker \\left \(L - \\lambda {1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equbg.gif) Eigenvalues are also called proper values or characteristic values in some texts. "Eigen" is a German adjective that often is translated as "own" or "proper" (think "property"). For linear operators defined by n ×n matrices, we can give a procedure for computing the eigenvalues/vectors using Gauss elimination. The more standard method that employs determinants can be found in virtually every other book on linear algebra and will be explained in Sect. 5.7. We start by considering a matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq112.gif). If we wish to find an eigenvalue λ for A, then we need to determine when there is a nontrivial solution to ![ $$\\left \(A - \\lambda {1}_{{\\mathbb{F}}^{n}}\\right \)\\left \(x\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq113.gif). 
In other words, the augmented system ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{11} - \\lambda &\\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{n1} & \\cdots &{\\alpha }_{nn} - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ \\vdots \\\\ 0 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbh.gif) should have a nontrivial solution. This is something we know how to deal with using Gauss elimination. The only complication is that if λ is simply an abstract number, then it can be a bit tricky to decide when we are allowed to divide by expressions that involve λ. Note that we do not necessarily need to carry the last column of zeros through the calculations as row reduction will never change those entries. Thus, we only need to do row reduction on ![ $$A - \\lambda {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_2_Chapter_IEq114.gif) or if convenient ![ $$\\lambda {1}_{{\\mathbb{F}}^{n}} - A$$ ](A81414_1_En_2_Chapter_IEq115.gif). Example 2.3.2. Assume that ![ $$\\mathbb{F} = \\mathbb{C}$$ ](A81414_1_En_2_Chapter_IEq116.gif) and let ![ $$A = \\left \[\\begin{array}{cccc} 0 &1&0&0\\\\ - 1 &0 &0 &0 \\\\ 0 &0&0&1\\\\ 0 &0 &1 &0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equbi.gif) Row reduction tells us the augmented system ![ $$\\left \[A - \\lambda {1}_{{\\mathbb{C}}^{4}}\\vert 0\\right \]$$ ](A81414_1_En_2_Chapter_IEq117.gif) becomes ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccccc} - \\lambda & 1 & 0 & 0 &0 \\\\ - 1 & - \\lambda & 0 & 0 &0 \\\\ 0 & 0 & - \\lambda & 1 &0 \\\\ 0 & 0 & 1 & - \\lambda &0 \\end{array} \\right \]\\begin{array}{l} \\mbox{ Interchange rows 1 and 2.}\\\\ \\\\ \\mbox{Interchange rows 3 and 4.}\\end{array} & \\\\ & \\left \[\\begin{array}{ccccc} - 1 & - \\lambda & 0 & 0 &0 \\\\ - \\lambda & 1 & 0 & 0 &0 \\\\ 0 & 0 & 1 & - \\lambda &0 \\\\ 0 & 0 & - \\lambda & 1 &0 \\end{array} \\right \]\\begin{array}{l} \\mbox{ Use row 1 to eliminate}-\\lambda \\ \\mbox{ in row 2.}\\\\ \\\\ \\mbox{Use row 3 to 
eliminate}-\\lambda \\ \\mbox{in row 4.}\\end{array} & \\\\ & \\left \[\\begin{array}{ccccc} - 1& - \\lambda&0& 0 &0 \\\\ 0 &1 + {\\lambda }^{2} & 0& 0 &0 \\\\ 0 & 0 &1& - \\lambda&0 \\\\ 0 & 0 &0&1 - {\\lambda }^{2} & 0 \\end{array} \\right \] & \\\\ & \\left \[\\begin{array}{ccccc} 1& \\lambda&0& 0 &0 \\\\ 0&1 + {\\lambda }^{2} & 0& 0 &0 \\\\ 0& 0 &1& - \\lambda&0 \\\\ 0& 0 &0&1 - {\\lambda }^{2} & 0 \\end{array} \\right \]\\begin{array}{l} \\mbox{ Multiply the first row by\\:-}\\mbox{ 1}.\\end{array} & \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ39.gif) Thus, ![ $$\\left \(A - \\lambda {1}_{{\\mathbb{C}}^{4}}\\right \)\\left \(x\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq118.gif) has nontrivial solutions precisely when ![ $$1 + {\\lambda }^{2} = 0$$ ](A81414_1_En_2_Chapter_IEq119.gif) or ![ $$1 - {\\lambda }^{2} = 0$$ ](A81414_1_En_2_Chapter_IEq120.gif). Therefore, the eigenvalues are λ = ± i and λ = ± 1. Note that the two conditions can be multiplied into one characteristic equation of degree 4: ![ $$\\left \(1 + {\\lambda }^{2}\\right \)\\left \(1 - {\\lambda }^{2}\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq121.gif). Having found the eigenvalues, we then need to insert them into the augmented system and find the eigenvectors. Since the system has already been reduced, this is quite simple.
First, let λ = ± i so that the augmented system is ![ $$\\left \[\\begin{array}{cccc} 1& \\pm i&0& 0\\\\ 0 & 0 &0 & 0 \\\\ 0& 0 &1& \\mp i\\\\ 0 & 0 &0 & 2 \\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbj.gif) Thus, we get ![ $$\\left \[\\begin{array}{c} 1\\\\ i \\\\ 0\\\\ 0 \\end{array} \\right \] \\leftrightarrow\\lambda= i\\text{ and }\\left \[\\begin{array}{c} i\\\\ 1 \\\\ 0\\\\ 0 \\end{array} \\right \] \\leftrightarrow\\lambda= -i$$ ](A81414_1_En_2_Chapter_Equbk.gif) Next, we let λ = ± 1 and consider ![ $$\\left \[\\begin{array}{cccc} 1& \\pm1&0& 0\\\\ 0 & 2 &0 & 0 \\\\ 0& 0 &1& \\mp1\\\\ 0 & 0 &0 & 0 \\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbl.gif) to get ![ $$\\left \[\\begin{array}{c} 0\\\\ 0 \\\\ 1\\\\ 1 \\end{array} \\right \] \\leftrightarrow1\\text{ and }\\left \[\\begin{array}{c} 0\\\\ 0 \\\\ - 1\\\\ 1 \\end{array} \\right \] \\leftrightarrow -1$$ ](A81414_1_En_2_Chapter_Equbm.gif) Example 2.3.3. Let ![ $$A = \\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\alpha }_{nn} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbn.gif) be upper triangular, i.e., all entries below the diagonal are zero: α ij = 0 if i > j. Then, we are looking at ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{11} - \\lambda &\\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\alpha }_{nn} - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ \\vdots \\\\ 0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equbo.gif) Note again that we do not perform any divisions so as to make the diagonal entries 1. This is because if they are zero, we evidently have a nontrivial solution and that is what we are looking for. Therefore, the eigenvalues are λ = α11,..., α nn . 
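The eigenpairs found in Example 2.3.2 are easy to verify directly: applying A to each claimed eigenvector should simply scale it by the corresponding eigenvalue. A minimal pure-Python check (using Python's built-in complex numbers; no libraries assumed):

```python
# Numerical check of Example 2.3.2: A v = lam v for each claimed eigenpair.
A = [[0, 1, 0, 0],
     [-1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]

pairs = [
    (1j,  [1, 1j, 0, 0]),
    (-1j, [1j, 1, 0, 0]),
    (1,   [0, 0, 1, 1]),
    (-1,  [0, 0, -1, 1]),
]

def matvec(M, v):
    """Matrix-vector product for a 4x4 matrix given as nested lists."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

for lam, v in pairs:
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(4))
```

The same check works for any matrix; with exact integer and complex entries as here, the assertions hold with zero residual.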
Note that the eigenvalues are precisely the roots of the polynomial that we get by multiplying the diagonal entries. This polynomial is going to be proportional to the characteristic polynomial of A. The next examples show what we have to watch out for when performing the elementary row operations. Example 2.3.4. Let ![ $$A = \\left \[\\begin{array}{ccc} 1 & 2 &4\\\\ - 1 & 0 &2 \\\\ 3 & - 1&5 \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equbp.gif) and perform row operations on ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccc} 1 - \\lambda & 2 & 4 \\\\ - 1 & - \\lambda & 2 \\\\ 3 & - 1 &5 - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]\\begin{array}{l} \\mbox{ Change sign in row 2.}\\\\ \\mbox{Interchange rows 1 and 2.}\\end{array} & \\\\ & \\left \[\\begin{array}{ccc} 1 & \\lambda& - 2 \\\\ 1 - \\lambda & 2 & 4 \\\\ 3 & - 1&5 - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]\\begin{array}{l} \\\\ \\mbox{ Use row 1 to row reduce column\\,1.}\\end{array} & \\\\ & \\left \[\\begin{array}{ccc} 1& \\lambda& - 2 \\\\ 0&2 - \\lambda+ {\\lambda }^{2} & 6 - 2\\lambda\\\\ 0& - 1 - 3\\lambda&11 - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]\\begin{array}{l} \\\\ \\mbox{ Interchange rows 2 and 3.} \\end{array} & \\\\ & \\left \[\\begin{array}{ccc} 1& \\lambda& - 2 \\\\ 0& - 1 - 3\\lambda&11 - \\lambda\\\\ 0&2 - \\lambda+ {\\lambda }^{2} & 6 - 2\\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]\\begin{array}{l} \\mbox{ Change sign in row 2.} \\\\ \\mbox{ Use row 2 to cancel}2 - \\lambda+ {\\lambda }^{2}\\ \\mbox{ in row 3;} \\\\ \\mbox{ this requires that we have}1 + 3\\lambda \\neq 0!\\end{array} & \\\\ & \\left \[\\begin{array}{ccc} 1& \\lambda& - 2 \\\\ 0&1 + 3\\lambda & - 11 + \\lambda\\\\ 0& 0 &6 - 2\\lambda-\\frac{2-\\lambda +{\\lambda }^{2}} {1+3\\lambda } \\left \(-11 + \\lambda \\right \) \\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 
0 \\end{array} \\right \] & \\\\ & \\left \[\\begin{array}{ccc} 1& \\lambda& - 2 \\\\ 0&1 + 3\\lambda & - 11 + \\lambda\\\\ 0& 0 &\\frac{28+3\\lambda +6{\\lambda }^{2}-{\\lambda }^{3}} {1+3\\lambda } \\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \] & \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ40.gif) Note that we are not allowed to have ![ $$1 + 3\\lambda= 0$$ ](A81414_1_En_2_Chapter_IEq122.gif) in this formula. If ![ $$1 + 3\\lambda= 0,$$ ](A81414_1_En_2_Chapter_IEq123.gif) then we note that ![ $$2 - \\lambda+ {\\lambda }^{2}\\neq 0$$ ](A81414_1_En_2_Chapter_IEq124.gif) and 11 − λ≠0 so that the third display ![ $$\\left \[\\begin{array}{ccc} 1& \\lambda& - 2 \\\\ 0&2 - \\lambda+ {\\lambda }^{2} & 6 - 2\\lambda\\\\ 0& - 1 - 3\\lambda&11 - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbq.gif) guarantees that there are no nontrivial solutions in that case. This means that our analysis is valid and that multiplying the diagonal entries will get us the characteristic polynomial ![ $$28 + 3\\lambda+ 6{\\lambda }^{2} - {\\lambda }^{3}$$ ](A81414_1_En_2_Chapter_IEq125.gif). First, observe that 7 is a root of this polynomial. We can then find the other two roots by dividing ![ $$\\frac{28 + 3\\lambda+ 6{\\lambda }^{2} - {\\lambda }^{3}} {\\lambda- 7} = -{\\lambda }^{2} - \\lambda- 4$$ ](A81414_1_En_2_Chapter_Equbr.gif) and using the quadratic formula: ![ $$-\\frac{1} {2} + \\frac{1} {2}i\\sqrt{15},-\\frac{1} {2} -\\frac{1} {2}i\\sqrt{15}$$ ](A81414_1_En_2_Chapter_IEq126.gif). These examples suggest a preliminary definition of the characteristic polynomial. Definition 2.3.5. 
The characteristic polynomial of ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq127.gif) is a polynomial ![ $${\\chi }_{A}\\left \(\\lambda \\right \) \\in\\mathbb{F}\\left \[\\lambda \\right \]$$ ](A81414_1_En_2_Chapter_IEq128.gif) of degree n such that all eigenvalues of A are roots of χ A . In addition, we scale the polynomial so that the leading term is λ n , i.e., the polynomial is monic. To get a better understanding of the process that leads us to the characteristic polynomial, we study the 2 ×2 and 3 ×3 cases as well as a few specialized n ×n situations. Starting with ![ $$A \\in \\mathrm{{ Mat}}_{2\\times 2}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq129.gif), we investigate ![ $$A-\\lambda {1}_{{\\mathbb{F}}^{2}} = \\left \[\\begin{array}{cc} {\\alpha }_{11} - \\lambda & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} - \\lambda\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equbs.gif) If α21 = 0, the matrix is in upper triangular form and the characteristic polynomial is ![ $$\\begin{array}{rcl}{ \\chi }_{A}& =& \\left \({\\alpha }_{11} - \\lambda \\right \)\\left \({\\alpha }_{22} - \\lambda \\right \) \\\\ & =& {\\lambda }^{2} -\\left \({\\alpha }_{ 11} + {\\alpha }_{22}\\right \)\\lambda+ {\\alpha }_{11}{\\alpha }_{22}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ41.gif) If α21≠0, then we switch the first and second row and then eliminate the bottom entry in the first column: ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cc} {\\alpha }_{11} - \\lambda & {\\alpha }_{12} \\\\ {\\alpha }_{21} & {\\alpha }_{22} - \\lambda\\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{cc} {\\alpha }_{21} & {\\alpha }_{22} - \\lambda\\\\ {\\alpha }_{11} - \\lambda & {\\alpha }_{12} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{cc} {\\alpha }_{21} & {\\alpha }_{22} - \\lambda\\\\ 0 &{\\alpha }_{12} - \\frac{1} {{\\alpha }_{21}} \\left \({\\alpha }_{11} - \\lambda \\right 
\)\\left \({\\alpha }_{22} - \\lambda \\right \) \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ42.gif) Multiplying the diagonal entries gives ![ $$\\begin{array}{rcl} & & {\\alpha }_{21}{\\alpha }_{12} -\\left \({\\alpha }_{11} - \\lambda \\right \)\\left \({\\alpha }_{22} - \\lambda \\right \) = -{\\lambda }^{2} + \\left \({\\alpha }_{ 11} + {\\alpha }_{22}\\right \)\\lambda\\\\ & & \\quad - {\\alpha }_{11}{\\alpha }_{22} + {\\alpha }_{21}{\\alpha }_{12}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ43.gif) In both cases, the characteristic polynomial of a 2 ×2 matrix is given by ![ $$\\begin{array}{rcl}{ \\chi }_{A}& =& {\\lambda }^{2} -\\left \({\\alpha }_{ 11} + {\\alpha }_{22}\\right \)\\lambda+ \\left \({\\alpha }_{11}{\\alpha }_{22} - {\\alpha }_{21}{\\alpha }_{12}\\right \) \\\\ & =& {\\lambda }^{2} -\\mathrm{ tr}\\left \(A\\right \)\\lambda+\\det \\left \(A\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ44.gif) We now make an attempt at the case where ![ $$A \\in \\mathrm{{ Mat}}_{3\\times 3}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq130.gif). Thus, we consider ![ $$A-\\lambda {1}_{{\\mathbb{F}}^{3}} = \\left \[\\begin{array}{ccc} {\\alpha }_{11} - \\lambda & {\\alpha }_{12} & {\\alpha }_{13} \\\\ {\\alpha }_{21} & {\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equbt.gif) When ![ $${\\alpha }_{21} = {\\alpha }_{31} = 0$$ ](A81414_1_En_2_Chapter_IEq131.gif), there is nothing to do in the first column, and we are left with the bottom right 2 ×2 matrix to consider. This is done as above. If α21 = 0 and α31≠0, then we switch the first and third rows and eliminate the last entry in the first column.
This will look like ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccc} {\\alpha }_{11} - \\lambda & {\\alpha }_{12} & {\\alpha }_{13} \\\\ 0 &{\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\\\ 0 &{\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ {\\alpha }_{11} - \\lambda & {\\alpha }_{12} & {\\alpha }_{13} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\\\ 0 &{\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ 0 & \\alpha \\lambda+ \\beta& p\\left \(\\lambda \\right \) \\end{array} \\right \]& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ45.gif) where p has degree 2. If αλ + β is proportional to α22 − λ, then we can eliminate it to get an upper triangular matrix. Otherwise, we can still eliminate αλ by multiplying the second row by α and adding it to the third row. This leads us to a matrix of the form ![ $$\\left \[\\begin{array}{ccc} {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\\\ 0 &{\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ 0 & {\\beta }^{{\\prime}} & {p}^{{\\prime}}\\left \(\\lambda \\right \) \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equbu.gif) where β ′ is a scalar and p ′ a polynomial of degree 2. If β ′ = 0 we are finished. Otherwise, we switch the second and third rows and eliminate α22 − λ using β ′ . If α21≠0, then we switch the first two rows and cancel below the diagonal in the first column. 
This gives us something like ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{ccc} {\\alpha }_{11} - \\lambda & {\\alpha }_{12} & {\\alpha }_{13} \\\\ {\\alpha }_{21} & {\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} {\\alpha }_{21} & {\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ {\\alpha }_{11} - \\lambda & {\\alpha }_{12} & {\\alpha }_{13} \\\\ {\\alpha }_{31} & {\\alpha }_{32} & {\\alpha }_{33} - \\lambda\\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} {\\alpha }_{21} & {\\alpha }_{22} - \\lambda & {\\alpha }_{23} \\\\ 0 & p\\left \(\\lambda \\right \) & {\\alpha }_{13}^{{\\prime}} \\\\ 0 & {q}^{{\\prime}}\\left \(\\lambda \\right \) &q\\left \(\\lambda \\right \) \\end{array} \\right \],& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ46.gif) where p has degree 2 and q, q ′ have degree 1. If q ′ = 0, we are finished. Otherwise, we switch the last two rows. If q ′ divides p, we can eliminate p to get an upper triangular matrix. If q ′ does not divide p, then we can still eliminate the degree 2 term in p to reduce it to a polynomial of degree 1. This lands us in a situation similar to what we ended up with when α21 = 0. So we can finish using the same procedure. Note that we avoided making any illegal moves in the above procedure. It is easy to formalize this procedure for n ×n matrices. The idea is simply to treat λ as a variable and the entries as polynomials. To eliminate entries, we then use polynomial division to reduce the degrees of entries until they can be eliminated. Since we wish to treat λ as a variable, we shall rename it t when doing the Gauss elimination and only use λ for the eigenvalues and roots of the characteristic polynomial. More precisely, we claim the following: Theorem 2.3.6. 
Given ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq132.gif) , there is a row reduction procedure that leads to a decomposition ![ $$\\left \(t{1}_{{\\mathbb{F}}^{n}} - A\\right \) = PU,$$ ](A81414_1_En_2_Chapter_IEq133.gif) where ![ $$U = \\left \[\\begin{array}{cclc} {p}_{1}\\left \(t\\right \)& {_\\ast} &\\cdots & {_\\ast} \\\\ 0 &{p}_{2}\\left \(t\\right \)&\\cdots & {_\\ast}\\\\ \\vdots & \\vdots &\\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{p}_{n}\\left \(t\\right \) \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbv.gif) with ![ $${p}_{1},\\ldots,{p}_{n} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq134.gif) being monic and unique, and P is the product of the elementary matrices: 1. I kl interchanging rows k and l. 2. R kl rt multiplies row l by ![ $$r\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq135.gif) and adds it to row k. 3. M k α multiplies row k by ![ $$\\alpha\\in\\mathbb{F} -\\left \\{0\\right \\}$$ ](A81414_1_En_2_Chapter_IEq136.gif). Proof. The procedure for obtaining the upper triangular form works as with row reduction with the twist that we think of all entries as being polynomials. Starting with the first column, we look at all entries at or below the diagonal. We then select the nonzero entry with the lowest degree and make a row interchange to place this entry on the diagonal. Using that entry, we use polynomial division and the operation in (2) to reduce the degrees of all the entries below the diagonal. The degrees of all the entries below the diagonal are now strictly smaller than the degree of the diagonal entry. Moreover, if the diagonal entry actually divided a specific entry below the diagonal, then we get a 0 in that entry. We now repeat this process on the same column until we end up with a situation where all entries below the diagonal are 0. Next, we must check that we actually get nonzero polynomials on the diagonal. 
This is clear for the first column as the first diagonal entry is nonzero. Should we end up with a situation where all entries on and below the diagonal vanish, then all values of ![ $$t \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq137.gif) must be eigenvalues. However, as we shall prove in Lemma 2.5.6, it is not possible for A to have more than n eigenvalues. So we certainly obtain a contradiction if we assume that ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq138.gif) has characteristic zero as it will then have infinitely many elements. It is possible to use a more direct argument by carefully examining what happens in the process we described. Next, we can multiply each row by a suitable nonzero scalar to ensure that the polynomials on the diagonal are monic. Finally, to see that the polynomials on the diagonal are unique, we note that the matrix P is invertible and a product of elementary matrices. So if we have PU = QV where U and V are both upper triangular, then U = RV and ![ $$V =\\tilde{ R}U$$ ](A81414_1_En_2_Chapter_IEq139.gif) where both R and ![ $$\\tilde{R}$$ ](A81414_1_En_2_Chapter_IEq140.gif) are matrices whose entries are polynomials. (In fact, they are products of the elementary matrices but we will not use that.) We claim that R and ![ $$\\tilde{R}$$ ](A81414_1_En_2_Chapter_IEq141.gif) are also upper triangular with all diagonal entries equal to 1. Clearly, this will show that the diagonal entries in U are unique. The proof goes by induction on n.
For a general n, write RU = V more explicitly as ![ $$\\left \[\\begin{array}{cccc} {r}_{11} & {r}_{12} & \\cdots & {r}_{1n} \\\\ {r}_{21} & {r}_{22}\\\\ \\vdots & & \\ddots \\\\ {r}_{n1} & & & {r}_{nn} \\end{array} \\right \]\\left \[\\begin{array}{cccc} {p}_{1} & {p}_{12} & \\cdots &{p}_{1n} \\\\ 0 & {p}_{2}\\\\ \\vdots & & \\ddots \\\\ 0 & 0 & & {p}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{cccc} {q}_{1} & {q}_{12} & \\cdots &{q}_{1n} \\\\ 0 & {q}_{2}\\\\ \\vdots & & \\ddots \\\\ 0 & 0 & & {q}_{n} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbw.gif) and note that the entries in the first column on the right-hand side satisfy ![ $$\\begin{array}{rcl} {r}_{11}{p}_{1}& =& {q}_{1} \\\\ {r}_{21}{p}_{1}& =& 0 \\\\ & \\vdots & \\\\ {r}_{n1}{p}_{1}& =& 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ47.gif) Since p 1 is nontrivial, it follows that ![ $${r}_{21} = \\cdots= {r}_{n1} = 0$$ ](A81414_1_En_2_Chapter_IEq142.gif). A similar argument shows that the entries in the first column of ![ $$\\tilde{R}$$ ](A81414_1_En_2_Chapter_IEq143.gif) satisfy ![ $$\\tilde{{r}}_{21} = \\cdots=\\tilde{ {r}}_{n1} = 0$$ ](A81414_1_En_2_Chapter_IEq144.gif). Next, we note that r 11 p 1 = q 1 and ![ $$\\tilde{{r}}_{11}{q}_{1} = {p}_{1}$$ ](A81414_1_En_2_Chapter_IEq145.gif) showing that ![ $${r}_{11} =\\tilde{ {r}}_{11} = 1$$ ](A81414_1_En_2_Chapter_IEq146.gif). This shows that our claim holds when n = 1. 
When n > 1, we obtain ![ $$\\left \[\\begin{array}{cccc} {r}_{22} & {r}_{23} & \\cdots & {r}_{2n} \\\\ {r}_{32} & {r}_{33}\\\\ \\vdots & & \\ddots \\\\ {r}_{n2} & & & {r}_{nn} \\end{array} \\right \]\\left \[\\begin{array}{cccc} {p}_{2} & {p}_{23} & \\cdots &{p}_{2n} \\\\ 0 & {p}_{3}\\\\ \\vdots & & \\ddots \\\\ 0 & 0 & & {p}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{cccc} {q}_{2} & {q}_{23} & \\cdots &{q}_{2n} \\\\ 0 & {q}_{3}\\\\ \\vdots & & \\ddots \\\\ 0 & 0 & & {q}_{n} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equbx.gif) after deleting the first row and column in the matrices. This allows us to use induction to finish the proof. □ This gives us a solid definition of the characteristic polynomial although it is as yet not completely clear why it has degree n. A very similar construction will be given in Sect. 2.9. The main difference is that it also uses column operations. The advantage of that more enhanced construction is that it calculates more invariants. In addition, it shows that the characteristic polynomial has degree n and remains the same for similar matrices. Definition 2.3.7. The characteristic polynomial of ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq147.gif) is the monic polynomial ![ $${\\chi }_{A}\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq148.gif) we obtain by applying Gauss elimination to ![ $$A - t{1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_2_Chapter_IEq149.gif) or ![ $$t{1}_{{\\mathbb{F}}^{n}} - A$$ ](A81414_1_En_2_Chapter_IEq150.gif) until it is in upper triangular form and then multiplying the monic polynomials in the diagonal entries, i.e., χ A (t) = p 1 (t)p 2 (t)⋯p n (t). The next example shows how the proof of Theorem 2.3.6 works in a specific example. Example 2.3.8.
Let ![ $$A = \\left \[\\begin{array}{ccc} 1&2& 3\\\\ 0 &2 & 4 \\\\ 2&1& - 1 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equby.gif) The calculations go as follows: ![ $$\\begin{array}{rcl} A - t{1}_{{\\mathbb{F}}^{3}} =& \\left \[\\begin{array}{ccc} 1 - t& 2 & 3\\\\ 0 &2 - t & 4 \\\\ 2 & 1 & - 1 - t \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2 & 1 & - 1 - t\\\\ 0 &2 - t & 4 \\\\ 1 - t& 2 & 3 \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2& 1 & - 1 - t\\\\ 0 & 2 - t & 4 \\\\ 0&2 -\\frac{1-t} {2} & 3 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2& 1 & - 1 - t\\\\ 0 & 2 - t & 4 \\\\ 0&\\frac{3} {2} + \\frac{t} {2} & 3 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2& 1 & - 1 - t\\\\ 0 & 2 - t & 4 \\\\ 0&\\frac{3} {2} + 1&5 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2& 1 & - 1 - t \\\\ 0& \\frac{5} {2} & 5 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\\\ 0&2 - t& 4 \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 2& 1 & - 1 - t \\\\ 0&\\frac{5} {2} & 5 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\\\ 0& 0 &4 - 2\\frac{2-t} {5} \\left \(5 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {2} \\right \) \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccc} 1&\\frac{1} {2} & \\frac{-1-t} {2} \\\\ 0& 1 & 2 + \\frac{\\left \(1-t\\right \)\\left \(1+t\\right \)} {5} \\\\ 0& 0 &{t}^{3} - 2{t}^{2} - 11t + 2 \\end{array} \\right \],& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ48.gif) and the characteristic polynomial is ![ $${\\chi }_{A}\\left \(t\\right \) = {t}^{3} - 2{t}^{2} - 11t + 2$$ ](A81414_1_En_2_Chapter_Equbz.gif) When the matrix A can be written in block triangular form, it becomes somewhat easier to calculate the characteristic polynomial. 
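The row reduction above takes place over rational functions, which is awkward to mechanize exactly. As an independent sanity check of the result, the sketch below computes the coefficients of det(t·1 − A) by cofactor expansion with exact polynomial arithmetic; this is a different method (determinants are only introduced in Sect. 5.7), and all helper names are ours:

```python
def padd(p, q):
    # add two polynomials given as coefficient lists, lowest degree first
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    # multiply two polynomials given as coefficient lists
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def char_poly(A):
    """Coefficients of det(t*1 - A), lowest degree first, computed by
    cofactor expansion with polynomial entries (exact integer arithmetic)."""
    n = len(A)
    # entries of t*1 - A: t - a_ii on the diagonal, -a_ij off it
    M = [[[-A[i][j], 1] if i == j else [-A[i][j]] for j in range(n)]
         for i in range(n)]

    def det(M):
        if len(M) == 1:
            return M[0][0]
        total = [0]
        for j in range(len(M)):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            term = pmul(M[0][j], det(minor))
            total = padd(total, [(-1) ** j * c for c in term])
        return total

    return det(M)

A = [[1, 2, 3], [0, 2, 4], [2, 1, -1]]
print(char_poly(A))   # [2, -11, -2, 1], i.e. t^3 - 2t^2 - 11t + 2
```

The printed coefficients, lowest degree first, agree with the χ A (t) = t³ − 2t² − 11t + 2 found in Example 2.3.8.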
Lemma 2.3.9. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq151.gif) has the form ![ $$A = \\left \[\\begin{array}{cc} {A}_{11} & {A}_{12} \\\\ 0 &{A}_{22} \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equca.gif) where ![ $${A}_{11} \\in \\mathrm{{ Mat}}_{k\\times k}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_2_Chapter_IEq152.gif) ![ $${A}_{22} \\in \\mathrm{{ Mat}}_{\\left \(n-k\\right \)\\times \\left \(n-k\\right \)}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_2_Chapter_IEq153.gif) and ![ $${A}_{12} \\in \\mathrm{{ Mat}}_{k\\times \\left \(n-k\\right \)}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_2_Chapter_IEq154.gif) then ![ $${\\chi }_{A}\\left \(t\\right \) = {\\chi }_{{A}_{11}}\\left \(t\\right \){\\chi }_{{A}_{22}}\\left \(t\\right \).$$ ](A81414_1_En_2_Chapter_Equcb.gif) Proof. To compute χ A (t), we do row operations on ![ $$t{1}_{{\\mathbb{F}}^{n}}-A = \\left \[\\begin{array}{cc} t{1}_{{\\mathbb{F}}^{k}} - {A}_{11} & - {A}_{12} \\\\ 0 &t{1}_{{\\mathbb{F}}^{n-k}} - {A}_{22} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equcc.gif) This can be done by first doing row operations on the first k rows leading to a situation that looks like ![ $$\\left \[\\begin{array}{cc} \\begin{array}{ccc} {q}_{1}\\left \(t\\right \)&& {_\\ast}\\\\ &\\ddots \\\\ 0 &&{q}_{k}\\left \(t\\right \) \\end{array} & {_\\ast} \\\\ 0 &t{1}_{{\\mathbb{F}}^{n-k}} - {A}_{22} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equcd.gif) Having accomplished this, we then do row operations on the last n − k rows.
This yields ![ $$\\left \[\\begin{array}{cc} \\begin{array}{ccc} {p}_{1}\\left \(t\\right \)&& {_\\ast}\\\\ &\\ddots \\\\ 0 &&{p}_{k}\\left \(t\\right \) \\end{array} & {_\\ast} \\\\ 0 &\\begin{array}{ccc} {r}_{1}\\left \(t\\right \)&& {_\\ast}\\\\ &\\ddots \\\\ 0 &&{r}_{n-k}\\left \(t\\right \) \\end{array} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equce.gif) As these two sets of operations do not depend on each other, we see that ![ $$\\begin{array}{rcl} {87.60004pt} {\\chi }_{A}\\left \(t\\right \)& =& {q}_{1}\\left \(t\\right \)\\cdots {q}_{k}\\left \(t\\right \){r}_{1}\\left \(t\\right \)\\cdots {r}_{n-k}\\left \(t\\right \) \\\\ {87.60004pt} & =& {\\chi }_{{A}_{11}}\\left \(t\\right \){\\chi }_{{A}_{22}}\\left \(t\\right \). {132.60004pt} \\square \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ49.gif) Finally, we need to figure out how this matrix procedure generates eigenvalues for general linear maps L : V -> V. In case V is finite-dimensional, we can simply pick a basis and then study the matrix representation ![ $$\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq155.gif). The diagram ![ $$\\begin{array}{ccc} V & \\frac{L} {\\rightarrow } & V\\\\ \\uparrow &&\\uparrow\\\\ {\\mathbb{F}}^{n}&\\frac{\\left \[L\\right \]} {\\rightarrow } &{\\mathbb{F}}^{n} \\end{array}$$ ](A81414_1_En_2_Chapter_Equcf.gif) then quickly convinces us that eigenvectors in ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_2_Chapter_IEq156.gif) for ![ $$\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq157.gif) are mapped to eigenvectors in V for L without changing the eigenvalue, i.e., ![ $$\\left \[L\\right \]\\xi= \\lambda \\xi $$ ](A81414_1_En_2_Chapter_Equcg.gif) is equivalent to ![ $$Lx = \\lambda x$$ ](A81414_1_En_2_Chapter_Equch.gif) if ![ $$\\xi\\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_2_Chapter_IEq158.gif) is the coordinate vector for x ∈ V. Thus, we define the characteristic polynomial of L as χ L (t) = χ [L] (t).
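This definition presumes that different matrix representations of L give the same characteristic polynomial, a fact discussed next and proved only later. As an illustration only, here is a quick check that a 2 × 2 matrix and a conjugate B⁻¹AB share their characteristic polynomial, using the closed form t² − (a + d)t + (ad − bc); the matrices and helper names are ours:

```python
def char2(a, b, c, d):
    """Characteristic polynomial of [[a, b], [c, d]] as coefficients
    [constant, linear, quadratic]: t^2 - (a + d) t + (ad - bc)."""
    return [a * d - b * c, -(a + d), 1]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[1, 1], [0, 1]]          # unimodular, so its inverse is again integral
Binv = [[1, -1], [0, 1]]

C = matmul(matmul(Binv, A), B)   # the same operator expressed in another basis
assert char2(*A[0], *A[1]) == char2(*C[0], *C[1])   # both give t^2 - 5t - 2
```

One numerical example proves nothing, of course; the general statement is settled with determinants in Sect. 5.7.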
While we do not have a problem with finding eigenvalues for L by finding them for ![ $$\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq159.gif), it is less clear that χ L (t) becomes well defined with this definition. To see that it is well defined, we would have to show that ![ $${\\chi }_{\\left \[L\\right \]}\\left \(t\\right \) = {\\chi }_{{B}^{-1}\\left \[L\\right \]B}\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_IEq160.gif) where B is the matrix transforming one basis into the other. This is best done using determinants (see Sect. 5.7). Alternately, one would have to use the definition of the characteristic polynomial given in Sect. 2.7 which can be computed with a more elaborate procedure that uses both row and column operations (see Sect. 2.9). For now, we are going to take this on faith. Note, however, that computing χ L (t) does give us a rigorous method for finding the eigenvalues of L. In particular, all of the matrix representations for L must have the same eigenvalues. Thus, there is nothing wrong with searching for eigenvalues using a fixed matrix representation. In the case where ![ $$\\mathbb{F} = \\mathbb{Q}$$ ](A81414_1_En_2_Chapter_IEq161.gif) or ℝ, we can still think of ![ $$\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq162.gif) as a complex matrix. As such, we might get complex eigenvalues that do not lie in the field ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq163.gif). These roots of χ L cannot be eigenvalues for L as we are not allowed to multiply elements in V by complex numbers. Example 2.3.10.
We now need to prove that our method for computing the characteristic polynomial of a matrix gives us the expected answer for the differential equation defined by the operator ![ $$L = {D}^{n} + {\\alpha }_{ n-1}{D}^{n-1} + \\cdots+ {\\alpha }_{ 1}D + {\\alpha }_{0}.$$ ](A81414_1_En_2_Chapter_Equci.gif) The corresponding system is ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& \\dot{x} - Ax \\\\ & =& \\dot{x} -\\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1} \\end{array} \\right \]x \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ50.gif) So we consider the matrix ![ $$A = \\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcj.gif) and with it ![ $$t{1}_{{\\mathbb{F}}^{n}}-A = \\left \[\\begin{array}{cccc} t & - 1&\\cdots & 0\\\\ 0 & t & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & - 1\\\\ {\\alpha }_{ 0} & {\\alpha }_{1} & \\cdots &t + {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equck.gif) We immediately run into a problem as we do not know if some or all of α0,..., α n − 1 are zero. 
Thus, we proceed without interchanging rows: ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cccc} - t & 1 &\\cdots & 0\\\\ 0 & - t & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - t - {\\alpha }_{n-1} \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{cccc} - t& 1 &\\cdots & 0\\\\ 0 & - t & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ 0 & - {\\alpha }_{ 1} -\\frac{{\\alpha }_{0}} {t} &\\cdots & - {\\alpha }_{n-1} - t \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{ccccc} - t& 1 & &\\cdots & 0\\\\ 0 & - t & 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & & \\ddots & 1\\\\ 0 & 0 & - {\\alpha }_{ 2} -\\frac{{\\alpha }_{1}} {t} -\\frac{{\\alpha }_{0}} {{t}^{2}} & \\cdots &{\\alpha }_{n-1} - t \\end{array} \\right \]& \\\\ & \\vdots & \\\\ & \\left \[\\begin{array}{ccccc} t & - 1& &\\cdots & 0\\\\ 0 & t & - 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & & \\ddots & - 1 \\\\ 0& 0 & 0 &\\cdots &t + {\\alpha }_{n-1} + \\frac{{\\alpha }_{n-2}} {t} + \\cdots+ \\frac{{\\alpha }_{1}} {{t}^{n-2}} + \\frac{{\\alpha }_{0}} {{t}^{n-1}} \\end{array} \\right \]& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ51.gif) Note that t = 0 is the only value that might give us trouble. In case t = 0, we note that there cannot be a nontrivial kernel unless α0 = 0. Thus, λ = 0 is an eigenvalue if and only if α0 = 0. Fortunately, this gets built into our characteristic polynomial. After multiplying the diagonal entries together, we have ![ $$\\begin{array}{rcl} p\\left \(t\\right \)& =& {t}^{n-1}\\left \(t + {\\alpha }_{ n-1} + \\frac{{\\alpha }_{n-2}} {t} + \\cdots+ \\frac{{\\alpha }_{1}} {{t}^{n-2}} + \\frac{{\\alpha }_{0}} {{t}^{n-1}}\\right \) \\\\ & =& \\left \({t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + {\\alpha }_{ n-2}{t}^{n-2} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}\\right \), \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ52.gif) where λ = 0 is a root precisely when α0 = 0 as hoped for.
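The conclusion can be checked without any row reduction: for the matrix A above, each root λ of p(t) should admit the eigenvector (1, λ, …, λ^{n−1}). A small sketch with our own helper names, picking p(t) = (t − 1)(t − 2)(t − 3) so the roots are known in advance:

```python
def companion(alphas):
    """The matrix A from the text for p(t) = t^n + a_{n-1} t^{n-1} + ... + a_0,
    where alphas = [a_0, ..., a_{n-1}]."""
    n = len(alphas)
    rows = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
    rows.append([-a for a in alphas])   # last row: -a_0, ..., -a_{n-1}
    return rows

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# p(t) = (t - 1)(t - 2)(t - 3) = t^3 - 6t^2 + 11t - 6
A = companion([-6, 11, -6])
for lam in (1, 2, 3):
    v = [lam ** i for i in range(3)]            # (1, λ, λ²)
    assert matvec(A, v) == [lam * vi for vi in v]   # A v = λ v, exactly
```

The assertion A·(1, λ, λ²) = λ·(1, λ, λ²) reduces in the last coordinate to λ³ − 6λ² + 11λ − 6 = 0, i.e., p(λ) = 0, which matches p(t) being the characteristic polynomial.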
Finally, we see that p(t) = 0 is, up to sign, our old characteristic equation for p(D) = 0. In Proposition 2.6.3, we shall compute the characteristic polynomial for the transpose of A using only the techniques from Theorem 2.3.6. There are a few useful facts that can help us find roots of polynomials. Proposition 2.3.11. Let A ∈ Mat n×n (ℂ) and ![ $${\\chi }_{A}\\left \(t\\right \) = {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \).$$ ](A81414_1_En_2_Chapter_Equcl.gif) 1. ![ $$\\mathrm{tr}A = {\\lambda }_{1} + \\cdots+ {\\lambda }_{n} = -{\\alpha }_{n-1}$$ ](A81414_1_En_2_Chapter_IEq164.gif). 2. ![ $${\\lambda }_{1}\\cdots {\\lambda }_{n} ={ \\left \(-1\\right \)}^{n}{\\alpha }_{0}$$ ](A81414_1_En_2_Chapter_IEq165.gif). 3. If χ A (t) ∈ ℝ[t] and λ ∈ ℂ is a root, then ![ $$\\bar{\\lambda }$$ ](A81414_1_En_2_Chapter_IEq166.gif) is also a root. In particular, the number of real roots is even, respectively odd, if n is even, respectively odd. 4. If χ A (t) ∈ ℝ[t], n is even, and α 0 < 0, then there are at least two real roots, one negative and one positive. 5. If χ A (t) ∈ ℝ[t] and n is odd, then there is at least one real root, whose sign is opposite to that of α 0 . 6. If χ A (t) ∈ ℤ[t], then all rational roots are in fact integers that divide α 0 . Proof. The proof of (3) follows from the fact that when the coefficients of χ A are real, then ![ $$\\overline{{\\chi }_{A}\\left \(t\\right \)} = {\\chi }_{A}\\left \(\\bar{t}\\right \)$$ ](A81414_1_En_2_Chapter_IEq167.gif). The proofs of (4) and (5) follow from the intermediate value theorem. Simply note that χ A (0) = α 0 and that χ A (t) -> ∞ as t -> ∞ while ![ $${\\left \(-1\\right \)}^{n}{\\chi }_{A}\\left \(t\\right \) \\rightarrow \\infty $$ ](A81414_1_En_2_Chapter_IEq168.gif) as t -> − ∞.
For the first two facts, note that the relationship ![ $${t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)$$ ](A81414_1_En_2_Chapter_Equcm.gif) shows that ![ $$\\begin{array}{rcl}{ \\lambda }_{1} + \\cdots+ {\\lambda }_{n}& =& -{\\alpha }_{n-1}, \\\\ {\\lambda }_{1}\\cdots {\\lambda }_{n}& =&{ \\left \(-1\\right \)}^{n}{\\alpha }_{ 0}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ53.gif) This establishes (2) and part of (1). Finally, the relation ![ $$\\mathrm{tr}A = {\\lambda }_{1} + \\cdots+ {\\lambda }_{n}$$ ](A81414_1_En_2_Chapter_IEq169.gif) will be established when we can prove that complex matrices are similar to upper triangular matrices (see also Exercise 13 in Sect. 2.7). In other words, we will show that one can find B ∈ Gl n ℂ such that B − 1 AB is upper triangular (see Sect. 4.8 or 2.8). We then observe that A and B − 1 AB have the same eigenvalues as Ax = λx if and only if ![ $${B}^{-1}AB\\left \({B}^{-1}x\\right \) = \\lambda \\left \({B}^{-1}x\\right \)$$ ](A81414_1_En_2_Chapter_IEq170.gif). However, as the eigenvalues for the upper triangular matrix B − 1 AB are precisely the diagonal entries, we see that ![ $$\\begin{array}{rcl}{ \\lambda }_{1} + \\cdots+ {\\lambda }_{n}& =& \\mathrm{tr}\\left \({B}^{-1}AB\\right \) \\\\ & =& \\mathrm{tr}\\left \(AB{B}^{-1}\\right \) \\\\ & =& \\mathrm{tr}\\left \(A\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ54.gif) Another proof of ![ $$\\mathrm{tr}A = -{\\alpha }_{n-1}$$ ](A81414_1_En_2_Chapter_IEq171.gif) that works for all fields is presented below in the exercises to Sect. 2.7. 
For (6), let p ∕ q be a rational root in reduced form, then ![ $${\\left \(\\frac{p} {q}\\right \)}^{n} + \\cdots+ {\\alpha }_{ 1}\\left \(\\frac{p} {q}\\right \) + {\\alpha }_{0} = 0,$$ ](A81414_1_En_2_Chapter_Equcn.gif) and ![ $$\\begin{array}{rcl} 0& =& {p}^{n} + \\cdots+ {\\alpha }_{ 1}p{q}^{n-1} + {\\alpha }_{ 0}{q}^{n} \\\\ & =& {p}^{n} + q\\left \({\\alpha }_{ n-1}{p}^{n-1} + \\cdots+ {\\alpha }_{ 1}p{q}^{n-2} + {\\alpha }_{ 0}{q}^{n-1}\\right \) \\\\ & =& p\\left \({p}^{n-1} + \\cdots+ {\\alpha }_{ 1}{q}^{n-1}\\right \) + {\\alpha }_{ 0}{q}^{n}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ55.gif) Thus, q divides p n and p divides α 0 q n . Since p and q have no divisors in common, this forces q = ±1 and p to divide α 0 , so the root p ∕ q is an integer that divides α 0 . □ ### 2.3.1 Exercises 1. Find the characteristic polynomial and if possible the eigenvalues and eigenvectors for each of the following matrices: (a) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equco.gif) (b) ![ $$\\left \[\\begin{array}{lll} 0&\\quad 1&\\quad 2\\\\ 1 &\\quad 0 &\\quad 3 \\\\ 2&\\quad 3&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcp.gif) (c) ![ $$\\left \[\\begin{array}{lll} 0 &\\quad 1 &\\quad 2\\\\ - 1 &\\quad 0 &\\quad 3 \\\\ - 2&\\quad - 3&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcq.gif) 2. Find the characteristic polynomial and if possible eigenvalues and eigenvectors for each of the following matrices: (a) ![ $$\\left \[\\begin{array}{cc} 0& \\quad i\\\\ i &\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcr.gif) (b) ![ $$\\left \[\\begin{array}{cc} 0 & \\quad i\\\\ - i &\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcs.gif) (c) ![ $$\\left \[\\begin{array}{lll} 1&\\quad i &\\quad 0\\\\ i &\\quad 1 &\\quad 0 \\\\ 0&\\quad 2&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equct.gif) 3.
Find the eigenvalues for the following matrices with a minimum of calculations (try not to compute the characteristic polynomial): (a) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 0 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcu.gif) (b) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcv.gif) (c) ![ $$\\left \[\\begin{array}{lll} 0&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equcw.gif) 4. Find the characteristic polynomial, eigenvalues, and eigenvectors for each of the following linear operators L : P 3 -> P 3: (a) L = D. (b) ![ $$L = tD = T \\circ D$$ ](A81414_1_En_2_Chapter_IEq172.gif). (c) ![ $$L = {D}^{2} + 2D + {1}_{{P}_{3}}$$ ](A81414_1_En_2_Chapter_IEq173.gif). (d) ![ $$L = {t}^{2}{D}^{3} + D$$ ](A81414_1_En_2_Chapter_IEq174.gif). 5. Let p ∈ ℂ[t] be a monic polynomial. Show that the characteristic polynomial for D : ker(p(D)) -> ker(p(D)) is p(t). (To clarify the notation, see Exercise 12 in Sect. 2.2.) 6. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq175.gif) is upper or lower triangular and let ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq176.gif). Show that μ is an eigenvalue for p(A) if and only if μ = p(λ) where λ is an eigenvalue for A. (Hint: See Exercise 7 in Sect. 1.6.) 7. Let L : V -> V be a linear operator on a complex vector space. Assume that we have a polynomial p ∈ ℂ[t] such that p(L) = 0. Show that all eigenvalues of L are roots of p. 8. Let L : V -> V be a linear operator and K : W -> V an isomorphism. Show that L and K − 1 ∘ L ∘ K have the same eigenvalues. 9. Let K : V -> W and L : W -> V be two linear maps. (a) Show that K ∘ L and L ∘ K have the same nonzero eigenvalues.
Hint: If x ∈ V is an eigenvector for L ∘ K, then Kx ∈ W is an eigenvector for K ∘ L. (b) Give an example where 0 is an eigenvalue for L ∘ K but not for K ∘ L. Hint: Try to have different dimensions for V and W. (c) If dimV = dimW, then (a) also holds for the zero eigenvalue. Hint: From Exercise 5 in Sect. 1.11, use that ![ $$\\begin{array}{rcl} \\dim \\left \(\\ker \\left \(K \\circ L\\right \)\\right \)& \\geq &\\max \\left \\{\\dim \\left \(\\ker \\left \(L\\right \)\\right \),\\dim \\left \(\\ker \\left \(K\\right \)\\right \)\\right \\}, \\\\ \\dim \\left \(\\ker \\left \(L \\circ K\\right \)\\right \)& \\geq &\\max \\left \\{\\dim \\left \(\\ker \\left \(L\\right \)\\right \),\\dim \\left \(\\ker \\left \(K\\right \)\\right \)\\right \\}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ56.gif) 10. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq177.gif). (a) Show that A and A t have the same eigenvalues and that for each eigenvalue λ, we have ![ $$\\dim \\left \(\\ker \\left \(A - \\lambda {1}_{{\\mathbb{F}}^{n}}\\right \)\\right \) =\\dim \\left \(\\ker \\left \({A}^{t} - \\lambda {1}_{{ \\mathbb{F}}^{n}}\\right \)\\right \).$$ ](A81414_1_En_2_Chapter_Equcx.gif) (b) Show by example that A and A t need not have the same eigenvectors. 11. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq178.gif). Consider the following two linear operators on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \) : {L}_{A}\\left \(X\\right \) = AX$$ ](A81414_1_En_2_Chapter_IEq179.gif) and R A X = XA (see Example 1.7.6). (a) Show that λ is an eigenvalue for A if and only if λ is an eigenvalue for L A . (b) Show that ![ $${\\chi }_{{L}_{A}}\\left \(t\\right \) ={ \\left \({\\chi }_{A}\\left \(t\\right \)\\right \)}^{n}$$ ](A81414_1_En_2_Chapter_IEq180.gif). (c) Show that λ is an eigenvalue for A t if and only if λ is an eigenvalue for R A . 
(d) Relate ![ $${\\chi }_{{A}^{t}}\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_IEq181.gif) and ![ $${\\chi }_{{R}_{A}}\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_IEq182.gif). 12. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq183.gif) and ![ $$B \\in \\mathrm{{ Mat}}_{m\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq184.gif) and consider ![ $$\\begin{array}{rcl} L& : & \\mathrm{{Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \), \\\\ L\\left \(X\\right \)& =& AX - XB.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ57.gif) (a) Show that if A and B have a common eigenvalue, then L has nontrivial kernel. Hint: Use that B and B t have the same eigenvalues. (b) Show more generally that if λ is an eigenvalue of A and μ an eigenvalue for B, then λ − μ is an eigenvalue for L. 13. Find the characteristic polynomial, eigenvalues, and eigenvectors for ![ $$A = \\left \[\\begin{array}{cc} \\alpha &\\quad - \\beta \\\\ \\beta& \\quad \\alpha \\end{array} \\right \],\\alpha,\\beta\\in\\mathbb{R}.$$ ](A81414_1_En_2_Chapter_Equcy.gif) as a map A : ℂ 2 -> ℂ 2. 14. Show directly, using the methods developed in this section, that the characteristic polynomial for a 3 × 3 matrix has degree 3. 15. Let ![ $$A = \\left \[\\begin{array}{cc} a&\\quad b\\\\ c &\\quad d\\end{array} \\right \],a,b,c,d \\in\\mathbb{R}$$ ](A81414_1_En_2_Chapter_Equcz.gif) Show that the eigenvalues are either both real or are complex conjugates of each other. 16. Show that the eigenvalues of ![ $$\\left \[\\begin{array}{cc} a&b\\\\ \\bar{b} &d\\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_IEq185.gif) where a, d ∈ ℝ and b ∈ ℂ, are real. 17. Show that the eigenvalues of ![ $$\\left \[\\begin{array}{cc} ia& - b\\\\ \\bar{b} & id\\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_IEq186.gif) where a, d ∈ ℝ and b ∈ ℂ, are purely imaginary. 18.
Show that the eigenvalues of ![ $$\\left \[\\begin{array}{cc} a& -\\bar{ b}\\\\ b & \\bar{a}\\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_IEq187.gif) where a, b ∈ ℂ and ![ $${\\left \\vert a\\right \\vert }^{2} +{ \\left \\vert b\\right \\vert }^{2} = 1,$$ ](A81414_1_En_2_Chapter_IEq188.gif) are complex numbers of unit length. 19. Let ![ $$A = \\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1}\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equda.gif) (a) Show that all eigenspaces are one-dimensional. (b) Show that ker A ≠ {0} if and only if α0 = 0. 20. Let ![ $$\\begin{array}{rcl} p\\left \(t\\right \)& =& \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \) \\\\ & =& {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ58.gif) where ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{n} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq189.gif). Show that there is a change of basis such that ![ $$\\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1}\\end{array} \\right \] = B\\left \[\\begin{array}{cccc} {\\lambda }_{1} & 1 && 0 \\\\ 0 &{\\lambda }_{2} & \\ddots \\\\ & & \\ddots & 1\\\\ 0 & & &{\\lambda }_{ n}\\end{array} \\right \]{B}^{-1}.$$ ](A81414_1_En_2_Chapter_Equdb.gif) Hint: Try n = 2, 3, assume that B is lower triangular with 1s on the diagonal, or alternately use Exercise 9 in Sect. 2.2. 21. Show that (a) The multiplication operator T : C ∞ (ℝ, ℝ) -> C ∞ (ℝ, ℝ) does not have any eigenvalues. Recall that T(f)(t) = t ⋅ f(t). (b) Show that the differential operator D : ℂ[t] -> ℂ[t] only has 0 as an eigenvalue. (c) Show that D : C ∞ (ℝ, ℝ) -> C ∞ (ℝ, ℝ) has all real numbers as eigenvalues.
(d) Show that D : C ∞ (ℝ, ℂ) -> C ∞ (ℝ, ℂ) has all complex numbers as eigenvalues. ## 2.4 The Minimal Polynomial The minimal polynomial of a linear operator is, unlike the characteristic polynomial, fairly easy to define rigorously. It is, however, not as easy to calculate. On the other hand, the minimal polynomial contains so much useful information that it would be a shame to ignore it. See also Sect. 1.12 for a preliminary discussion of the minimal polynomial. Recall that projections are characterized by a very simple polynomial relationship ![ $${L}^{2} - L = 0$$ ](A81414_1_En_2_Chapter_IEq190.gif). The purpose of this section is to find a polynomial p(t) for a linear operator L : V -> V such that p(L) = 0. This polynomial will, like the characteristic polynomial, also have the property that its roots are the eigenvalues of L. In subsequent sections, we shall study in more depth the properties of linear operators based on knowledge of the minimal polynomial. Before passing on to the abstract constructions, let us consider two examples. Example 2.4.1. An involution is a linear operator L : V -> V such that L 2 = 1 V . This means that p(L) = 0 if ![ $$p\\left \(t\\right \) = {t}^{2} - 1$$ ](A81414_1_En_2_Chapter_IEq191.gif). Our first observation is that this relationship implies that L is invertible and that ![ $${L}^{-1} = L$$ ](A81414_1_En_2_Chapter_IEq192.gif). Next, note that any eigenvalue must satisfy λ2 = 1 and hence be a root of p. It is possible to glean even more information out of this polynomial relationship. We claim that L is diagonalizable, i.e., V has a basis of eigenvectors for L; in fact ![ $$V =\\ker \\left \(L - {1}_{V }\\right \) \\oplus \\ker \\left \(L + {1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equdc.gif) First, we observe that these spaces have trivial intersection as they are eigenspaces for different eigenvalues.
If ![ $$x \\in \\ker \\left \(L - {1}_{V }\\right \) \\cap \\ker \\left \(L + {1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_IEq193.gif) then ![ $$-x = L\\left \(x\\right \) = x$$ ](A81414_1_En_2_Chapter_Equdd.gif) so x = 0. To show that ![ $$V =\\ker \\left \(L - {1}_{V }\\right \) +\\ker \\left \(L + {1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_Equde.gif) we observe that any x ∈ V can be written as ![ $$x = \\frac{1} {2}\\left \(x - L\\left \(x\\right \)\\right \) + \\frac{1} {2}\\left \(x + L\\left \(x\\right \)\\right \).$$ ](A81414_1_En_2_Chapter_Equdf.gif) Next, we see that ![ $$\\begin{array}{rcl} L\\left \(x \\pm L\\left \(x\\right \)\\right \)& =& L\\left \(x\\right \) \\pm{L}^{2}\\left \(x\\right \) \\\\ & =& L\\left \(x\\right \) \\pm x \\\\ & =& \\pm \\left \(x \\pm L\\left \(x\\right \)\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ59.gif) Thus, ![ $$x + L\\left \(x\\right \) \\in \\ker \\left \(L - {1}_{V }\\right \)$$ ](A81414_1_En_2_Chapter_IEq194.gif) and ![ $$x - L\\left \(x\\right \) \\in \\ker \\left \(L + {1}_{V }\\right \)$$ ](A81414_1_En_2_Chapter_IEq195.gif). This proves the claim. Example 2.4.2. Consider a linear operator L : V -> V such that ![ $${\\left \(L - {1}_{V }\\right \)}^{2} = 0$$ ](A81414_1_En_2_Chapter_IEq196.gif). This relationship implies that 1 is the only possible eigenvalue. Therefore, if L is diagonalizable, then L = 1 V and hence also satisfies the simpler relationship ![ $$L - {1}_{V } = 0$$ ](A81414_1_En_2_Chapter_IEq197.gif). Thus, L is not diagonalizable unless it is the identity map. By multiplying out the polynomial relationship, we obtain ![ $${L}^{2} - 2L + {1}_{ V } = 0.$$ ](A81414_1_En_2_Chapter_Equdg.gif) This implies that ![ $$\\left \(2 \\cdot{1}_{V } - L\\right \)L = {1}_{V }.$$ ](A81414_1_En_2_Chapter_Equdh.gif) Hence, L is invertible with ![ $${L}^{-1} = 2 \\cdot{1}_{V } - L$$ ](A81414_1_En_2_Chapter_IEq198.gif). 
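The decomposition in Example 2.4.1 is easy to check on a concrete involution, say the coordinate swap on 𝔽². A small sketch (the matrix and helper names are ours, chosen only for illustration):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

L = [[0, 1], [1, 0]]        # coordinate swap: an involution, L∘L = 1
x = [3.0, 5.0]
Lx = matvec(L, x)

plus = [(xi + yi) / 2 for xi, yi in zip(x, Lx)]    # (x + L(x))/2, in ker(L - 1)
minus = [(xi - yi) / 2 for xi, yi in zip(x, Lx)]   # (x - L(x))/2, in ker(L + 1)

assert [p + m for p, m in zip(plus, minus)] == x   # x is recovered
assert matvec(L, plus) == plus                     # eigenvalue +1
assert matvec(L, minus) == [-m for m in minus]     # eigenvalue -1
```

This mirrors the proof exactly: the two components are eigenvectors for +1 and −1, and they sum back to x.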
These two examples, together with our knowledge of projections, tell us that one can get a tremendous amount of information from knowing that an operator satisfies a polynomial relationship. To commence our more abstract developments, we start with a very simple observation.

Proposition 2.4.3. Let L : V -> V be a linear operator and ![ $$p\\left \(t\\right \) = {t}^{k} + {\\alpha }_{ k-1}{t}^{k-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_Equdi.gif) a polynomial such that ![ $$p\\left \(L\\right \) = {L}^{k} + {\\alpha }_{ k-1}{L}^{k-1} + \\cdots+ {\\alpha }_{ 1}L + {\\alpha }_{0}{1}_{V } = 0.$$ ](A81414_1_En_2_Chapter_Equdj.gif) (1) All eigenvalues for L are roots of p(t). (2) If p(0) = α_0 ≠ 0, then L is invertible and ![ $${L}^{-1} = \\frac{-1} {{\\alpha }_{0}} \\left \({L}^{k-1} + {\\alpha }_{ k-1}{L}^{k-2} + \\cdots+ {\\alpha }_{ 1}{1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equdk.gif) To begin with, it would be nice to find a polynomial ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq199.gif) such that both of the above properties become bi-implications. In other words, ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq200.gif) is an eigenvalue for L if and only if p(λ) = 0, and L is invertible if and only if p(0) ≠ 0. It turns out that the characteristic polynomial does have this property, but there is a polynomial that has even more information as well as being much easier to define.
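Part (2) of Proposition 2.4.3 gives an explicit inverse whenever an annihilating polynomial has nonzero constant term. A small numerical sketch, where the 2×2 matrix and its annihilating polynomial t^2 - 5t + 6 are our own illustrative choices:

```python
import numpy as np

# L is annihilated by p(t) = t^2 - 5t + 6, and p(0) = 6 != 0, so
# Proposition 2.4.3 (2) gives L^{-1} = (-1/alpha_0)(L + alpha_1 1_V)
# with alpha_1 = -5 and alpha_0 = 6.
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

assert np.allclose(L @ L - 5 * L + 6 * I, 0)   # p(L) = 0

L_inv = (-1.0 / 6.0) * (L - 5.0 * I)
assert np.allclose(L @ L_inv, I)
assert np.allclose(L_inv @ L, I)
```

Note that the formula only uses matrix products and sums; no Gaussian elimination is needed once an annihilating polynomial is known.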
One defect of the characteristic polynomial can be seen by considering the two matrices ![ $$\\left \[\\begin{array}{cc} 1&\\quad 0\\\\ 0 &\\quad 1 \\end{array} \\right \],\\left \[\\begin{array}{cc} 1&\\quad 1\\\\ 0 &\\quad 1 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equdl.gif) They clearly have the same characteristic polynomial ![ $$p\\left \(t\\right \) ={ \\left \(t - 1\\right \)}^{2}$$ ](A81414_1_En_2_Chapter_IEq201.gif), but only the first matrix is diagonalizable.

Definition 2.4.4. We define the minimal polynomial μ_L(t) for a linear operator L : V -> V on a finite-dimensional vector space in the following way. Consider 1_V, L, L^2, ..., L^k, ... ∈ Hom(V, V). Since Hom(V, V) is finite-dimensional, we can use Lemma 1.12.3 to find a smallest k ≥ 1 such that L^k is a linear combination of 1_V, L, L^2, ..., L^{k-1}: ![ $$\\begin{array}{rcl}{ L}^{k}& =& -\\left \({\\alpha }_{ 0}{1}_{V } + {\\alpha }_{1}L + {\\alpha }_{2}{L}^{2} + \\cdots+ {\\alpha }_{ k-1}{L}^{k-1}\\right \),\\text{ or} \\\\ 0& =& {L}^{k} + {\\alpha }_{ k-1}{L}^{k-1} + \\cdots+ {\\alpha }_{ 1}L + {\\alpha }_{0}{1}_{V }.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ60.gif) The minimal polynomial of L is defined as ![ $${\\mu }_{L}\\left \(t\\right \) = {t}^{k} + {\\alpha }_{ k-1}{t}^{k-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}.$$ ](A81414_1_En_2_Chapter_Equdm.gif) The first interesting thing to note is that the minimal polynomial for L = 1_V is given by ![ $${\\mu }_{{1}_{V }}\\left \(t\\right \) = t - 1$$ ](A81414_1_En_2_Chapter_IEq202.gif). Hence, it is not the characteristic polynomial. The name "minimal" is justified by the next proposition.

Proposition 2.4.5. Let L : V -> V be a linear operator on a finite-dimensional space. (1) If ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq203.gif) satisfies p(L) = 0, then deg p ≥ deg μ_L.
(2) If ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq204.gif) satisfies p(L) = 0 and deg p = deg μ_L, then p(t) = α ⋅ μ_L(t) for some ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq205.gif). Proof. (1) Assume that p ≠ 0 and p(L) = 0; then ![ $$\\begin{array}{rcl} p\\left \(L\\right \)& =& {\\alpha }_{m}{L}^{m} + {\\alpha }_{ m-1}{L}^{m-1} + \\cdots+ {\\alpha }_{ 1}L + {\\alpha }_{0}{1}_{V } \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ61.gif) If α_m ≠ 0, then L^m is a linear combination of lower-order terms, and hence m ≥ deg μ_L. (2) In case ![ $$m =\\deg \\left \({\\mu }_{L}\\right \) = k$$ ](A81414_1_En_2_Chapter_IEq206.gif), we have that 1_V, L, ..., L^{k-1} are linearly independent. Thus, there is only one way in which to make L^k into a linear combination of 1_V, L, ..., L^{k-1}. This implies the claim. □ Before discussing further properties of the minimal polynomial, let us try to compute it for some simple matrices. See also Sect. 1.12 for similar examples.

Example 2.4.6.
Let ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{cc} \\lambda &\\quad 1\\\\ 0 &\\quad \\lambda\\end{array} \\right \] \\\\ B& =& \\left \[\\begin{array}{ccc} \\lambda &\\quad 0 & \\quad 0\\\\ 0 &\\quad \\lambda& \\quad 1 \\\\ 0 & \\quad 0 &\\quad \\lambda\\end{array} \\right \] \\\\ C& =& \\left \[\\begin{array}{ccc} 0&\\quad - 1&\\quad 0\\\\ 1 & \\quad 0 &\\quad 0 \\\\ 0& \\quad 0 & \\quad i \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ62.gif) We note that A is not proportional to 1 V , while ![ $$\\begin{array}{rcl}{ A}^{2}& =&{ \\left \[\\begin{array}{cc} \\lambda &\\quad 1 \\\\ 0 &\\quad \\lambda\\end{array} \\right \]}^{2} \\\\ & =& \\left \[\\begin{array}{cc} {\\lambda }^{2} & \\quad 2\\lambda\\\\ 0 & \\quad {\\lambda }^{2} \\end{array} \\right \] \\\\ & =& 2\\lambda \\left \[\\begin{array}{cc} \\lambda &\\quad 1\\\\ 0 &\\lambda\\end{array} \\right \] - {\\lambda }^{2}\\left \[\\begin{array}{cc} 1&\\quad 0 \\\\ 0&\\quad 1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ63.gif) Thus, ![ $${\\mu }_{A}\\left \(t\\right \) = {t}^{2} - 2\\lambda t + {\\lambda }^{2} ={ \\left \(t - \\lambda \\right \)}^{2}.$$ ](A81414_1_En_2_Chapter_Equdn.gif) The calculation for B is similar and evidently yields the same minimal polynomial ![ $${\\mu }_{B}\\left \(t\\right \) = {t}^{2} - 2\\lambda t + {\\lambda }^{2} ={ \\left \(t - \\lambda \\right \)}^{2}.$$ ](A81414_1_En_2_Chapter_Equdo.gif) Finally, for C, we note that ![ $${C}^{2} = \\left \[\\begin{array}{ccc} - 1& \\quad 0 & \\quad 0\\\\ 0 &\\quad - 1 & \\quad 0 \\\\ 0 & \\quad 0 &\\quad - 1 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equdp.gif) Thus, ![ $${\\mu }_{C}\\left \(t\\right \) = {t}^{2} + 1.$$ ](A81414_1_En_2_Chapter_Equdq.gif) The next proposition shows that the minimal polynomial contains much of the information that we usually get from the characteristic polynomial. 
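The search in Definition 2.4.4 can also be carried out mechanically: find the smallest k for which A^k depends linearly on the lower powers and read off the coefficients. The sketch below does this with a least-squares solve on flattened matrices; the helper name and the test value λ = 3 are our own choices, and the result matches Example 2.4.6:

```python
import numpy as np

# Brute-force version of Definition 2.4.4 for a matrix A: the powers
# I, A, A^2, ... are flattened into vectors, and we look for the first
# A^k lying in the span of the earlier powers.
def minimal_polynomial(A):
    n = A.shape[0]
    powers = [np.eye(n)]
    for k in range(1, n * n + 1):   # dim Hom(V,V) = n^2 bounds the search
        powers.append(powers[-1] @ A)
        # Try to solve A^k = sum_j c_j A^j with the matrices flattened.
        basis = np.column_stack([P.flatten() for P in powers[:-1]])
        target = powers[-1].flatten()
        coeffs = np.linalg.lstsq(basis, target, rcond=None)[0]
        if np.allclose(basis @ coeffs, target):
            # mu(t) = t^k - c_{k-1} t^{k-1} - ... - c_0; return the
            # coefficients ordered from alpha_0 up to the leading 1.
            return np.append(-coeffs, 1.0)

# The matrix A from Example 2.4.6 with lambda = 3:
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
mu = minimal_polynomial(A)          # expect (t - 3)^2 = t^2 - 6t + 9
assert np.allclose(mu, [9.0, -6.0, 1.0])
```

For the identity matrix the same routine returns the coefficients of t - 1, in line with the remark after Definition 2.4.4.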
In subsequent sections, we shall delve much deeper into the properties of the minimal polynomial and what it tells us about possible matrix representations for L.

Proposition 2.4.7. Let L : V -> V be a linear operator on a finite-dimensional vector space. Then, the following properties for the minimal polynomial hold: (1) If p(L) = 0 for some ![ $$p \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_2_Chapter_IEq207.gif) then μ_L divides p, i.e., p(t) = μ_L(t)q(t) for some ![ $$q\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq208.gif). (2) Let ![ $$\\lambda\\in\\mathbb{F},$$ ](A81414_1_En_2_Chapter_IEq209.gif) then λ is an eigenvalue for L if and only if μ_L(λ) = 0. (3) L is invertible if and only if μ_L(0) ≠ 0. Proof. (1) Assume that p(L) = 0. We know that deg p ≥ deg μ_L, so if we perform polynomial division (the Euclidean Algorithm 2.1.1), then ![ $$p\\left \(t\\right \) = q\\left \(t\\right \){\\mu }_{L}\\left \(t\\right \) + r\\left \(t\\right \),$$ ](A81414_1_En_2_Chapter_IEq210.gif) where deg r < deg μ_L. Substituting L for t gives ![ $$p\\left \(L\\right \) = q\\left \(L\\right \){\\mu }_{L}\\left \(L\\right \) + r\\left \(L\\right \)$$ ](A81414_1_En_2_Chapter_IEq211.gif). Since both p(L) = 0 and μ_L(L) = 0, we also have r(L) = 0. This will give us a contradiction with the definition of the minimal polynomial unless r = 0. Thus, μ_L divides p. (2) We already know that eigenvalues are roots. Conversely, if μ_L(λ) = 0, then we can write ![ $${\\mu }_{L}\\left \(t\\right \) = \\left \(t - \\lambda \\right \)p\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_IEq212.gif). Thus, ![ $$0 = {\\mu }_{L}\\left \(L\\right \) = \\left \(L - \\lambda {1}_{V }\\right \)p\\left \(L\\right \)$$ ](A81414_1_En_2_Chapter_Equdr.gif) As deg p < deg μ_L, we know that p(L) ≠ 0, so the relationship ![ $$\\left \(L - \\lambda {1}_{V }\\right \)p\\left \(L\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq213.gif) shows that L − λ1_V is not invertible.
(3) If μ_L(0) ≠ 0, then we already know that L is invertible. Conversely, suppose that μ_L(0) = 0. Then, 0 is an eigenvalue by (2), and hence L cannot be invertible. □

Example 2.4.8. The derivative map D : P_n -> P_n has ![ $${\\mu }_{D} = {t}^{n+1}$$ ](A81414_1_En_2_Chapter_IEq214.gif). Certainly, D^{n+1} vanishes on P_n as all the polynomials in P_n have degree ≤ n. This means that μ_D(t) = t^k for some k ≤ n + 1. On the other hand, D^n(t^n) = n! ≠ 0, forcing ![ $$k = n + 1$$ ](A81414_1_En_2_Chapter_IEq215.gif).

Example 2.4.9. Let V = span(exp(λ_1 t), ..., exp(λ_n t)), with λ_1, ..., λ_n being distinct, and consider again the derivative map D : V -> V. Then we have D(exp(λ_i t)) = λ_i exp(λ_i t). In Example 1.12.15 (see also Sect. 1.13), it was shown that exp(λ_1 t), ..., exp(λ_n t) form a basis for V. Now observe that ![ $$\\left \(D - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(D - {\\lambda }_{n}{1}_{V }\\right \)\\left \(\\exp \\left \({\\lambda }_{n}\\right \)t\\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equds.gif) By rearranging terms, it follows that also ![ $$\\left \(D - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(D - {\\lambda }_{n}{1}_{V }\\right \)\\left \(\\exp \\left \({\\lambda }_{i}\\right \)t\\right \) = 0$$ ](A81414_1_En_2_Chapter_Equdt.gif) and consequently ![ $$\\left \(D - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(D - {\\lambda }_{n}{1}_{V }\\right \) = 0\\text{ on }V.$$ ](A81414_1_En_2_Chapter_Equdu.gif) On the other hand, ![ $$\\left \(D - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(D - {\\lambda }_{n-1}{1}_{V }\\right \)\\left \(\\exp \\left \({\\lambda }_{n}\\right \)t\\right \)\\neq 0.$$ ](A81414_1_En_2_Chapter_Equdv.gif) This means that μ_D divides ![ $$\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)$$ ](A81414_1_En_2_Chapter_IEq216.gif) and that it cannot divide ![ $$\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n-1}\\right \)$$ ](A81414_1_En_2_Chapter_IEq217.gif).
Since the order of the λs is irrelevant, this shows that μ_D(t) = (t − λ_1)⋯(t − λ_n), as μ_D cannot divide ![ $$\\frac{\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)} {t - {\\lambda }_{i}}.$$ ](A81414_1_En_2_Chapter_Equdw.gif) Finally, let us compute the minimal polynomials in two interesting and somewhat tricky situations.

Proposition 2.4.10. The minimal polynomial for ![ $$A = \\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equdx.gif) is given by ![ $${\\mu }_{A}\\left \(t\\right \) = {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}.$$ ](A81414_1_En_2_Chapter_Equdy.gif) Proof. It turns out to be easier to calculate the minimal polynomial for the transpose ![ $$B = {A}^{t} = \\left \[\\begin{array}{ccccc} 0&0&\\cdots &0& - {\\alpha }_{0} \\\\ 1&0&\\cdots &0& - {\\alpha }_{1} \\\\ 0&1&\\cdots &0& - {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0&0&\\cdots &1& - {\\alpha }_{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equdz.gif) and it is not hard to show that a matrix and its transpose have the same minimal polynomial by noting that, if ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq218.gif), then ![ $${\\left \(p\\left \(A\\right \)\\right \)}^{t} = p\\left \({A}^{t}\\right \)$$ ](A81414_1_En_2_Chapter_Equea.gif) (see Exercise 3 in this section). Let ![ $$p\\left \(t\\right \) = {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}.$$ ](A81414_1_En_2_Chapter_Equeb.gif) We claim that μ_B(t) = p(t) = χ_A(t). Recall from Example 2.3.10 that we already know that χ_A(t) = p(t). To prove the claim for μ_B, first note that e_k = B(e_{k-1}) for k = 2, ..., n, showing that e_k = B^{k-1}(e_1) for k = 2, ..., n.
Thus, the vectors e 1, Be 1,..., B n − 1 e 1 are linearly independent. This shows that ![ $${1}_{{\\mathbb{F}}^{n}},$$ ](A81414_1_En_2_Chapter_IEq219.gif) B,..., B n − 1 must also be linearly independent. Next, we can also show that pB = 0. This is because ![ $$\\begin{array}{rcl} p\\left \(B\\right \)\\left \({e}_{k}\\right \)& =& p\\left \(B\\right \) \\circ{B}^{k-1}\\left \({e}_{ 1}\\right \) \\\\ & =& {B}^{k-1} \\circ p\\left \(B\\right \)\\left \({e}_{ 1}\\right \) \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ64.gif) and pBe 1 = 0 since ![ $$\\begin{array}{rcl} {24.0pt} p\\left \(B\\right \)\\left \({e}_{1}\\right \)& =& \\left \({\\left \(B\\right \)}^{n} + {\\alpha }_{ n-1}{\\left \(B\\right \)}^{n-1} + \\cdots+ {\\alpha }_{ 1}B + {\\alpha }_{0}{1}_{{\\mathbb{F}}^{n}}\\right \){e}_{1} \\\\ & =&{ \\left \(B\\right \)}^{n}\\left \({e}_{ 1}\\right \) + {\\alpha }_{n-1}{\\left \(B\\right \)}^{n-1}\\left \({e}_{ 1}\\right \) + \\cdots+ {\\alpha }_{1}B\\left \({e}_{1}\\right \) + {\\alpha }_{0}{1}_{{\\mathbb{F}}^{n}}\\left \({e}_{1}\\right \) \\\\ & =& B{e}_{n} + {\\alpha }_{n-1}{e}_{n} + \\cdots+ {\\alpha }_{1}{e}_{2} + {\\alpha }_{0}{e}_{1} \\\\ & =& -{\\alpha }_{0}{e}_{1} - {\\alpha }_{1}{e}_{2} -\\cdots- {\\alpha }_{n-1}{e}_{n} \\\\ & & +{\\alpha }_{n-1}{e}_{n} + \\cdots+ {\\alpha }_{1}{e}_{2} + {\\alpha }_{0}{e}_{1} \\\\ & =& 0. {240.0pt} \\square \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ65.gif) Next, we show Proposition 2.4.11. The minimal polynomial for ![ $$C = \\left \[\\begin{array}{cccc} {\\lambda }_{1} & 1 && 0 \\\\ 0 &{\\lambda }_{2} & \\ddots \\\\ & & \\ddots & 1\\\\ 0 & & &{\\lambda }_{ n} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equec.gif) is given by ![ $${\\mu }_{C}\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \).$$ ](A81414_1_En_2_Chapter_Equed.gif) Proof. 
One strategy would be to show that C has the same minimal polynomial as A in the previous proposition (see also Exercise 20 in Sect. 2.3). But we can also prove the claim directly. Define α_0, ..., α_{n-1} by ![ $$p\\left \(t\\right \) = {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \).$$ ](A81414_1_En_2_Chapter_Equee.gif) The claim is then established directly by first showing that p(C) = 0. This will imply that μ_C divides p. We then just need to show that q_i(C) ≠ 0, where ![ $${q}_{i}\\left \(t\\right \) = \\frac{p\\left \(t\\right \)} {t - {\\lambda }_{i}}.$$ ](A81414_1_En_2_Chapter_Equef.gif) The key observation for these facts follows from knowing how to multiply certain upper triangular matrices: ![ $$\\left \[\\begin{array}{cccc} 0& \\quad 1 & \\quad 0\\\\ 0 &\\quad {\\gamma }_{ 2} & \\quad 1\\\\ 0 & \\quad 0 &\\quad {\\gamma }_{ 3} & \\quad \\ddots\\\\ & & &\\ddots \\end{array} \\right \]\\left \[\\begin{array}{cccc} {\\delta }_{1} & \\quad 1& \\quad 0 \\\\ 0 &\\quad 0& \\quad 1\\\\ 0 &\\quad 0 &\\quad {\\delta }_{ 3} & \\quad \\ddots\\\\ & & &\\ddots \\end{array} \\right \] = \\left \[\\begin{array}{cccc} 0&\\quad 0& \\quad 1 &\\quad 0\\\\ 0 &\\quad 0 & \\quad {_\\ast} \\\\ 0&\\quad 0&\\quad {\\gamma }_{3}{\\delta }_{3}\\\\ \\vdots & \\quad \\vdots & \\quad & \\quad \\ddots \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equeg.gif) ![ $$\\left \[\\begin{array}{cccc} 0&\\quad 0& \\quad 1 &\\quad 0\\\\ 0 &\\quad 0 & \\quad {_\\ast} \\\\ 0&\\quad 0&\\quad {\\gamma }_{3}{\\delta }_{3} \\end{array} \\right \]\\left \[\\begin{array}{cccc} {\\epsilon }_{1} & \\quad 1 &\\quad 0 \\\\ 0 &\\quad {\\epsilon }_{2} & \\quad 1 \\\\ 0 & \\quad 0 &\\quad 0&\\quad 1\\\\ & \\quad & \\quad & \\quad \\ddots \\end{array} \\right \] = \\left \[\\begin{array}{cccc} 0&\\quad 0&\\quad 0& \\quad 1\\\\ 0 &\\quad 0 &\\quad 0 & \\quad {_\\ast} \\\\ 0&\\quad 0&\\quad 0& \\quad {_\\ast}\\\\ \\vdots & \\quad \\vdots & \\quad \\vdots &\\quad {\\gamma }_{ 4}{\\delta }_{4}{\\epsilon }_{4} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equeh.gif) Therefore, when we do the multiplication ![ $$\\left \(C - {\\lambda }_{1}{1}_{{\\mathbb{F}}^{n}}\\right \)\\left \(C - {\\lambda }_{2}{1}_{{\\mathbb{F}}^{n}}\\right \)\\cdots \\left \(C - {\\lambda }_{n}{1}_{{\\mathbb{F}}^{n}}\\right \)$$ ](A81414_1_En_2_Chapter_Equei.gif) by starting from the left, we get that the first k columns are zero in ![ $$\\left \(C - {\\lambda }_{1}{1}_{{\\mathbb{F}}^{n}}\\right \)\\left \(C - {\\lambda }_{2}{1}_{{\\mathbb{F}}^{n}}\\right \)\\cdots \\left \(C - {\\lambda }_{k}{1}_{{\\mathbb{F}}^{n}}\\right \)$$ ](A81414_1_En_2_Chapter_Equej.gif) while the ![ $${\\left \(k + 1\\right \)}^{\\mathrm{th}}$$ ](A81414_1_En_2_Chapter_IEq220.gif) column has 1 as the first entry. Clearly, this shows that p(C) = 0 as well as q_n(C) ≠ 0. By rearranging the λ_i s, this also shows that q_i(C) ≠ 0 for all i = 1, ..., n. □

### 2.4.1 Exercises

1. Find the minimal and characteristic polynomials for ![ $$A = \\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equek.gif)
2. Assume that L : V -> V has an invariant subspace M ⊂ V, i.e., L(M) ⊂ M. Show that ![ $${\\mu }_{L{\\vert }_{M}}$$ ](A81414_1_En_2_Chapter_IEq221.gif) divides μ_L.
3. Show that ![ $${\\mu }_{A}\\left \(t\\right \) = {\\mu }_{{A}^{t}}\\left \(t\\right \),$$ ](A81414_1_En_2_Chapter_IEq222.gif) where A^t is the transpose of ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq223.gif). More abstractly, one can show that a linear operator and its dual have the same minimal polynomials (see Sect. 1.14 for definitions related to dual spaces).
4.
Let L : V -> V be a linear operator on a finite-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq224.gif) and ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq225.gif) a polynomial. Show that ker(p(L)) = {0} if and only if gcd(p, μ_L) = 1.
5. Let L : V -> V be a linear operator on a finite-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq226.gif) and ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq227.gif) a polynomial. Show that if p divides μ_L, then ![ $${\\mu }_{L{\\vert }_{\\mathrm{ker}\\left \(p\\left \(L\\right \)\\right \)}}$$ ](A81414_1_En_2_Chapter_IEq228.gif) divides p.
6. Let L : V -> V be a linear operator such that L^2 + 1_V = 0. (a) If V is a real vector space, show that 1_V and L are linearly independent and that μ_L(t) = t^2 + 1. (b) If V and L are complex, show that 1_V and L need not be linearly independent. (c) Find the possibilities for the minimal polynomial of L^3 + 2L^2 + L + 3 ⋅ 1_V.
7. Assume that L : V -> V has minimal polynomial μ_L(t) = t. Find a matrix representation for L.
8. Assume that L : V -> V has minimal polynomial μ_L(t) = t^3 + 2t + 1. Find a polynomial q(t) of degree ≤ 2 such that L^4 = q(L).
9. Assume that L : V -> V has minimal polynomial μ_L(t) = t^2 + 1. Find a polynomial p(t) such that L^{-1} = p(L).
10. Show that if l ≥ deg μ_L = k, then L^l is a linear combination of 1_V, L, ..., L^{k-1}. If L is invertible, show the same for all l < 0.
11. Let L : V -> V be a linear operator on a finite-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq229.gif) and ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq230.gif) a polynomial. Show ![ $$\\deg {\\mu }_{p\\left \(L\\right \)}\\left \(t\\right \) \\leq \\deg {\\mu }_{L}\\left \(t\\right \).$$ ](A81414_1_En_2_Chapter_Equel.gif)
12. Let p ∈ ℂ[t]. Show that the minimal polynomial for D : ker(p(D)) -> ker(p(D)) is μ_D = p (see also Exercise 5 in Sect.
2.3 and Example 2.3.10).
13. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq231.gif) and consider the two linear operators ![ $${L}_{A},{R}_{A} :\\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq232.gif) defined by L_A(X) = AX and R_A(X) = XA (see also Exercise 11 in Sect. 2.3). Find the minimal polynomials of L_A and R_A given μ_A(t).
14. Consider two matrices A and B. Show that the minimal polynomial for the block diagonal matrix ![ $$\\left \[\\begin{array}{cc} A& \\quad 0\\\\ 0 &\\quad B\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equem.gif) is lcm(μ_A, μ_B) (see Proposition 2.1.5 for the definition of lcm). Generalize this to block diagonal matrices ![ $$\\left \[\\begin{array}{ccc} {A}_{1}\\\\ & \\ddots \\\\ & & {A}_{k}\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equen.gif)

## 2.5 Diagonalizability

In this section, we shall investigate how and when one can find a basis that puts a linear operator L : V -> V into the simplest possible form. In Sect. 2.2, we saw that decoupling a system of differential equations by finding a basis of eigenvectors for a matrix considerably simplifies the problem of solving the differential equations. It is from that setup that we shall take our cue to the simplest form of a linear operator.

Definition 2.5.1. A linear operator L : V -> V on a finite-dimensional vector space over a field ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq233.gif) is said to be diagonalizable if we can find a basis for V that consists of eigenvectors for L, i.e., a basis e_1, ..., e_n for V such that L(e_i) = λ_i e_i and ![ $${\\lambda }_{i} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq234.gif) for all i = 1, ..., n.
This is the same as saying that ![ $$\\left \[\\begin{array}{ccc} L\\left \({e}_{1}\\right \)&\\cdots &L\\left \({e}_{n}\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\lambda }_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\lambda }_{n} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equeo.gif) In other words, the matrix representation for L is a diagonal matrix. One advantage of having a basis that diagonalizes a linear operator L is that it becomes much simpler to calculate the powers L^k, since L^k(e_i) = λ_i^k e_i. More generally, if ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_2_Chapter_IEq235.gif) then we have p(L)(e_i) = p(λ_i)e_i. Thus, p(L) is diagonalized with respect to the same basis and with eigenvalues p(λ_i). We are now ready for a few examples and then the promised application of diagonalizability.

Example 2.5.2. The derivative map D : P_n -> P_n is not diagonalizable. We already know (see Example 1.7.3) that D has a matrix representation that is upper triangular with zeros on the diagonal. Thus, the characteristic polynomial is t^{n+1}, so the only eigenvalue is 0. Therefore, had D been diagonalizable, it would have had to be the zero transformation ![ $${0}_{{P}_{n}}$$ ](A81414_1_En_2_Chapter_IEq236.gif). Since this is not true, we conclude that D : P_n -> P_n is not diagonalizable.

Example 2.5.3. Let V = span(exp(λ_1 t), ..., exp(λ_n t)) and consider again the derivative map D : V -> V. Then, we have D(exp(λ_i t)) = λ_i exp(λ_i t). So if we extract a basis for V among the functions exp(λ_1 t), ..., exp(λ_n t), then we have found a basis of eigenvectors for D. These two examples show that diagonalizability is not just a property of the operator; it really matters what space the operator is restricted to live on. We can exemplify this with matrices as well.

Example 2.5.4.
Consider ![ $$A = \\left \[\\begin{array}{cc} 0&\\quad - 1\\\\ 1 & \\quad 0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equep.gif) As a map A : ℝ^2 -> ℝ^2, this operator cannot be diagonalizable as it rotates vectors. However, as a map A : ℂ^2 -> ℂ^2, it has two eigenvalues ± i with eigenvectors ![ $$\\left \[\\begin{array}{c} 1\\\\ \\mp i \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equeq.gif) As these eigenvectors form a basis for ℂ^2, we conclude that A : ℂ^2 -> ℂ^2 is diagonalizable. We have already seen how decoupling systems of differential equations is related to being able to diagonalize a matrix (see Sect. 2.2). Below we give a related example showing that diagonalizability can be used to investigate a recurrence relation.

Example 2.5.5. Consider the Fibonacci sequence 1, 1, 2, 3, 5, 8, ..., where each term is the sum of the previous two terms. Therefore, if ϕ_n is the nth term in the sequence, then ϕ_{n+2} = ϕ_{n+1} + ϕ_n, with initial values ϕ_0 = 1, ϕ_1 = 1. If we record the elements in pairs ![ $${\\Phi }_{n} = \\left \[\\begin{array}{c} {\\phi }_{n} \\\\ {\\phi }_{n+1} \\end{array} \\right \] \\in{\\mathbb{R}}^{2},$$ ](A81414_1_En_2_Chapter_Equer.gif) then the relationship takes the form ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{c} {\\phi }_{n+1} \\\\ {\\phi }_{n+2} \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]\\left \[\\begin{array}{c} {\\phi }_{n} \\\\ {\\phi }_{n+1} \\end{array} \\right \], \\\\ {\\Phi }_{n+1}& =& A{\\Phi }_{n}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ66.gif) The goal is to find a general formula for ϕ_n and to discover what happens as n -> ∞.
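Before deriving a closed formula, the matrix form of the recurrence can be sanity-checked directly; in the sketch below, the number of iteration steps is an arbitrary choice of ours:

```python
import numpy as np

# Iterate Phi_{n+1} = A Phi_n and check that the first component of
# Phi_n reproduces the Fibonacci sequence.  dtype=object keeps the
# arithmetic exact (Python integers instead of fixed-width ints).
A = np.array([[0, 1],
              [1, 1]], dtype=object)
Phi = np.array([1, 1], dtype=object)     # (phi_0, phi_1)

fibs = [Phi[0]]
for _ in range(10):
    Phi = A @ Phi
    fibs.append(Phi[0])

assert fibs == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```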
The matrix relationship tells us that ![ $$\\begin{array}{rcl} {\\Phi }_{n}& =& {A}^{n}{\\Phi }_{ 0}, \\\\ \\left \[\\begin{array}{c} {\\phi }_{n} \\\\ {\\phi }_{n+1} \\end{array} \\right \]& =&{ \\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]}^{n}\\left \[\\begin{array}{c} 1 \\\\ 1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ67.gif) Thus, we must find a formula for ![ $${\\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]}^{n}.$$ ](A81414_1_En_2_Chapter_Eques.gif) This is where diagonalization comes in handy. The matrix A has characteristic polynomial ![ $${t}^{2} - t - 1 = \\left \(t -\\frac{1 + \\sqrt{5}} {2} \\right \)\\left \(t -\\frac{1 -\\sqrt{5}} {2} \\right \).$$ ](A81414_1_En_2_Chapter_Equet.gif) The corresponding eigenvectors for ![ $$\\frac{1\\pm \\sqrt{5}} {2}$$ ](A81414_1_En_2_Chapter_IEq237.gif) are ![ $$\\left \[\\begin{array}{c} 1 \\\\ \\frac{1\\pm \\sqrt{5}} {2} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_IEq238.gif). 
So ![ $$\\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \] = \\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1+\\sqrt{5}} {2} & 0 \\\\ 0 &\\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equeu.gif) or ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1+\\sqrt{5}} {2} & 0 \\\\ 0 &\\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]{\\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]}^{-1} \\\\ & =& \\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1+\\sqrt{5}} {2} & 0 \\\\ 0 &\\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1} {2} - \\frac{1} {2\\sqrt{5}} & \\frac{1} {\\sqrt{5}} \\\\ \\frac{1} {2} + \\frac{1} {2\\sqrt{5}} & - \\frac{1} {\\sqrt{5}} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ68.gif) This means that ![ $$\\begin{array}{rcl} &{ \\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 1 &\\quad 1 \\end{array} \\right \]}^{n} & \\\\ & \\quad = \\left \[\\begin{array}{cc} 1 & 1 \\\\ \\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]{\\left \[\\begin{array}{cc} \\frac{1+\\sqrt{5}} {2} & 0 \\\\ 0 &\\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]}^{n}\\left \[\\begin{array}{cc} \\frac{1} {2} - \\frac{1} {2\\sqrt{5}} & \\frac{1} {\\sqrt{5}} \\\\ \\frac{1} {2} + \\frac{1} {2\\sqrt{5}} & - \\frac{1} {\\sqrt{5}} \\end{array} \\right \]& \\\\ & \\quad = \\left \[\\begin{array}{cc} 1 & 1 \\\\ 
\\frac{1+\\sqrt{5}} {2} & \\frac{1-\\sqrt{5}} {2} \\end{array} \\right \]\\left \[\\begin{array}{cc} {\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n}& 0 \\\\ 0 &{\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\frac{1} {2} - \\frac{1} {2\\sqrt{5}} & \\frac{1} {\\sqrt{5}} \\\\ \\frac{1} {2} + \\frac{1} {2\\sqrt{5}} & - \\frac{1} {\\sqrt{5}} \\end{array} \\right \] & \\\\ & \\quad = \\left \[\\begin{array}{c} {\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1} {2} - \\frac{1} {2\\sqrt{5}}\\right \) +{ \\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1} {2} + \\frac{1} {2\\sqrt{5}}\\right \) \\\\ {\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+1}\\left \(\\frac{1} {2} - \\frac{1} {2\\sqrt{5}}\\right \) +{ \\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+1}\\left \(\\frac{1} {2} + \\frac{1} {2\\sqrt{5}}\\right \) \\end{array} \\right. & \\\\ & \\quad \\qquad \\left.\\begin{array}{c} \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n} - \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n} \\\\ \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+1} - \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+1} \\end{array} \\right \] & \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ69.gif) Hence ![ $$\\begin{array}{rcl}{ \\phi }_{n}& =&{ \\left \(\\frac{1 + \\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1} {2} - \\frac{1} {2\\sqrt{5}}\\right \) +{ \\left \(\\frac{1 -\\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1} {2} + \\frac{1} {2\\sqrt{5}}\\right \) \\\\ & & + \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1 + \\sqrt{5}} {2} \\right \)}^{n} - \\frac{1} {\\sqrt{5}}{\\left \(\\frac{1 -\\sqrt{5}} {2} \\right \)}^{n} \\\\ & =& \\left \(\\frac{1} {2} + \\frac{1} {2\\sqrt{5}}\\right \){\\left \(\\frac{1 + \\sqrt{5}} {2} \\right \)}^{n} +{ \\left \(\\frac{1 -\\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1} {2} - \\frac{1} 
{2\\sqrt{5}}\\right \) \\\\ & =& \\left \(\\frac{1 + \\sqrt{5}} {2\\sqrt{5}} \\right \){\\left \(\\frac{1 + \\sqrt{5}} {2} \\right \)}^{n} -{\\left \(\\frac{1 -\\sqrt{5}} {2} \\right \)}^{n}\\left \(\\frac{1 -\\sqrt{5}} {2\\sqrt{5}} \\right \) \\\\ & =& \\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1 + \\sqrt{5}} {2} \\right \)}^{n+1} -\\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1 -\\sqrt{5}} {2} \\right \)}^{n+1}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ70.gif) The ratio of successive Fibonacci numbers satisfies ![ $$\\begin{array}{rcl} \\frac{{\\phi }_{n+1}} {{\\phi }_{n}} & =& \\frac{\\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+2} -\\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+2}} {\\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+1} -\\left \( \\frac{1} {\\sqrt{5}}\\right \){\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+1}} \\\\ & =& \\frac{{\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+2} -{\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+2}} {{\\left \(\\frac{1+\\sqrt{5}} {2} \\right \)}^{n+1} -{\\left \(\\frac{1-\\sqrt{5}} {2} \\right \)}^{n+1}} \\\\ & =& \\frac{\\left \(\\frac{1+\\sqrt{5}} {2} \\right \) -\\left \(\\frac{1-\\sqrt{5}} {2} \\right \){\\left \(\\frac{1-\\sqrt{5}} {1+\\sqrt{5}}\\right \)}^{n+1}} {1 -{\\left \(\\frac{1-\\sqrt{5}} {1+\\sqrt{5}}\\right \)}^{n+1}}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ71.gif) where ![ $${\\left \(\\frac{1-\\sqrt{5}} {1+\\sqrt{5}}\\right \)}^{n+1} \\rightarrow0$$ ](A81414_1_En_2_Chapter_IEq239.gif) as n -> ∞. Thus, ![ $${\\lim }_{n\\rightarrow \\infty }\\frac{{\\phi }_{n+1}} {{\\phi }_{n}} = \\frac{1 + \\sqrt{5}} {2},$$ ](A81414_1_En_2_Chapter_Equev.gif) which is the golden ratio. This ratio is usually denoted by ϕ. The Fibonacci sequence is often observed in growth phenomena in nature and is also of fundamental importance in combinatorics. 
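The closed formula for ϕ_n and the limiting ratio derived above can be checked numerically; in this sketch, the number of terms and the tolerance are arbitrary choices of ours:

```python
import math

# Closed form from Example 2.5.5:
# phi_n = ((1+sqrt5)/2)^{n+1}/sqrt5 - ((1-sqrt5)/2)^{n+1}/sqrt5.
sqrt5 = math.sqrt(5)
golden = (1 + sqrt5) / 2

def phi_closed(n):
    return (golden ** (n + 1) - ((1 - sqrt5) / 2) ** (n + 1)) / sqrt5

# Compare against the recurrence phi_{n+2} = phi_{n+1} + phi_n.
seq = [1, 1]
for _ in range(20):
    seq.append(seq[-1] + seq[-2])

for n, value in enumerate(seq):
    assert round(phi_closed(n)) == value

# Successive ratios converge to the golden ratio (1 + sqrt(5))/2.
assert abs(seq[-1] / seq[-2] - golden) < 1e-6
```

Since |(1 − √5)/2| < 1, the second term dies off quickly, which is why rounding the first term alone already gives the integer values.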
It is not easy to come up with a criterion that guarantees that a matrix is diagonalizable and is also easy to use. It turns out that the minimal polynomial holds the key to diagonalizability of a general linear operator. In a different context, we shall show in Sect. 4.3 that symmetric matrices with real entries are diagonalizable. The basic procedure for deciding diagonalizability of an operator L : V -> V is to first compute the eigenvalues, then list them without multiplicities λ1,..., λ k , then calculate all the eigenspaces kerL − λ i 1 V , and, finally, check if one can find a basis of eigenvectors. To assist us in this process, there are some useful abstract results about the relationship between the eigenspaces. Lemma 2.5.6. (Eigenspaces form Direct Sums) If λ 1 ,...,λ k are distinct eigenvalues for a linear operator L : V -> V, then ![ $$\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \) =\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) \\oplus \\cdots\\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equew.gif) In particular, we have ![ $$k \\leq \\dim \\left \(V \\right \).$$ ](A81414_1_En_2_Chapter_Equex.gif) Proof. The proof uses induction on k. When k = 1, there is nothing to prove. Assume that the result is true for any collection of k distinct eigenvalues for L and suppose that we have k + 1 distinct eigenvalues λ1,..., λ k + 1 for L. 
Since we already know that ![ $$\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \) =\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) \\oplus \\cdots\\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_Equey.gif) it will be enough to prove that ![ $$\\left \(\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\right \) \\cap \\ker \\left \(L - {\\lambda }_{k+1}{1}_{V }\\right \) = \\left \\{0\\right \\}.$$ ](A81414_1_En_2_Chapter_Equez.gif) In other words, we claim that if Lx = λ k + 1 x and x = x 1 + ⋯ + x k where x i ∈ kerL − λ i 1 V , then x = 0. We can prove this in two ways. First, note that if k = 1, then x = x 1 implies that x is the eigenvector for two different eigenvalues. This is clearly not possible unless x = 0. Thus, we can assume that k > 1. In that case, ![ $$\\begin{array}{rcl}{ \\lambda }_{k+1}x& =& L\\left \(x\\right \) \\\\ & =& L\\left \({x}_{1} + \\cdots+ {x}_{k}\\right \) \\\\ & =& {\\lambda }_{1}{x}_{1} + \\cdots+ {\\lambda }_{k}{x}_{k}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ72.gif) Subtracting yields ![ $$0 = \\left \({\\lambda }_{1} - {\\lambda }_{k+1}\\right \){x}_{1} + \\cdots+ \\left \({\\lambda }_{k} - {\\lambda }_{k+1}\\right \){x}_{k}.$$ ](A81414_1_En_2_Chapter_Equfa.gif) Since we assumed that ![ $$\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \) =\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) \\oplus \\cdots\\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_Equfb.gif) it follows that ![ $$\\left \({\\lambda }_{1} - {\\lambda }_{k+1}\\right \){x}_{1} = 0,$$ ](A81414_1_En_2_Chapter_IEq240.gif)..., ![ $$\\left \({\\lambda }_{k} - {\\lambda }_{k+1}\\right \){x}_{k} = 0$$ ](A81414_1_En_2_Chapter_IEq241.gif). 
As ![ $$\\left \({\\lambda }_{1} - {\\lambda }_{k+1}\\right \)\\neq 0,$$ ](A81414_1_En_2_Chapter_IEq242.gif)..., ![ $$\\left \({\\lambda }_{k} - {\\lambda }_{k+1}\\right \)\\neq 0$$ ](A81414_1_En_2_Chapter_IEq243.gif), we conclude that x 1 = 0,..., x k = 0, implying that x = x 1 + ⋯ + x k = 0. The second way of doing the induction is slightly trickier and has the advantage that it is easy to generalize (see Exercise 20 in this section.) This proof will in addition give us an interesting criterion for when an operator is diagonalizable. Since λ1,..., λ k + 1 are different, the polynomials t − λ1,..., t − λ k + 1 have 1 as their greatest common divisor. Thus, also ![ $$\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \)$$ ](A81414_1_En_2_Chapter_IEq244.gif) and ![ $$\\left \(t - {\\lambda }_{k+1}\\right \)$$ ](A81414_1_En_2_Chapter_IEq245.gif) have 1 as their greatest common divisor. This means that we can find polynomials ![ $$p\\left \(t\\right \),q\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq246.gif) such that ![ $$1 = p\\left \(t\\right \)\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \) + q\\left \(t\\right \)\\left \(t - {\\lambda }_{k+1}\\right \)$$ ](A81414_1_En_2_Chapter_Equfc.gif) (see Proposition 2.1.4). 
If we substitute the operator L into this formula in place of t, we obtain: ![ $${1}_{V } = p\\left \(L\\right \)\\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(L - {\\lambda }_{k}{1}_{V }\\right \) + q\\left \(L\\right \)\\left \(L - {\\lambda }_{k+1}{1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equfd.gif) Applying this to x gives us ![ $$x = p\\left \(L\\right \)\\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(x\\right \) + q\\left \(L\\right \)\\left \(L - {\\lambda }_{k+1}{1}_{V }\\right \)\\left \(x\\right \).$$ ](A81414_1_En_2_Chapter_Equfe.gif) If ![ $$x \\in \\left \(\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\right \) \\cap \\ker \\left \(L - {\\lambda }_{k+1}{1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_Equff.gif) then ![ $$\\begin{array}{rcl} \\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(x\\right \)& =& 0, \\\\ \\left \(L - {\\lambda }_{k+1}{1}_{V }\\right \)\\left \(x\\right \)& =& 0, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ73.gif) so also x = 0. □ As applications of this lemma, we reexamine several examples. Example 2.5.7. First, we wish to give a new proof (see Example 1.12.15) that expλ1 t,..., expλ n t are linearly independent if λ1,..., λ n are distinct. For that, we consider V = spanexpλ1 t,..., expλ n t and D : V -> V. The result is now obvious as each of the functions expλ i t is an eigenvector with eigenvalue λ i for D : V -> V. As λ1,..., λ n are distinct, we can conclude that the corresponding eigenfunctions are linearly independent. Thus, expλ1 t,..., expλ n t form a basis for V which diagonalizes D. Example 2.5.8. 
In order to solve the initial value problem for higher order differential equations, it was necessary to show that the Vandermonde matrix ![ $$\\left \[\\begin{array}{ccc} 1 &\\cdots & 1\\\\ {\\lambda }_{ 1} & \\cdots & {\\lambda }_{n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\lambda }_{1}^{n-1} & \\cdots &{\\lambda }_{n}^{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equfg.gif) is invertible, when ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{n} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq247.gif) are distinct. This was done in Example 1.12.12 and will now be established using eigenvectors. Given the origins of this problem (in this book), it is not unnatural to consider a matrix ![ $$A = \\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1} \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equfh.gif) where ![ $$\\begin{array}{rcl} p\\left \(t\\right \)& =& {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} \\\\ & =& \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ74.gif) In Example 2.3.10, we saw that the characteristic polynomial for A is pt. In particular, ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{n} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq248.gif) are the eigenvalues. When these eigenvalues are distinct, we consequently know that the corresponding eigenvectors are linearly independent. 
To find these eigenvectors, note that ![ $$\\begin{array}{rcl} A\\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ k}\\\\ \\vdots \\\\ {\\lambda }_{k}^{n-1} \\end{array} \\right \]& =& \\left \[\\begin{array}{cccc} 0 & 1 &\\cdots & 0\\\\ 0 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & 1\\\\ - {\\alpha }_{ 0} & - {\\alpha }_{1} & \\cdots & - {\\alpha }_{n-1} \\end{array} \\right \]\\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ k}\\\\ \\vdots \\\\ {\\lambda }_{k}^{n-1} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\lambda }_{k} \\\\ {\\lambda }_{k}^{2}\\\\ \\vdots \\\\ - {\\alpha }_{0} - {\\alpha }_{1}{\\lambda }_{k} -\\cdots- {\\alpha }_{n-1}{\\lambda }_{k}^{n-1} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {\\lambda }_{k} \\\\ {\\lambda }_{k}^{2}\\\\ \\vdots \\\\ {\\lambda }_{k}^{n} \\end{array} \\right \]\\text{, since }p\\left \({\\lambda }_{k}\\right \) = 0 \\\\ & =& {\\lambda }_{k}\\left \[\\begin{array}{c} 1\\\\ {\\lambda }_{ k}\\\\ \\vdots \\\\ {\\lambda }_{k}^{n-1} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ75.gif) This implies that the columns in the Vandermonde matrix are the eigenvectors for a diagonalizable operator. Hence, the matrix must be invertible. Note that A is diagonalizable if and only if λ1,..., λ n are distinct as all eigenspaces for A are one-dimensional (we shall also prove and use this in the next Sect. 2.6). Example 2.5.9. An interesting special case of the previous example occurs when pt = t n − 1 and we assume that ![ $$\\mathbb{F} = \\mathbb{C}$$ ](A81414_1_En_2_Chapter_IEq249.gif). 
Then, the roots are the nth roots of unity, and the operator that has these numbers as eigenvalues looks like ![ $$C = \\left \[\\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 1 &0 &\\cdots&0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equfi.gif) The powers of this matrix have the following interesting patterns: ![ $$\\begin{array}{rcl} {C}^{2}& =& \\left \[\\begin{array}{llllll} 0&0&1&0& &0\\\\ &0 &0 &\\ddots \\\\ & & & &1&0\\\\ 0 & & & &0 &1 \\\\ 1&0& & &0&0\\\\ 0 &1 &0 & & &0 \\end{array} \\right \], \\\\ & & \\vdots \\\\ {C}^{n-1}& =& \\left \[\\begin{array}{cccc} 0&\\cdots &\\cdots &1\\\\ 1 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\ddots & \\ddots &0\\\\ 0 &\\cdots& 1 &0 \\end{array} \\right \], \\\\ {C}^{n}& =& \\left \[\\begin{array}{cccc} 1& 0 &\\cdots &0\\\\ 0 & 1 & \\ddots & \\vdots \\\\ \\vdots & \\ddots & \\ddots &0\\\\ 0 &\\cdots& 0 &1 \\end{array} \\right \] = {1}_{{\\mathbb{F}}^{n}}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ76.gif) A linear combination of these powers looks like ![ $$\\begin{array}{rcl}{ C}_{{\\alpha }_{0},\\ldots,{\\alpha }_{n-1}}& =& {\\alpha }_{0}{1}_{{\\mathbb{F}}^{n}} + {\\alpha }_{1}C + \\cdots+ {\\alpha }_{n-1}{C}^{n-1} \\\\ & =& \\left \[\\begin{array}{llllll} {\\alpha }_{0} & {\\alpha }_{1} & {\\alpha }_{2} & {\\alpha }_{3} & \\cdots&{\\alpha }_{n-1} \\\\ {\\alpha }_{n-1} & {\\alpha }_{0} & {\\alpha }_{1} & {\\alpha }_{2} & \\cdots&{\\alpha }_{n-2} \\\\ \\vdots & {\\alpha }_{n-1} & {\\alpha }_{0} & \\ddots & & \\vdots \\\\ {\\alpha }_{3} & \\vdots & {\\alpha }_{n-1} & \\ddots \\\\ {\\alpha }_{2} & {\\alpha }_{3} & \\vdots & \\ddots & {\\alpha }_{0} & {\\alpha }_{1} \\\\ {\\alpha }_{1} & {\\alpha }_{2} & {\\alpha }_{3} & \\cdots&{\\alpha }_{n-1} & {\\alpha }_{0} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ77.gif) Since we have a basis that diagonalizes C and hence also all of its powers, we have also found a basis that diagonalizes ![ 
$${C}_{{\\alpha }_{0},\\ldots,{\\alpha }_{n-1}}$$ ](A81414_1_En_2_Chapter_IEq250.gif). This would probably not have been so easy to see if we had just been handed the matrix ![ $${C}_{{\\alpha }_{0},\\ldots,{\\alpha }_{n-1}}$$ ](A81414_1_En_2_Chapter_IEq251.gif). The above lemma also helps us establish three criteria for diagonalizability. Theorem 2.5.10. (First Criterion for Diagonalizability) Let L : V -> V be a linear operator on an n-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq252.gif) . If ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{k} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq253.gif) are distinct eigenvalues for L such that ![ $$n =\\dim \\left \(\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\right \) + \\cdots+\\dim \\left \(\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\right \),$$ ](A81414_1_En_2_Chapter_Equfj.gif) then L is diagonalizable. In particular, if L has n distinct eigenvalues in ![ $$\\mathbb{F},$$ ](A81414_1_En_2_Chapter_IEq254.gif) then L is diagonalizable. Proof. Our assumption together with Lemma 2.5.6 shows that ![ $$\\begin{array}{rcl} n& =& \\dim \\left \(\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\right \) + \\cdots+\\dim \\left \(\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\right \) \\\\ & =& \\dim \\left \(\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) + \\cdots+\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ78.gif) Thus, ![ $$\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) \\oplus \\cdots\\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \) = V,$$ ](A81414_1_En_2_Chapter_Equfk.gif) and we can find a basis of eigenvectors, by selecting a basis for each of the eigenspaces. For the last statement, we only need to observe that dimkerL − λ1 V ≥ 1 for any eigenvalue ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq255.gif). 
□ The next characterization offers a particularly nice condition for diagonalizability and will also give us the minimal polynomial characterization of diagonalizability. Theorem 2.5.11. (Second Criterion for Diagonalizability) Let L : V -> V be a linear operator on an n-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq256.gif) . L is diagonalizable if and only if we can find ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq257.gif) such that pL = 0 and ![ $$p\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \),$$ ](A81414_1_En_2_Chapter_Equfl.gif) where ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{k} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq258.gif) are distinct. Proof. Assuming that L is diagonalizable, we have ![ $$V =\\ker \\left \(L - {\\lambda }_{1}{1}_{V }\\right \) \\oplus \\cdots\\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equfm.gif) So if we use ![ $$p\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \)$$ ](A81414_1_En_2_Chapter_Equfn.gif) we see that pL = 0 as pL vanishes on each of the eigenspaces (see also Exercise 16 in this section). Conversely, assume that pL = 0 and ![ $$p\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \),$$ ](A81414_1_En_2_Chapter_Equfo.gif) where ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{k} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq259.gif) are distinct. If any of these λ i s are not eigenvalues for L, we can eliminate the factors t − λ i since L − λ i 1 V is an isomorphism unless λ i is an eigenvalue. We then still have that L is a root of the new polynomial. The proof now goes by induction on the number of roots in p. If there is one root, the result is obvious. 
If k ≥ 2, we can use Proposition 2.1.4 to write ![ $$\\begin{array}{rcl} 1& =& r\\left \(t\\right \)\\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k-1}\\right \) + s\\left \(t\\right \)\\left \(t - {\\lambda }_{k}\\right \) \\\\ & =& r\\left \(t\\right \)q\\left \(t\\right \) + s\\left \(t\\right \)\\left \(t - {\\lambda }_{k}\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ79.gif) We then claim that ![ $$V =\\ker \\left \(q\\left \(L\\right \)\\right \) \\oplus \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)$$ ](A81414_1_En_2_Chapter_Equfp.gif) and that ![ $$L\\left \(\\ker \\left \(q\\left \(L\\right \)\\right \)\\right \) \\subset \\ker \\left \(q\\left \(L\\right \)\\right \).$$ ](A81414_1_En_2_Chapter_Equfq.gif) This will finish the induction step as L | kerqL is a linear operator on the proper subspace kerqL with the property that qL | kerqL = 0. We can then use the induction hypothesis to conclude that the result holds for L | kerqL . As it obviously holds for ![ $$\\left \(L - {\\lambda }_{k}{1}_{V }\\right \){\\vert }_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}$$ ](A81414_1_En_2_Chapter_IEq260.gif), it follows that the result also holds for L. 
To establish the decomposition observe that ![ $$\\begin{array}{rcl} x& =& q\\left \(L\\right \)\\left \(r\\left \(L\\right \)\\left \(x\\right \)\\right \) + \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(s\\left \(L\\right \)\\left \(x\\right \)\\right \) \\\\ & =& y + z.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ80.gif) Here y ∈ kerL − λ k 1 V since ![ $$\\begin{array}{rcl} \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(y\\right \)& =& \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(q\\left \(L\\right \)\\left \(r\\left \(L\\right \)\\left \(x\\right \)\\right \)\\right \) \\\\ & =& p\\left \(L\\right \)\\left \(r\\left \(L\\right \)\\left \(x\\right \)\\right \) \\\\ & =& 0, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ81.gif) and z ∈ kerqL since ![ $$q\\left \(L\\right \)\\left \(\\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(s\\left \(L\\right \)\\left \(x\\right \)\\right \)\\right \) = p\\left \(L\\right \)\\left \(s\\left \(L\\right \)\\left \(x\\right \)\\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equfr.gif) Thus, ![ $$V =\\ker \\left \(q\\left \(L\\right \)\\right \) +\\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \).$$ ](A81414_1_En_2_Chapter_Equfs.gif) If ![ $$x \\in \\ker \\left \(q\\left \(L\\right \)\\right \) \\cap \\ker \\left \(L - {\\lambda }_{k}{1}_{V }\\right \),$$ ](A81414_1_En_2_Chapter_Equft.gif) then we have ![ $$x = r\\left \(L\\right \)\\left \(q\\left \(L\\right \)\\left \(x\\right \)\\right \) + s\\left \(L\\right \)\\left \(\\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(x\\right \)\\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equfu.gif) This gives the direct sum decomposition. 
Finally, if x ∈ kerqL, then we see that ![ $$\\begin{array}{rcl} q\\left \(L\\right \)\\left \(L\\left \(x\\right \)\\right \)& =& \\left \(q\\left \(L\\right \) \\circ L\\right \)\\left \(x\\right \) \\\\ & =& \\left \(L \\circ q\\left \(L\\right \)\\right \)\\left \(x\\right \) \\\\ & =& L\\left \(q\\left \(L\\right \)\\left \(x\\right \)\\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ82.gif) Thus, showing that Lx ∈ kerqL. □ Corollary 2.5.12. (The Minimal Polynomial Characterization of Diagonalizability) Let L : V -> V be a linear operator on an n-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq261.gif) . L is diagonalizable if and only if the minimal polynomial factors ![ $${\\mu }_{L}\\left \(t\\right \) = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{k}\\right \),$$ ](A81414_1_En_2_Chapter_Equfv.gif) and has no multiple roots, i.e. ![ $${\\lambda }_{1},\\ldots,{\\lambda }_{k} \\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq262.gif) are distinct. Finally we can estimate how large dimkerL − λ1 V can be if we have factored the characteristic polynomial. Lemma 2.5.13. Let L : V -> V be a linear operator on an n-dimensional vector space over ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq263.gif) . If ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq264.gif) is an eigenvalue and χ L t = t − λ m qt, where qλ≠0, then ![ $$\\dim \\left \(\\ker \\left \(L - \\lambda {1}_{V }\\right \)\\right \) \\leq m.$$ ](A81414_1_En_2_Chapter_Equfw.gif) We call dimkerL − λ1 V the geometric multiplicity of λ and m the algebraic multiplicity of λ. Proof. Select a complement N to kerL − λ1 V in V. Then, choose a basis where x 1,..., x k ∈ kerL − λ1 V and x k + 1,..., x n ∈ N. 
Since Lx i = λx i for i = 1,..., k, we see that the matrix representation has a block form that looks like ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cc} \\lambda {1}_{{\\mathbb{F}}^{k}} & B \\\\ 0 &C \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equfx.gif) This implies that ![ $$\\begin{array}{rcl}{ \\chi }_{L}\\left \(t\\right \)& =& {\\chi }_{\\left \[L\\right \]}\\left \(t\\right \) \\\\ & =& {\\chi }_{\\lambda {1}_{{ \\mathbb{F}}^{k}}}\\left \(t\\right \){\\chi }_{C}\\left \(t\\right \) \\\\ & =&{ \\left \(t - \\lambda \\right \)}^{k}{\\chi }_{ C}\\left \(t\\right \) \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ83.gif) and hence that λ has algebraic multiplicity m ≥ k. □ Clearly, the appearance of multiple roots in the characteristic polynomial is something that might prevent linear operators from becoming diagonalizable. The following criterion is often useful for deciding whether or not a polynomial has multiple roots. Proposition 2.5.14. A polynomial ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq265.gif) has ![ $$\\lambda\\in\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq266.gif) as a multiple root if and only if λ is a root of both p and Dp. Proof. If λ is a multiple root, then pt = t − λ m qt, where m ≥ 2. Thus, ![ $$Dp\\left \(t\\right \) = m{\\left \(t - \\lambda \\right \)}^{m-1}q\\left \(t\\right \) +{ \\left \(t - \\lambda \\right \)}^{m}Dq\\left \(t\\right \)$$ ](A81414_1_En_2_Chapter_Equfy.gif) also has λ as a root. Conversely, if λ is a root of Dp and p, then we can write pt = t − λqt and ![ $$\\begin{array}{rcl} 0& =& Dp\\left \(\\lambda \\right \) \\\\ & =& q\\left \(\\lambda \\right \) + \\left \(\\lambda- \\lambda \\right \)Dq\\left \(\\lambda \\right \) \\\\ & =& q\\left \(\\lambda \\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ84.gif) Thus, also qt has λ as a root and hence λ is a multiple root of pt. □ Example 2.5.15. If pt = t 2 + αt + β, then Dpt = 2t + α. 
Thus we have a double root only if the only root ![ $$t = -\\frac{\\alpha } {2}$$ ](A81414_1_En_2_Chapter_IEq267.gif) of Dp is a root of p. If we evaluate ![ $$\\begin{array}{rcl} p\\left \(-\\frac{\\alpha } {2} \\right \)& =& \\frac{{\\alpha }^{2}} {4} -\\frac{{\\alpha }^{2}} {2} + \\beta\\\\ & =& -\\frac{{\\alpha }^{2}} {4} + \\beta\\\\ & =& -\\frac{{\\alpha }^{2} - 4\\beta } {4}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ85.gif) we see that this occurs precisely when the discriminant vanishes. This conforms nicely with the quadratic formula. Example 2.5.16. If pt = t 3 + 12t 2 − 14, then the roots are pretty nasty. We can, however, check for multiple roots by finding the roots of ![ $$Dp\\left \(t\\right \) = 3{t}^{2} + 24t = 3t\\left \(t + 8\\right \)$$ ](A81414_1_En_2_Chapter_Equfz.gif) and checking whether they are roots of p: ![ $$\\begin{array}{rcl} p\\left \(0\\right \)& =& -14\\neq 0, \\\\ p\\left \(-8\\right \)& =&{ \\left \(-8\\right \)}^{3} + 12 \\cdot{8}^{2} - 14 \\\\ & =& {8}^{2}\\left \(12 - 8\\right \) - 14 = 242 > 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ86.gif) Since neither root of Dp is a root of p, the polynomial p has no multiple roots. ### 2.5.1 Exercises 1. Decide whether or not the following matrices are diagonalizable: (a) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equga.gif) (b) ![ $$\\left \[\\begin{array}{lll} 0&\\quad 1&\\quad 2\\\\ 1 &\\quad 0 &\\quad 3 \\\\ 2&\\quad 3&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgb.gif) (c) ![ $$\\left \[\\begin{array}{lll} 0 &\\quad 1 &\\quad 2\\\\ - 1 &\\quad 0 &\\quad 3 \\\\ - 2&\\quad - 3&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgc.gif) 2. 
Decide whether or not the following matrices are diagonalizable: (a) ![ $$\\left \[\\begin{array}{cc} 0& \\quad i\\\\ i &\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgd.gif) (b) ![ $$\\left \[\\begin{array}{cc} 0 & \\quad i\\\\ - i &\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equge.gif) (c) ![ $$\\left \[\\begin{array}{lll} 1&\\quad i &\\quad 0\\\\ i &\\quad 1 &\\quad 0 \\\\ 0&\\quad 2&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgf.gif) 3. Decide whether or not the following matrices are diagonalizable: (a) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 0 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgg.gif) (b) ![ $$\\left \[\\begin{array}{lll} 1&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 1\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgh.gif) (c) ![ $$\\left \[\\begin{array}{lll} 0&\\quad 0&\\quad 1\\\\ 0 &\\quad 1 &\\quad 0 \\\\ 1&\\quad 0&\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equgi.gif) 4. Find the characteristic polynomial, eigenvalues, and eigenvectors for each of the following linear operators L : P 3 -> P 3. Then, decide whether they are diagonalizable by checking whether there is a basis of eigenvectors. (a) L = D. (b) L = tD = T ∘ D. (c) ![ $$L = {D}^{2} + 2D + {1}_{{P}_{3}}$$ ](A81414_1_En_2_Chapter_IEq268.gif). (d) L = t 2 D 3 + D. 5. Consider the linear operator on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq269.gif) defined by LX = X t . Show that L is diagonalizable. Compute the eigenvalues and eigenspaces. 6. For which s, t ∈ ℂ is the matrix diagonalizable ![ $$\\left \[\\begin{array}{cc} 1&1\\\\ s & t\\end{array} \\right \]?$$ ](A81414_1_En_2_Chapter_Equgj.gif) 7. 
For which α, β, γ ∈ ℂ is the matrix diagonalizable ![ $$\\left \[\\begin{array}{ccc} 0 & 1 & 0\\\\ 0 & 0 & 1 \\\\ - \\alpha & - \\beta& - \\gamma \\end{array} \\right \]?$$ ](A81414_1_En_2_Chapter_Equgk.gif) 8. Assume L : V -> V is diagonalizable. Show that V = kerL ⊕ imL. 9. Assume that L : V -> V is a diagonalizable real linear map on a finite-dimensional vector space. Show that trL 2 ≥ 0. 10. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq270.gif) is diagonalizable. (a) Show that A t is diagonalizable. (b) Show that L A X = AX defines a diagonalizable operator on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq271.gif) (see Example 1.7.6.) (c) Show that R A X = XA defines a diagonalizable operator on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq272.gif). 11. Show that if E : V -> V is a projection on a finite-dimensional vector space, then trE = dimimE. 12. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq273.gif) and ![ $$B \\in \\mathrm{{ Mat}}_{m\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq274.gif) and consider ![ $$\\begin{array}{rcl} L& : & \\mathrm{{Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \), \\\\ L\\left \(X\\right \)& =& AX - XB.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ87.gif) Show that if B is diagonalizable, then all eigenvalues of L are of the form λ − μ, where λ is an eigenvalue of A and μ an eigenvalue of B. 13. (Restrictions of Diagonalizable Operators) Let L : V -> V be a linear operator on a finite-dimensional vector space and M ⊂ V an invariant subspace, i.e., LM ⊂ M. (a) If x + y ∈ M, where Lx = λx, Ly = μy, and λ≠μ, then x, y ∈ M. (b) If x 1 + ⋯ + x k ∈ M and Lx i = λ i x i , where λ1,..., λ k are distinct, then x 1,..., x k ∈ M. 
Hint: Use induction on k. (c) If L : V -> V is diagonalizable, use (a) and (b) to show that L : M -> M is diagonalizable. (d) If L : V -> V is diagonalizable, use Theorem 2.5.11 directly to show that L : M -> M is diagonalizable. 14. Let L : V -> V be a linear operator on a finite-dimensional vector space. Show that λ is a multiple root for μ L t if and only if ![ $$\\left \\{0\\right \\} \\subsetneq \\ker \\left \(L - \\lambda {1}_{V }\\right \) \\subsetneq \\ker \\left \({\\left \(L - \\lambda {1}_{V }\\right \)}^{2}\\right \).$$ ](A81414_1_En_2_Chapter_Equgl.gif) 15. Assume that L, K : V -> V are both diagonalizable, that KL = LK, and that V is finite-dimensional. Show that we can find a basis for V that diagonalizes both L and K. Hint: You can use Exercise 13 with M as an eigenspace for one of the operators as well as Exercise 3 in Sect. 1.11. 16. Let L : V -> V be a linear operator on a vector space and λ1,..., λ k distinct eigenvalues. If x = x 1 + ⋯ + x k , where x i ∈ kerL − λ i 1 V , then ![ $$\\left \(L - {\\lambda }_{1}{1}_{V }\\right \)\\cdots \\left \(L - {\\lambda }_{k}{1}_{V }\\right \)\\left \(x\\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equgm.gif) 17. Let L : V -> V be a linear operator on a vector space and λ≠μ. Use the identity ![ $$\\frac{1} {\\mu- \\lambda }\\left \(L - \\lambda {1}_{V }\\right \) - \\frac{1} {\\mu- \\lambda }\\left \(L - \\mu {1}_{V }\\right \) = {1}_{V }$$ ](A81414_1_En_2_Chapter_Equgn.gif) to show that two eigenspaces associated to distinct eigenvalues for L have trivial intersection. 18. Consider an involution L : V -> V, i.e., L 2 = 1 V . (a) Show that x ± Lx is an eigenvector for L with eigenvalue ± 1. (b) Show that V = kerL + 1 V ⊕ kerL − 1 V . (c) Conclude that L is diagonalizable. 19. Assume L : V -> V satisfies L 2 + αL + β1 V = 0 and that the roots λ1, λ2 of λ2 + αλ + β are distinct and lie in ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq275.gif). 
(a) Determine γ, δ so that ![ $$x = \\gamma \\left \(L\\left \(x\\right \) - {\\lambda }_{1}x\\right \) + \\delta \\left \(L\\left \(x\\right \) - {\\lambda }_{2}x\\right \).$$ ](A81414_1_En_2_Chapter_Equgo.gif) (b) Show that Lx − λ1 x ∈ kerL − λ21 V and Lx − λ2 x ∈ kerL − λ11 V . (c) Conclude that V = kerL − λ11 V ⊕ kerL − λ21 V . (d) Conclude that L is diagonalizable. 20. Let L : V -> V be a linear operator on a finite-dimensional vector space. Show that (a) If ![ $$p,q \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq276.gif) and gcdp, q = 1, then V = kerpL ⊕ kerqL. Hint: Look at the proof of Theorem 2.5.11. (b) If μ L t = ptqt, where gcdp, q = 1, then ![ $${\\mu }_{L{\\vert }_{\\ker \\left \(p\\left \(L\\right \)\\right \)}} = p$$ ](A81414_1_En_2_Chapter_IEq277.gif) and ![ $${\\mu }_{L{\\vert }_{\\ker \\left \(q\\left \(L\\right \)\\right \)}} = q$$ ](A81414_1_En_2_Chapter_IEq278.gif). ## 2.6 Cyclic Subspaces The goal of this section is to find a relatively simple matrix representation for linear operators L : V -> V on finite-dimensional vector spaces that are not necessarily diagonalizable. This will be achieved by finding a decomposition V = M 1 ⊕ ⋯ ⊕ M k into L-invariant subspaces M i with the property that ![ $$L{\\vert }_{{M}_{i}}$$ ](A81414_1_En_2_Chapter_IEq279.gif) has a matrix representation that can be found by knowing only the characteristic or minimal polynomial for ![ $$L{\\vert }_{{M}_{i}}$$ ](A81414_1_En_2_Chapter_IEq280.gif). The invariant subspaces we are going to use are in fact a very natural generalization of eigenvectors. Observe that x ∈ V is an eigenvector if Lx ∈ spanx, or in other words, Lx is a linear combination of x. Definition 2.6.1. Let L : V -> V be a linear operator on a finite-dimensional vector space. The cyclic subspace generated by x ∈ V is the subspace spanned by the vectors 
x, Lx, L 2 x,..., L k x,..., i.e., ![ $${C}_{x} =\\mathrm{ span}\\left \\{x,L\\left \(x\\right \),{L}^{2}\\left \(x\\right \),\\ldots,{L}^{k}\\left \(x\\right \),\\ldots \\right \\}.$$ ](A81414_1_En_2_Chapter_Equgp.gif) Assuming x≠0, we can use Lemma 1.12.3 to find a smallest k ≥ 1 such that ![ $${L}^{k}\\left \(x\\right \) \\in \\mathrm{ span}\\left \\{x,L\\left \(x\\right \),{L}^{2}\\left \(x\\right \),\\ldots,{L}^{k-1}\\left \(x\\right \)\\right \\}.$$ ](A81414_1_En_2_Chapter_Equgq.gif) With this definition and construction behind us, we can now prove: Lemma 2.6.2. Let L : V -> V be a linear operator on a finite-dimensional vector space. Then, C x is L-invariant and we can find k ≤ dim V so that x, Lx, L 2 x,..., L k−1 x form a basis for C x . The matrix representation for ![ $$L{\\vert }_{{C}_{x}}$$ ](A81414_1_En_2_Chapter_IEq281.gif) with respect to this basis is ![ $$\\left \[\\begin{array}{ccccc} 0&0&\\cdots &0& {\\alpha }_{0} \\\\ 1&0&\\cdots &0& {\\alpha }_{1} \\\\ 0&1&\\cdots &0& {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0&0&\\cdots &1&{\\alpha }_{k-1} \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equgr.gif) where ![ $${L}^{k}\\left \(x\\right \) = {\\alpha }_{ 0}x + {\\alpha }_{1}L\\left \(x\\right \) + \\cdots+ {\\alpha }_{k-1}{L}^{k-1}\\left \(x\\right \).$$ ](A81414_1_En_2_Chapter_Equgs.gif) Proof. The vectors x, Lx, L 2 x,..., L k − 1 x must be linearly independent if we pick k as the smallest k such that ![ $${L}^{k}\\left \(x\\right \) = {\\alpha }_{ 0}x + {\\alpha }_{1}L\\left \(x\\right \) + \\cdots+ {\\alpha }_{k-1}{L}^{k-1}\\left \(x\\right \).$$ ](A81414_1_En_2_Chapter_Equgt.gif) To see that they span C x , we need to show that ![ $${L}^{m}\\left \(x\\right \) \\in \\mathrm{ span}\\left \\{x,L\\left \(x\\right \),{L}^{2}\\left \(x\\right \),\\ldots,{L}^{k-1}\\left \(x\\right \)\\right \\}$$ ](A81414_1_En_2_Chapter_Equgu.gif) for all m ≥ k. We are going to use induction on m to prove this. 
If m = 0,...k − 1, there is nothing to prove. Assuming that ![ $${L}^{m-1}\\left \(x\\right \) = {\\beta }_{ 0}x + {\\beta }_{1}L\\left \(x\\right \) + \\cdots+ {\\beta }_{k-1}{L}^{k-1}\\left \(x\\right \),$$ ](A81414_1_En_2_Chapter_Equgv.gif) we get ![ $${L}^{m}\\left \(x\\right \) = {\\beta }_{ 0}L\\left \(x\\right \) + {\\beta }_{1}{L}^{2}\\left \(x\\right \) + \\cdots+ {\\beta }_{ k-1}{L}^{k}\\left \(x\\right \).$$ ](A81414_1_En_2_Chapter_Equgw.gif) Since we already have that ![ $${L}^{k}\\left \(x\\right \) \\in \\mathrm{ span}\\left \\{x,L\\left \(x\\right \),{L}^{2}\\left \(x\\right \),\\ldots,{L}^{k-1}\\left \(x\\right \)\\right \\},$$ ](A81414_1_En_2_Chapter_Equgx.gif) it follows that ![ $${L}^{m}\\left \(x\\right \) \\in \\mathrm{ span}\\left \\{x,L\\left \(x\\right \),{L}^{2}\\left \(x\\right \),\\ldots,{L}^{k-1}\\left \(x\\right \)\\right \\}.$$ ](A81414_1_En_2_Chapter_Equgy.gif) This completes the induction step. This also explains why C x is L-invariant, namely, if z ∈ C x , then we have ![ $$z = {\\gamma }_{0}x + {\\gamma }_{1}L\\left \(x\\right \) + \\cdots+ {\\gamma }_{k-1}{L}^{k-1}\\left \(x\\right \),$$ ](A81414_1_En_2_Chapter_Equgz.gif) and ![ $$L\\left \(z\\right \) = {\\gamma }_{0}L\\left \(x\\right \) + {\\gamma }_{1}{L}^{2}\\left \(x\\right \) + \\cdots+ {\\gamma }_{ k-1}{L}^{k}\\left \(x\\right \).$$ ](A81414_1_En_2_Chapter_Equha.gif) As L k x ∈ C x we see that Lz ∈ C x as well. 
To find the matrix representation, we note that ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{lllll} L\\left \(x\\right \)&L\\left \(L\\left \(x\\right \)\\right \)&\\cdots &L\\left \({L}^{k-2}\\left \(x\\right \)\\right \)&L\\left \({L}^{k-1}\\left \(x\\right \)\\right \) \\end{array} \\right \] & \\\\ & = \\left \[\\begin{array}{lllll} L\\left \(x\\right \)&{L}^{2}\\left \(x\\right \)&\\cdots &{L}^{k-1}\\left \(x\\right \)&{L}^{k}\\left \(x\\right \) \\end{array} \\right \] & \\\\ & = \\left \[\\begin{array}{lllll} x&L\\left \(x\\right \)&\\cdots &{L}^{k-2}\\left \(x\\right \)&{L}^{k-1}\\left \(x\\right \) \\end{array} \\right \]\\left \[\\begin{array}{ccccc} 0&0&\\cdots &0& {\\alpha }_{0} \\\\ 1&0&\\cdots &0& {\\alpha }_{1} \\\\ 0&1&\\cdots &0& {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0&0&\\cdots &1&{\\alpha }_{k-1} \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ88.gif) This proves the lemma. □ The matrix representation for ![ $$L{\\vert }_{{C}_{x}}$$ ](A81414_1_En_2_Chapter_IEq282.gif) is apparently the transpose of the type of matrix coming from higher order differential equations that we studied in the previous sections. 
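The cyclic basis construction in Lemma 2.6.2 is easy to check numerically. The following sketch (the matrix A and starting vector x are arbitrary illustrative choices, not taken from the text) builds the basis x, L(x),..., L^(k−1)(x), solves for the coefficients α 0 ,...,α k−1 , and verifies that the matrix displayed in the lemma really represents the restriction of the operator to C x :

```python
import numpy as np

# Arbitrary sample operator and starting vector (illustrative choices).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [2., -1., 3.]])
x = np.array([1., 0., 0.])

# Build x, A x, A^2 x, ... until the next power becomes linearly dependent.
basis = [x]
while True:
    nxt = A @ basis[-1]
    if np.linalg.matrix_rank(np.column_stack(basis + [nxt])) == len(basis):
        break                      # A^k x lies in span{x, ..., A^(k-1) x}
    basis.append(nxt)
k = len(basis)
B = np.column_stack(basis)         # columns form a basis for C_x

# Coefficients with A^k x = alpha_0 x + ... + alpha_(k-1) A^(k-1) x.
alpha, *_ = np.linalg.lstsq(B, A @ basis[-1], rcond=None)

# Matrix representation from Lemma 2.6.2: subdiagonal of 1s,
# last column holding the alpha coefficients.
C = np.zeros((k, k))
C[1:, :-1] = np.eye(k - 1)
C[:, -1] = alpha

# The change-of-basis relation A B = B C confirms the representation.
assert np.allclose(A @ B, B @ C)
```

For this particular A, the loop produces k = 3 with α = (2, −1, 3), i.e., A³x = 2x − A(x) + 3A²(x), so C x is all of F³.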
Therefore, we can expect our knowledge of those matrices to carry over without much effort. To be a little more precise, we define the companion matrix of a monic nonconstant polynomial ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq283.gif) as the matrix ![ $$\\begin{array}{rcl} {C}_{p}& =& \\left \[\\begin{array}{ccccc} 0&0&\\cdots &0& - {\\alpha }_{0} \\\\ 1&0&\\cdots &0& - {\\alpha }_{1} \\\\ 0&1&\\cdots &0& - {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0&0&\\cdots &1& - {\\alpha }_{n-1} \\end{array} \\right \], \\\\ p\\left \(t\\right \)& =& {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ89.gif) It is worth mentioning that the companion matrix for p = t + α is simply the 1 × 1 matrix ![ $$\\left \[-\\alpha \\right \]$$ ](A81414_1_En_2_Chapter_IEq284.gif). Proposition 2.6.3. The characteristic and minimal polynomials of C p are both p(t), and all eigenspaces are one-dimensional. In particular, C p is diagonalizable if and only if all the roots of p(t) are distinct and lie in ![ $$\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq285.gif). Proof. Even though we can prove these properties from our knowledge of the transpose of C p , it is still worthwhile to give a complete proof. Recall that we computed the minimal polynomial in the proof of Proposition 2.4.10.
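The claims of Proposition 2.6.3 are also easy to test numerically for a sample polynomial. In the sketch below, p(t) = t³ − 6t² + 11t − 6 = (t − 1)(t − 2)(t − 3) is an arbitrary illustrative choice:

```python
import numpy as np

# Companion matrix of p(t) = t^3 + a2 t^2 + a1 t + a0 as defined above;
# the sample polynomial (t-1)(t-2)(t-3) is an arbitrary choice.
a0, a1, a2 = -6.0, 11.0, -6.0
Cp = np.array([[0., 0., -a0],
               [1., 0., -a1],
               [0., 1., -a2]])

# np.poly returns characteristic polynomial coefficients, highest
# degree first: here t^3 - 6 t^2 + 11 t - 6, i.e., p(t) itself.
assert np.allclose(np.poly(Cp), [1., -6., 11., -6.])

# Each eigenspace is one-dimensional: lambda I - Cp has rank n - 1
# at every root lambda of p, so its kernel has dimension 1.
for lam in (1., 2., 3.):
    assert np.linalg.matrix_rank(lam * np.eye(3) - Cp) == 2
```

Since the three roots here are distinct and lie in the field, this particular C p is diagonalizable, as the proposition predicts.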
To compute the characteristic polynomial, we consider: ![ $$t{1}_{{\\mathbb{F}}^{n}}-{C}_{p} = \\left \[\\begin{array}{ccccc} t & 0 &\\cdots & 0 & {\\alpha }_{0} \\\\ - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1&t + {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhb.gif) By switching rows 1 and 2, we see that this is row equivalent to ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ t & 0 &\\cdots & 0 & {\\alpha }_{0} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1&t + {\\alpha }_{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhc.gif) eliminating t then gives us ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & {t}^{2} & \\cdots & 0 &{\\alpha }_{0} + {\\alpha }_{1}t \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1&t + {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhd.gif) Now, switch rows 2 and 3 to get ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2} \\\\ 0 & {t}^{2} & \\cdots & 0 &{\\alpha }_{0} + {\\alpha }_{1}t\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1&t + {\\alpha }_{n-1} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhe.gif) and eliminate t 2 ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2} \\\\ 0 & 0 &\\cdots & 0 &{\\alpha }_{0} + {\\alpha }_{1}t + {\\alpha }_{2}{t}^{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1& t + {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhf.gif) Repeating this argument shows that ![ $$t{1}_{{\\mathbb{F}}^{n}} - {C}_{p}$$ 
](A81414_1_En_2_Chapter_IEq286.gif) is row equivalent to ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2} \\\\ 0 & 0 & \\ddots & \\vdots & \\vdots\\\\ \\vdots & \\vdots & & - 1 & t + {\\alpha }_{ n-1} \\\\ 0 & 0 &\\cdots & 0 &{t}^{n} + {\\alpha }_{n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{1}t + {\\alpha }_{0} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhg.gif) This implies that the characteristic polynomial is pt. To see that all eigenspaces are one-dimensional we note that if λ is a root of pt, then we have just shown that ![ $$\\lambda {1}_{{\\mathbb{F}}^{n}} - {C}_{p}$$ ](A81414_1_En_2_Chapter_IEq287.gif) is row equivalent to the matrix ![ $$\\left \[\\begin{array}{ccccc} - 1& \\lambda&\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2} \\\\ 0 & 0 & \\ddots & \\vdots & \\vdots\\\\ \\vdots & \\vdots & & - 1 &\\lambda+ {\\alpha }_{ n-1} \\\\ 0 & 0 &\\cdots & 0 & 0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhh.gif) Since all but the last diagonal entry is nonzero we see that the kernel must be one-dimensional. □ Cyclic subspaces lead us to a very elegant proof of the Cayley-Hamilton theorem. Theorem 2.6.4. (The Cayley-Hamilton Theorem) Let L : V -> V be a linear operator on a finite-dimensional vector space. Then, L is a root of its own characteristic polynomial ![ $${\\chi }_{L}\\left \(L\\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equhi.gif) In particular, the minimal polynomial divides the characteristic polynomial. Proof. Select any x≠0 in V and a complement M to the cyclic subspace C x generated by x. This gives us a nontrivial decomposition V = C x ⊕ M, where L maps C x to itself and M into V. 
If we select a basis for V that starts with the cyclic basis for C x , then L will have a matrix representation that looks like ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cc} {C}_{p}& B \\\\ 0 &D \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equhj.gif) where C p is the companion matrix representation for L restricted to C x . This shows that ![ $$\\begin{array}{rcl}{ \\chi }_{L}\\left \(t\\right \)& =& {\\chi }_{{C}_{p}}\\left \(t\\right \){\\chi }_{D}\\left \(t\\right \) \\\\ & =& p\\left \(t\\right \){\\chi }_{D}\\left \(t\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ90.gif) We know that pC p = 0 from the previous result. This shows that ![ $$p\\left \(L{\\vert }_{{C}_{x}}\\right \) = 0$$ ](A81414_1_En_2_Chapter_IEq288.gif) and in particular that pLx = 0. Thus, ![ $$\\begin{array}{rcl}{ \\chi }_{L}\\left \(L\\right \)\\left \(x\\right \)& =& {\\chi }_{D}\\left \(L\\right \) \\circ p\\left \(L\\right \)\\left \(x\\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ91.gif) Since x was arbitrary, this shows that χ L L = 0. □ We now have quite a good understanding of the basic building blocks in the decomposition we are seeking. Theorem 2.6.5. (The Cyclic Subspace Decomposition) Let L : V -> V be a linear operator on a finite-dimensional vector space. Then, V has a cyclic subspace decomposition ![ $$V = {C}_{{x}_{1}} \\oplus \\cdots\\oplus{C}_{{x}_{k}},$$ ](A81414_1_En_2_Chapter_Equhk.gif) where each ![ $${C}_{{x}_{i}}$$ ](A81414_1_En_2_Chapter_IEq289.gif) is a cyclic subspace. In particular, L has a block diagonal matrix representation where each block is a companion matrix ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cccc} {C}_{{p}_{1}} & 0 && 0 \\\\ 0 &{C}_{{p}_{2}}\\\\ & & \\ddots \\\\ 0 & &&{C}_{{p}_{k}} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhl.gif) and χ L t = p 1 t⋯p k t. 
Moreover, the geometric multiplicity satisfies ![ $$\\dim \\left \(\\ker \\left \(L - \\lambda {1}_{V }\\right \)\\right \) = \\text{ number of }{p}_{i}\\text{ s such that }{p}_{i}\\left \(\\lambda \\right \) = 0.$$ ](A81414_1_En_2_Chapter_Equhm.gif) Thus, L is diagonalizable if and only if all of the companion matrices ![ $${C}_{{p}_{i}}$$ ](A81414_1_En_2_Chapter_IEq290.gif) have distinct eigenvalues. Proof. The proof uses induction on the dimension of the vector space. The theorem clearly holds if dimV = 1, so assume that the theorem holds for all linear operators on vector spaces of dimension < dimV. Our goal is to show that either ![ $$V = {C}_{{x}_{1}}$$ ](A81414_1_En_2_Chapter_IEq291.gif) for some x 1 ∈ V or that ![ $$V = {C}_{{x}_{1}} \\oplus M$$ ](A81414_1_En_2_Chapter_IEq292.gif) for some L-invariant subspace M. Let m ≤ dimV be the largest dimension of a cyclic subspace, i.e., dimC x ≤ m for all x ∈ V, and there is an x 1 ∈ V such that ![ $$\\dim {C}_{{x}_{1}} = m$$ ](A81414_1_En_2_Chapter_IEq293.gif). In other words, L m x ∈ spanx, Lx,..., L m − 1 x for all x ∈ V, and we can find x 1 ∈ V such that x 1, Lx 1,..., L m − 1 x 1 are linearly independent. In case m = dimV, it follows that ![ $${C}_{{x}_{1}} = V$$ ](A81414_1_En_2_Chapter_IEq294.gif) and we are finished. Otherwise, we must show that there is an L-invariant complement to ![ $${C}_{{x}_{1}} =\\mathrm{ span}\\left \\{{x}_{1},L\\left \({x}_{1}\\right \),\\ldots,{L}^{m-1}\\left \({x}_{ 1}\\right \)\\right \\}$$ ](A81414_1_En_2_Chapter_Equhn.gif) in V. 
To construct this complement, we consider the linear map ![ $$K : V \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_2_Chapter_IEq295.gif) defined by ![ $$K\\left \(x\\right \) = \\left \[\\begin{array}{c} f\\left \(x\\right \) \\\\ f\\left \(L\\left \(x\\right \)\\right \)\\\\ \\vdots \\\\ f\\left \({L}^{m-1}\\left \(x\\right \)\\right \) \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equho.gif) where ![ $$f : V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq296.gif) is a linear functional chosen so that ![ $$\\begin{array}{rcl} f\\left \({x}_{1}\\right \)& =& 0, \\\\ f\\left \(L\\left \({x}_{1}\\right \)\\right \)& =& 0, \\\\ \\vdots& & \\vdots \\\\ f\\left \({L}^{m-2}\\left \({x}_{ 1}\\right \)\\right \)& =& 0, \\\\ f\\left \({L}^{m-1}\\left \({x}_{ 1}\\right \)\\right \)& =& 1.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ92.gif) Note that it is possible to choose such an f as x 1, Lx 1,..., L m − 1 x 1 are linearly independent and hence part of a basis for V. We now claim that ![ $$K{\\vert }_{{C}_{{x}_{ 1}}} : {C}_{{x}_{1}} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_2_Chapter_IEq297.gif) is an isomorphism. To see this, we find the matrix representation for the restriction of K to ![ $${C}_{{x}_{1}}$$ ](A81414_1_En_2_Chapter_IEq298.gif). 
Using the basis x 1, Lx 1,..., L m − 1 x 1 for ![ $${C}_{{x}_{1}}$$ ](A81414_1_En_2_Chapter_IEq299.gif) and the canonical basis e 1,..., e m for ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_2_Chapter_IEq300.gif), we see that: ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cccc} K\\left \({x}_{1}\\right \)&K\\left \(L\\left \({x}_{1}\\right \)\\right \)&\\cdots &K\\left \({L}^{m-1}\\left \({x}_{1}\\right \)\\right \) \\end{array} \\right \] & \\\\ & = \\left \[\\begin{array}{cccc} {e}_{1} & {e}_{2} & \\cdots &{e}_{m} \\end{array} \\right \]\\left \[\\begin{array}{cccc} 0&0& \\cdots&1 \\\\ \\vdots & &\\mathinner{...}&{_\\ast} \\\\ 0&1& & \\vdots\\\\ 1 &{_\\ast} & \\cdots&{_\\ast} \\end{array} \\right \],& \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ93.gif) where ∗ indicates that we do not know or care what the entry is. Since the matrix representation is clearly invertible, we have that ![ $$K{\\vert }_{{C}_{{x}_{ 1}}} : {C}_{{x}_{1}} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_2_Chapter_IEq301.gif) is an isomorphism. Next, we need to show that kerK is L-invariant. 
Let x ∈ kerK, i.e., ![ $$K\\left \(x\\right \) = \\left \[\\begin{array}{c} f\\left \(x\\right \) \\\\ f\\left \(L\\left \(x\\right \)\\right \)\\\\ \\vdots \\\\ f\\left \({L}^{m-1}\\left \(x\\right \)\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{c} 0\\\\ 0\\\\ \\vdots \\\\ 0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhp.gif) Then, ![ $$K\\left \(L\\left \(x\\right \)\\right \) = \\left \[\\begin{array}{c} f\\left \(L\\left \(x\\right \)\\right \) \\\\ f\\left \({L}^{2}\\left \(x\\right \)\\right \)\\\\ \\vdots \\\\ f\\left \({L}^{m-1}\\left \(x\\right \)\\right \) \\\\ f\\left \({L}^{m}\\left \(x\\right \)\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{c} 0\\\\ 0\\\\ \\vdots \\\\ 0 \\\\ f\\left \({L}^{m}\\left \(x\\right \)\\right \) \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equhq.gif) Now, by the choice of m, L m x is a linear combination of x, Lx,..., L m − 1 x for all x. This shows that fL m x = 0 and consequently Lx ∈ kerK. Finally, we show that ![ $$V = {C}_{{x}_{1}} \\oplus \\ker \\left \(K\\right \)$$ ](A81414_1_En_2_Chapter_IEq302.gif). We have seen that ![ $$K{\\vert }_{{C}_{{x}_{ 1}}} : {C}_{{x}_{1}} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_2_Chapter_IEq303.gif) is an isomorphism. This implies that ![ $${C}_{{x}_{1}} \\cap \\mathrm{\\ker }\\left \(K\\right \) = \\left \\{0\\right \\}$$ ](A81414_1_En_2_Chapter_IEq304.gif). 
From Theorem 1.11.7 and Corollary 1.10.14, we then get that ![ $$\\begin{array}{rcl} \\mathrm{\\dim }\\left \(V \\right \)& =& \\dim \\left \(\\ker \\left \(K\\right \)\\right \) +\\dim \\left \(\\mathrm{im}\\left \(K\\right \)\\right \) \\\\ & =& \\dim \\left \(\\ker \\left \(K\\right \)\\right \) + m \\\\ & =& \\dim \\left \(\\ker \\left \(K\\right \)\\right \) +\\dim \\left \({C}_{{x}_{1}}\\right \) \\\\ & =& \\dim \\left \(\\ker \\left \(K\\right \) + {C}_{{x}_{1}}\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ94.gif) Thus, ![ $$V = {C}_{{x}_{1}} +\\ker \\left \(K\\right \) = {C}_{{x}_{1}} \\oplus \\ker \\left \(K\\right \)$$ ](A81414_1_En_2_Chapter_IEq305.gif). To find the geometric multiplicity of λ, we need only observe that each of the blocks ![ $${C}_{{p}_{i}}$$ ](A81414_1_En_2_Chapter_IEq306.gif) has a one-dimensional eigenspace corresponding to λ if λ is an eigenvalue for ![ $${C}_{{p}_{i}}$$ ](A81414_1_En_2_Chapter_IEq307.gif). We know in turn that λ is an eigenvalue for ![ $${C}_{{p}_{i}}$$ ](A81414_1_En_2_Chapter_IEq308.gif) precisely when p i λ = 0. □ It is important to understand that there can be several cyclic subspace decompositions. This fact, of course, makes our calculation of the geometric multiplicity of eigenvalues especially intriguing. A rather interesting example comes from companion matrices themselves. Clearly, they have the desired decomposition; however, if they are diagonalizable, then the space also has a different decomposition into cyclic subspaces given by the one-dimensional eigenspaces. The issue of obtaining a unique decomposition is discussed in the next section. To see that this theorem really has something to say, we should give examples of linear maps that force the space to have a nontrivial cyclic subspace decomposition. Since a companion matrix always has one-dimensional eigenspaces, this is of course not hard at all. Example 2.6.6. A very natural choice is the linear operator L A X = AX on Mat n ×n ℂ. 
In Example 1.7.6, we showed that it had a block diagonal form with copies of A on the diagonal. This shows that any eigenvalue for A has geometric multiplicity at least n. We can also see this more directly. Assume that Ax = λx, where x ∈ ℂ n , and consider ![ $$X = \\left \[\\begin{array}{lll} {\\alpha }_{1}x&\\cdots &{\\alpha }_{n}x \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_IEq309.gif). Then, ![ $$\\begin{array}{rcl}{ L}_{A}\\left \(X\\right \)& =& A\\left \[\\begin{array}{lll} {\\alpha }_{1}x&\\cdots &{\\alpha }_{n}x \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{lll} {\\alpha }_{1}Ax&\\cdots &{\\alpha }_{n}Ax \\end{array} \\right \] \\\\ & =& \\lambda \\left \[\\begin{array}{lll} {\\alpha }_{1}x&\\cdots &{\\alpha }_{n}x \\end{array} \\right \] \\\\ & =& \\lambda X.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ95.gif) Thus, ![ $$M = \\left \\{\\left \[\\begin{array}{lll} {\\alpha }_{1}x&\\cdots &{\\alpha }_{n}x \\end{array} \\right \] : {\\alpha }_{1},\\ldots,{\\alpha }_{n} \\in\\mathbb{C}\\right \\}$$ ](A81414_1_En_2_Chapter_Equhr.gif) forms an n-dimensional space of eigenvectors for L A . Example 2.6.7. Another interesting example of a cyclic subspace decomposition comes from permutation matrices. We first recall that a permutation matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq310.gif) is a matrix such that Ae i = e σ(i) , see also Example 1.7.7. We claim that we can find a cyclic subspace decomposition by simply rearranging the canonical basis e 1 ,..., e n for ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_2_Chapter_IEq311.gif). The proof works by induction on n. When n = 1, there is nothing to prove. For n > 1, we consider ![ $${C}_{{e}_{1}} =\\mathrm{ span}\\left \\{{e}_{1},A{e}_{1},{A}^{2}{e}_{1},\\ldots \\right \\}$$ ](A81414_1_En_2_Chapter_IEq312.gif).
Since all of the powers A m e 1 belong to the finite set ![ $$\\left \\{{e}_{1},\\ldots,{e}_{n}\\right \\},$$ ](A81414_1_En_2_Chapter_IEq313.gif) we can find integers k > l > 0 such that A k e 1 = A l e 1. Since A is invertible, this implies that A k − l e 1 = e 1. Now, select the smallest integer m > 0 such that A m e 1 = e 1. Then we have ![ $${C}_{{e}_{1}} =\\mathrm{ span}\\left \\{{e}_{1},A{e}_{1},{A}^{2}{e}_{ 1},\\ldots,{A}^{m-1}{e}_{ 1}\\right \\}.$$ ](A81414_1_En_2_Chapter_Equhs.gif) Moreover, all of the vectors e 1, Ae 1, A 2 e 1,..., A m − 1 e 1 must be distinct as we could otherwise find l < k < m such that A k − l e 1 = e 1. This contradicts minimality of m. Since all of e 1, Ae 1, A 2 e 1,..., A m − 1 e 1 are also vectors from the basis e 1,..., e n , they must form a basis for ![ $${C}_{{e}_{1}}$$ ](A81414_1_En_2_Chapter_IEq314.gif). In this basis, A is represented by the companion matrix to pt = t m − 1 and hence takes the form ![ $$\\left \[\\begin{array}{ccccc} 0&\\quad 0&\\quad \\cdots &\\quad 0&\\quad 1\\\\ 1 &\\quad 0 &\\quad \\cdots&\\quad 0 &\\quad 0 \\\\ 0&\\quad 1&\\quad \\cdots &\\quad 0&\\quad 0\\\\ \\vdots & \\quad \\vdots & \\quad \\ddots & \\quad \\vdots & \\quad \\vdots\\\\ 0 &\\quad 0 &\\quad \\cdots&\\quad 1 &\\quad 0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equht.gif) The permutation that corresponds to ![ $$A : {C}_{{e}_{1}} \\rightarrow{C}_{{e}_{1}}$$ ](A81414_1_En_2_Chapter_IEq315.gif) is also called a cyclic permutation. Evidently, it maps the elements 1, σ1,..., σ m − 11 to themselves in a cyclic manner. One often refers to such permutations by listing the elements as ![ $$\\left \(1,\\sigma \\left \(1\\right \),\\ldots,{\\sigma }^{m-1}\\left \(1\\right \)\\right \)$$ ](A81414_1_En_2_Chapter_IEq316.gif). 
This is not a unique representation as, e.g., ![ $$\\left \({\\sigma }^{m-1}\\left \(1\\right \),1,\\sigma \\left \(1\\right \),\\ldots,{\\sigma }^{m-2}\\left \(1\\right \)\\right \)$$ ](A81414_1_En_2_Chapter_IEq317.gif) clearly describes the same permutation. We used m of the basis vectors e 1,..., e n to span ![ $${C}_{{e}_{1}}$$ ](A81414_1_En_2_Chapter_IEq318.gif). Rename and reindex the complementary basis vectors f 1,..., f n − m . To get our induction to work we need to show that Af i = f τi for each i = 1,..., n − m. We know that Af i ∈ e 1,..., e n . If Af i ∈ e 1, Ae 1, A 2 e 1,..., A m − 1 e 1, then either f i = e 1 or f i = A k e 1. The former is impossible since f i ∉e 1, Ae 1, A 2 e 1,..., A m − 1 e 1. The latter is impossible as A leaves ![ $$\\left \\{{e}_{1},A{e}_{1},{A}^{2}{e}_{1},\\ldots,{A}^{m-1}{e}_{1}\\right \\}$$ ](A81414_1_En_2_Chapter_IEq319.gif) invariant. Thus, it follows that Af i ∈ f 1,..., f n − m as desired. In this way, we see that it is possible to rearrange the basis e 1,..., e n so as to get a cyclic subspace decomposition. Furthermore, on each cyclic subspace, A is represented by a companion matrix corresponding to pt = t k − 1 for some k ≤ n. Recall that if ![ $$\\mathbb{F} = \\mathbb{C},$$ ](A81414_1_En_2_Chapter_IEq320.gif) then all of these companion matrices are diagonalizable, in particular, A is itself diagonalizable. Note that the cyclic subspace decomposition for a permutation matrix also decomposes the permutation σ into cyclic permutations that are disjoint. This is a basic construction in the theory of permutations. The cyclic subspace decomposition qualifies as a central result in linear algebra for many reasons. While somewhat difficult and tricky to prove, it does not depend on several of our developments in this chapter. It could in fact be established without knowledge of eigenvalues, characteristic polynomials and minimal polynomials, etc. 
Second, it gives a matrix representation that is in block diagonal form and where we have a very good understanding of each of the blocks. Therefore, all of our developments in this chapter could be considered consequences of this decomposition. Finally, several important and difficult results such as the Frobenius and Jordan canonical forms become relatively easy to prove using this decomposition. ### 2.6.1 Exercises 1. Find all invariant subspaces for the following two matrices and show that they are not diagonalizable: (a) ![ $$\\left \[\\begin{array}{cc} 0&\\quad 1\\\\ 0 &\\quad 0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhu.gif) (b) ![ $$\\left \[\\begin{array}{cc} \\alpha & \\quad 1\\\\ 0 &\\quad \\alpha \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhv.gif) 2. Show that the space of n ×n companion matrices form an affine subspace isomorphic to the affine subspace of monic polynomials of degree n. Affine subspaces are defined in Exercise 8 in Sect. 1.10. 3. Given ![ $$A = \\left \[\\begin{array}{cccc} {\\lambda }_{1} & \\quad 1 &\\quad \\cdots & \\quad 0 \\\\ 0 &\\quad {\\lambda }_{2} & \\quad \\ddots & \\quad \\vdots \\\\ \\vdots & \\quad \\vdots & \\quad \\ddots & \\quad 1\\\\ 0 & \\quad 0 &\\quad \\cdots&\\quad {\\lambda }_{ n}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhw.gif) find ![ $$x \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_2_Chapter_IEq321.gif) such that ![ $${C}_{x} = {\\mathbb{F}}^{n}$$ ](A81414_1_En_2_Chapter_IEq322.gif). Hint: Try n = 2, 3 first. 4. Given a linear operator L : V -> V on a finite-dimensional vector space and x ∈ V, show that ![ $${C}_{x} = \\left \\{p\\left \(L\\right \)\\left \(x\\right \) : p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]\\right \\}.$$ ](A81414_1_En_2_Chapter_Equhx.gif) 5. Let ![ $$p\\left \(t\\right \) = {t}^{n} + {\\alpha }_{n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{0} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq323.gif). Show that C p and C p t are similar. 
Hint: Let ![ $$B = \\left \[\\begin{array}{cccccc} {\\alpha }_{1} & {\\alpha }_{2} & {\\alpha }_{3} & \\cdots &{\\alpha }_{n-1} & 1 \\\\ {\\alpha }_{2} & {\\alpha }_{3} & \\cdots& & 1 &0\\\\ {\\alpha }_{ 3} & \\vdots & \\ddots \\\\ \\vdots & {\\alpha }_{n-1} & & & 0 &0 \\\\ {\\alpha }_{n-1} & 1 & & & & \\vdots \\\\ 1 & 0 & \\cdots& & 0 &0\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equhy.gif) and show ![ $${C}_{p}B = B{C}_{p}^{t}.$$ ](A81414_1_En_2_Chapter_Equhz.gif) 6. Use the previous exercise to show that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq324.gif) and its transpose are similar. 7. Show that if V = C x for some x ∈ V, then degμ L = dimV. 8. For each n ≥ 2, construct a matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq325.gif) such that V ≠C x for every x ∈ V. 9. For each n ≥ 2, construct a matrix ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq326.gif) such that V = C x for some x ∈ V. 10. Let L : V -> V be a diagonalizable linear operator on a finite-dimensional vector space. Show that V = C x if and only if there are no multiple eigenvalues. 11. Let L : V -> V be a linear operator on a finite-dimensional vector space. Assume that ![ $$V \\neq {C}_{{x}_{1}},$$ ](A81414_1_En_2_Chapter_IEq327.gif) where ![ $${C}_{{x}_{1}}$$ ](A81414_1_En_2_Chapter_IEq328.gif) is the first cyclic subspace as constructed in the proof of the cyclic subspace decomposition. Show that it is possible to select another y 1 ∈ V such that ![ $$\\dim {C}_{{y}_{1}} =\\dim {C}_{{x}_{1}} = m,$$ ](A81414_1_En_2_Chapter_IEq329.gif) but ![ $${C}_{{x}_{1}}\\neq {C}_{{y}_{1}}$$ ](A81414_1_En_2_Chapter_IEq330.gif). This gives a different indication of why the cyclic subspace decomposition is not unique. 12. Let L : V -> V be a linear operator on a finite-dimensional vector space such that V = C x for some x ∈ V. 
(a) Show that K ∘ L = L ∘ K if and only if K = pL for some ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq331.gif). Hint: When K ∘ L = L ∘ K define p by using that ![ $$K\\left \(x\\right \) = {\\alpha }_{0} + \\cdots+ {\\alpha }_{n-1}{L}^{n-1}\\left \(x\\right \)$$ ](A81414_1_En_2_Chapter_IEq332.gif). (b) Show that all invariant subspaces for L are of the form kerpL for some polynomial ![ $$p \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq333.gif). 13. Let L : V -> V be a linear operator on a finite-dimensional vector space. Define ![ $$\\mathbb{F}\\left \[L\\right \] = \\left \\{p\\left \(L\\right \) : p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]\\right \\} \\subset \\mathrm{ Hom}\\left \(V,V \\right \)$$ ](A81414_1_En_2_Chapter_IEq334.gif) as the space of polynomials in L. (a) Show that ![ $$\\mathbb{F}\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq335.gif) is a subspace, that is also closed under composition of operators. (b) Show that ![ $$\\dim \\left \(\\mathbb{F}\\left \[L\\right \]\\right \) =\\deg \\left \({\\mu }_{L}\\right \)$$ ](A81414_1_En_2_Chapter_IEq336.gif) and ![ $$\\mathbb{F}\\left \[L\\right \] =\\mathrm{ span}\\left \\{{1}_{V },L,\\ldots,{L}^{k-1}\\right \\},$$ ](A81414_1_En_2_Chapter_IEq337.gif) where k = degμ L . (c) Show that the map ![ $$\\Phi: \\mathbb{F}\\left \[t\\right \] \\rightarrow \\mathrm{ Hom}\\left \(V,V \\right \)$$ ](A81414_1_En_2_Chapter_IEq338.gif) defined by Φpt = pL is linear and a ring homomorphism (preserves multiplication and sends ![ $$1 \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq339.gif) to 1 V ∈ HomV, V ) with image ![ $$\\mathbb{F}\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq340.gif). (d) Show that ![ $$\\ker \\left \(\\Phi \\right \) = \\left \\{p\\left \(t\\right \){\\mu }_{L}\\left \(t\\right \) : p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]\\right \\}$$ ](A81414_1_En_2_Chapter_IEq341.gif). 
(e) Show that for any ![ $$p\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq342.gif), we have p(L) = r(L) for some ![ $$r\\left \(t\\right \) \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq343.gif) with deg r(t) < deg μ L (t). (f) Given an eigenvector x ∈ V for L, show that x is an eigenvector for all ![ $$K \\in\\mathbb{F}\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq344.gif) and that the map ![ $$\\mathbb{F}\\left \[L\\right \] \\rightarrow\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq345.gif) that sends K to the eigenvalue corresponding to x is a ring homomorphism. (g) Conversely, show that any nontrivial ring homomorphism ![ $$\\phi: \\mathbb{F}\\left \[L\\right \] \\rightarrow\\mathbb{F}$$ ](A81414_1_En_2_Chapter_IEq346.gif) is of the type described in (f). ## 2.7 The Frobenius Canonical Form As we have already indicated, the above proof of the cyclic subspace decomposition actually proves quite a bit more than the result claims. It leads us to a unique matrix representation for the operator known as the Frobenius canonical form. This canonical form will be used in the next section to establish more refined canonical forms for complex operators. Theorem 2.7.1. (The Frobenius Canonical Form) Let L : V -> V be a linear operator on a finite-dimensional vector space. Then, V has a cyclic subspace decomposition such that the block diagonal form of L ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cccc} {C}_{{p}_{1}} & 0 && 0 \\\\ 0 &{C}_{{p}_{2}}\\\\ & & \\ddots \\\\ 0 & &&{C}_{{p}_{k}} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equia.gif) has the property that p i divides p i−1 for each i = 2,...,k. Moreover, the monic polynomials p 1 ,...,p k are unique. Proof. We first establish that the polynomials constructed in the above version of the cyclic subspace decomposition have the desired divisibility properties.
Recall that m ≤ dimV is the largest dimension of a cyclic subspace, i.e., dimC x ≤ m for all x ∈ V and there is an x 1 ∈ V such that ![ $$\\dim {C}_{{x}_{1}} = m$$ ](A81414_1_En_2_Chapter_IEq347.gif). In other words, L m x ∈ spanx, Lx,..., L m − 1 x for all x ∈ V and we can find x 1 ∈ V such that x 1, Lx 1,..., L m − 1 x 1 are linearly independent. With this choice of x 1, define ![ $$\\begin{array}{rcl} {p}_{1}\\left \(t\\right \)& =& {t}^{m} - {\\alpha }_{ m-1}{t}^{m-1} -\\cdots- {\\alpha }_{ 0},\\text{ where} \\\\ {L}^{m}\\left \({x}_{ 1}\\right \)& =& {\\alpha }_{m-1}{L}^{m-1}\\left \({x}_{ 1}\\right \) + \\cdots+ {\\alpha }_{0}{x}_{1}, \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ96.gif) and recall that in the proof of Theorem 2.6.5, we also found an L-invariant complementary subspace M ⊂ V. With these choices we claim that p 1 Lz = 0 for all z ∈ V. Note that we already know this for z = x 1, and it is easy to also verify it for z = Lx 1,..., L m − 1 x 1 by using that pL ∘ L k = L k ∘ pL. Thus, we only need to check the claim for z ∈ M. 
By construction of m we know that ![ $${L}^{m}\\left \({x}_{ 1} + z\\right \) = {\\gamma }_{m-1}{L}^{m-1}\\left \({x}_{ 1} + z\\right \) + \\cdots+ {\\gamma }_{0}\\left \({x}_{1} + z\\right \).$$ ](A81414_1_En_2_Chapter_Equib.gif) Now, we rearrange the terms as follows: ![ $$\\begin{array}{rcl}{ L}^{m}\\left \({x}_{ 1}\\right \) + {L}^{m}\\left \(z\\right \)& =& {L}^{m}\\left \({x}_{ 1} + z\\right \) \\\\ & =& {\\gamma }_{m-1}{L}^{m-1}\\left \({x}_{ 1}\\right \) + \\cdots+ {\\gamma }_{0}{x}_{1} \\\\ & & +{\\gamma }_{m-1}{L}^{m-1}\\left \(z\\right \) + \\cdots+ {\\gamma }_{ 0}z.\\end{array}$$ ](A81414_1_En_2_Chapter_Equ97.gif) Since ![ $${L}^{m}\\left \({x}_{ 1}\\right \),\\:{\\gamma }_{m-1}{L}^{m-1}\\left \({x}_{ 1}\\right \) + \\cdots+ {\\gamma }_{0}{x}_{1} \\in{C}_{{x}_{1}}$$ ](A81414_1_En_2_Chapter_Equic.gif) and ![ $${L}^{m}\\left \(z\\right \),\\:{\\gamma }_{ m-1}{L}^{m-1}\\left \(z\\right \) + \\cdots+ {\\gamma }_{ 0}z \\in M,$$ ](A81414_1_En_2_Chapter_Equid.gif) it follows that ![ $${\\gamma }_{m-1}{L}^{m-1}\\left \({x}_{ 1}\\right \) + \\cdots+ {\\gamma }_{0}{x}_{1} = {L}^{m}\\left \({x}_{ 1}\\right \) = {\\alpha }_{m-1}{L}^{m-1}\\left \({x}_{ 1}\\right \) + \\cdots+ {\\alpha }_{0}{x}_{1}.$$ ](A81414_1_En_2_Chapter_Equie.gif) Since x 1, Lx 1,..., L m − 1 x 1 are linearly independent, this shows that γ i = α i for ![ $$i = 0,\\ldots,m - 1$$ ](A81414_1_En_2_Chapter_IEq348.gif). But then ![ $$\\begin{array}{rcl} 0& =& {p}_{1}\\left \(L\\right \)\\left \({x}_{1} + z\\right \) \\\\ & =& {p}_{1}\\left \(L\\right \)\\left \({x}_{1}\\right \) + {p}_{1}\\left \(L\\right \)\\left \(z\\right \) \\\\ & =& {p}_{1}\\left \(L\\right \)\\left \(z\\right \), \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ98.gif) which is what we wanted to prove. Next, let x 2 ∈ M and p 2 t be chosen in the same fashion as x 1 and p 1. 
We first note that $l = \deg p_2 \leq \deg p_1 = m$; this means that we can write $p_1 = q_1 p_2 + r$, where $\deg r < \deg p_2$. Thus,
$$\begin{aligned} 0 &= p_1(L)(x_2) \\ &= q_1(L) \circ p_2(L)(x_2) + r(L)(x_2) \\ &= r(L)(x_2). \end{aligned}$$
Since $\deg r < l = \deg p_2$, the equation $r(L)(x_2) = 0$ takes the form
$$\begin{aligned} 0 &= r(L)(x_2) \\ &= \beta_0 x_2 + \cdots + \beta_{l-1}L^{l-1}(x_2). \end{aligned}$$
However, $p_2$ was chosen so that $x_2, L(x_2), \ldots, L^{l-1}(x_2)$ are linearly independent, so $\beta_0 = \cdots = \beta_{l-1} = 0$ and hence also $r = 0$. This shows that $p_2$ divides $p_1$. We now show that $p_1$ and $p_2$ are unique, despite the fact that $x_1$ and $x_2$ need not be unique. To see that $p_1$ is unique, we simply check that it is the minimal polynomial of $L$. We have already seen that $p_1(L)(z) = 0$ for all $z \in V$. Thus, $p_1(L) = 0$, showing that $\deg \mu_L \leq \deg p_1$. On the other hand, we also know that $x_1, L(x_1), \ldots, L^{m-1}(x_1)$ are linearly independent; in particular, $1_V, L, \ldots, L^{m-1}$ must also be linearly independent. This shows that $\deg \mu_L \geq m = \deg p_1$. Hence, $\mu_L = p_1$ as they are both monic. To see that $p_2$ is unique is a bit more tricky since the choice for $C_{x_1}$ is not unique.
We select two decompositions
$$C_{x_1'} \oplus M' = V = C_{x_1} \oplus M.$$
This yields two block diagonal matrix decompositions for $L$:
$$\left[\begin{array}{cc} C_{p_1} & 0 \\ 0 & \left[L|_{M'}\right] \end{array}\right], \qquad \left[\begin{array}{cc} C_{p_1} & 0 \\ 0 & \left[L|_{M}\right] \end{array}\right],$$
where the upper left-hand block is the same for both representations as $p_1$ is unique. Moreover, these two matrices are similar. Therefore, we only need to show that $\mu_{A_{22}} = \mu_{A_{22}'}$ if the two block diagonal matrices
$$\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22} \end{array}\right] \text{ and } \left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22}' \end{array}\right]$$
are similar:
$$\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22} \end{array}\right] = B^{-1}\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22}' \end{array}\right] B.$$
If $p$ is any polynomial, then
$$\begin{aligned} \left[\begin{array}{cc} p(A_{11}) & 0 \\ 0 & p(A_{22}) \end{array}\right] &= p\left(\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22} \end{array}\right]\right) \\ &= p\left(B^{-1}\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22}' \end{array}\right] B\right) \\ &= B^{-1}\, p\left(\left[\begin{array}{cc} A_{11} & 0 \\ 0 & A_{22}' \end{array}\right]\right) B \\ &= B^{-1}\left[\begin{array}{cc} p(A_{11}) & 0 \\ 0 & p(A_{22}') \end{array}\right] B. \end{aligned}$$
In particular, the two matrices
$$\left[\begin{array}{cc} p(A_{11}) & 0 \\ 0 & p(A_{22}) \end{array}\right] \text{ and } \left[\begin{array}{cc} p(A_{11}) & 0 \\ 0 & p(A_{22}') \end{array}\right]$$
always have the same rank. Since the upper left-hand corners are identical, this shows that $p(A_{22})$ and $p(A_{22}')$ have the same rank. As a special case, it follows that $p(A_{22}) = 0$ if and only if $p(A_{22}') = 0$. This shows that $A_{22}$ and $A_{22}'$ have the same minimal polynomials and hence that $p_2$ is uniquely defined. $\square$

In some texts, the Frobenius canonical form is also known as the rational canonical form. The reason is that it will have rational entries if we start with an $n \times n$ matrix with rational entries. To see why this is, simply observe that the polynomials have rational coefficients starting with $p_1$, the minimal polynomial. In some other texts, the rational canonical form is refined by further factoring the characteristic or minimal polynomials into irreducible components over the rationals. One of the advantages of the Frobenius canonical form is that it does not depend on the scalar field. That is, if $A \in \mathrm{Mat}_{n\times n}(\mathbb{F}) \subset \mathrm{Mat}_{n\times n}(\mathbb{L})$, then the form does not depend on whether we compute it using $\mathbb{F}$ or $\mathbb{L}$.

Definition 2.7.2. The unique polynomials $p_1, \ldots, p_k$ are called the similarity invariants, elementary divisors, or invariant factors for $L$. Clearly, two matrices are similar if they have the same similarity invariants as they have the same Frobenius canonical form.
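As an illustrative sketch (not from the text), the Frobenius canonical form can be assembled directly from its similarity invariants as a block diagonal matrix of companion blocks. The Python code below uses sympy; the helper names `companion` and `frobenius_form` are our own, and the companion-matrix convention follows this book (ones on the subdiagonal, negated coefficients in the last column). It also checks that the product of the invariants is the characteristic polynomial of the assembled matrix.

```python
import sympy as sp

t = sp.symbols('t')

def companion(p):
    """Companion matrix C_p of a monic polynomial p(t), in this book's
    convention: 1's on the subdiagonal, -a_0, ..., -a_{n-1} in the last column."""
    c = sp.Poly(p, t).all_coeffs()   # [1, a_{n-1}, ..., a_0], leading coefficient first
    assert c[0] == 1, "p must be monic"
    a = c[:0:-1]                     # [a_0, a_1, ..., a_{n-1}]
    n = len(a)
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1              # subdiagonal of ones
    for i in range(n):
        C[i, n - 1] = -a[i]          # last column carries the coefficients
    return C

def frobenius_form(invariants):
    """Block diagonal of companion blocks, one per similarity invariant."""
    # each invariant must divide the previous one: ... p2 | p1
    for a, b in zip(invariants, invariants[1:]):
        assert sp.rem(a, b, t) == 0
    return sp.diag(*[companion(p) for p in invariants])

p1 = t**3 - t          # = t(t - 1)(t + 1)
p2 = t                 # p2 divides p1
A = frobenius_form([p1, p2])
chi = A.charpoly(t).as_expr()
print(sp.expand(p1 * p2 - chi))  # 0: product of invariants = characteristic polynomial
```

For instance, `companion(t**2 - t)` reproduces the $2 \times 2$ block $\left[\begin{smallmatrix} 0 & 0 \\ 1 & 1 \end{smallmatrix}\right]$ that appears in the projection example later in this section.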
Conversely, similar matrices are both similar to the same Frobenius canonical form and hence have the same similarity invariants. It is possible to calculate the similarity invariants using only the elementary row and column operations (see Sect. 1.13). The specific construction is covered in Sect. 2.9 and is related to the Smith normal form. The following corollary shows that several of the matrices related to companion matrices are in fact similar. Various exercises have been devoted to establishing this fact, but using the Frobenius canonical form we get a very elegant characterization of when a linear map is similar to a companion matrix.

Corollary 2.7.3. If two linear operators on an $n$-dimensional vector space have the same minimal polynomials of degree $n$, then they have the same Frobenius canonical form and are thus similar.

Proof. If $\deg \mu_L = \dim V$, then the first block in the Frobenius canonical form is an $n \times n$ matrix. Thus, there is only one block in this decomposition. This proves the claim. $\square$

We can redefine the characteristic polynomial using similarity invariants. However, it is not immediately clear why it agrees with the definition given in Sect. 2.3, as we do not know that that definition gives the same answer for similar matrices (see, however, Sect. 5.7 for a proof that uses determinants).

Definition 2.7.4. The characteristic polynomial of a linear operator $L : V \to V$ on a finite-dimensional vector space is the product of its similarity invariants:
$$\chi_L(t) = p_1(t) \cdots p_k(t).$$
This gives us a way of defining the characteristic polynomial, but it does not tell us how to compute it. For that, the row reduction technique or determinants are the way to go.
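As a rough illustration of how the first similarity invariant $p_1 = \mu_L$ can be recovered once $\chi_L$ is known, here is a brute-force Python sketch using sympy (the helper names are our own, not from the text). It exploits the fact that $\mu_L$ is the monic divisor of $\chi_L$ of least degree that annihilates the matrix, so it suffices to test divisors assembled from the irreducible factors of $\chi_L$.

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')

def polyval_matrix(p, A):
    """Evaluate a polynomial p(t) at a square matrix A."""
    result = sp.zeros(*A.shape)
    for k, c in enumerate(reversed(sp.Poly(p, t).all_coeffs())):
        result += c * A**k
    return result

def minimal_polynomial(A):
    """Brute-force mu_A: the monic divisor of chi_A of least degree that
    annihilates A, found by searching products of chi_A's irreducible factors."""
    chi = A.charpoly(t)
    _, factors = sp.factor_list(chi.as_expr(), t)
    best = chi  # chi_A itself annihilates A by the Cayley-Hamilton theorem
    for exps in product(*[range(m + 1) for _, m in factors]):
        p = sp.Integer(1)
        for (f, _), e in zip(factors, exps):
            p *= f**e
        cand = sp.Poly(p, t)
        if 0 < cand.degree() < best.degree() and \
                polyval_matrix(p, A) == sp.zeros(*A.shape):
            best = cand
    return best.monic().as_expr()

# A nilpotent 4x4 matrix with a single nonzero subdiagonal entry:
# chi = t^4 but mu = t^2, so chi alone does not determine mu.
A = sp.zeros(4, 4)
A[1, 0] = 1
print(minimal_polynomial(A))  # t**2
```

This is of course exponential in the number of irreducible factors and meant only to make the definitions concrete; the Smith normal form of Sect. 2.9 gives a systematic way to compute all the invariants at once.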
In this vein, we can also define the determinant as
$$\det L = (-1)^n \chi_L(0).$$
The problem is that one of the key properties of determinants,
$$\det(K \circ L) = \det(K)\det(L),$$
does not follow easily from this definition. We do, however, get that similar matrices and linear operators have the same determinant:
$$\det(K \circ L \circ K^{-1}) = \det(L).$$

Example 2.7.5. As a general sort of example, let us see what the Frobenius canonical form for
$$A = \left[\begin{array}{cc} C_{q_1} & 0 \\ 0 & C_{q_2} \end{array}\right]$$
is when $q_1$ and $q_2$ are relatively prime. Note that if
$$0 = p(A) = \left[\begin{array}{cc} p(C_{q_1}) & 0 \\ 0 & p(C_{q_2}) \end{array}\right],$$
then both $q_1$ and $q_2$ divide $p$. Conversely, if $q_1$ and $q_2$ both divide $p$, it also follows that $p(A) = 0$. Since the least common multiple of $q_1$ and $q_2$ is $q_1 \cdot q_2$, we see that $\mu_A = q_1 \cdot q_2 = \chi_A$. Thus, $p_1 = q_1 \cdot q_2$. This shows that the Frobenius canonical form is simply $C_{q_1 \cdot q_2}$. The general case, where there might be a nontrivial greatest common divisor, is relegated to the exercises.

Example 2.7.6. We now give a few examples showing that the characteristic and minimal polynomials alone do not yield sufficient information to determine all the similarity invariants when the dimension is $\geq 4$ (see exercises for dimensions 2 and 3). We consider all possible canonical forms in dimension 4, where the characteristic polynomial is $t^4$.
There are four nontrivial cases given by:
$$\left[\begin{array}{cccc} 0&0&0&0 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{array}\right], \left[\begin{array}{cccc} 0&0&0&0 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{array}\right], \left[\begin{array}{cccc} 0&0&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{array}\right], \left[\begin{array}{cccc} 0&0&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \\ 0&0&1&0 \end{array}\right].$$
For the first, we know that $\mu = p_1 = t^4$. For the second, we have two blocks, where $\mu = p_1 = t^3$ and $p_2 = t$. For the third, we have $\mu = p_1 = t^2$, while $p_2 = p_3 = t$. Finally, the fourth has $\mu = p_1 = p_2 = t^2$. The last two matrices clearly do not have the same canonical form, but they do have the same characteristic and minimal polynomials.

Example 2.7.7. Lastly, let us compute the Frobenius canonical form for a projection $E : V \to V$. As we shall see, this is clearly a situation where we should just stick to diagonalization, as the Frobenius canonical form is far less informative. Apparently, we just need to find all possible Frobenius canonical forms that are also projections. The simplest are of course just $0_V$ and $1_V$. In all other cases, the minimal polynomial is $t^2 - t$. The companion matrix for that polynomial is
$$\left[\begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array}\right],$$
so we expect to have one or several of those blocks, but note that we cannot have more than $\left\lfloor \frac{\dim V}{2} \right\rfloor$ such blocks.
The rest of the diagonal entries must now correspond to companion matrices for either $t$ or $t - 1$. But we cannot use both, as these two polynomials do not divide each other. This gives us two types of Frobenius canonical forms:
$$\left[\begin{array}{cccccccc} 0&0&&&&&& \\ 1&1&&&&&& \\ &&\ddots&&&&& \\ &&&0&0&&& \\ &&&1&1&&& \\ &&&&&0&& \\ &&&&&&\ddots& \\ &&&&&&&0 \end{array}\right] \quad\text{or}\quad \left[\begin{array}{cccccccc} 0&0&&&&&& \\ 1&1&&&&&& \\ &&\ddots&&&&& \\ &&&0&0&&& \\ &&&1&1&&& \\ &&&&&1&& \\ &&&&&&\ddots& \\ &&&&&&&1 \end{array}\right].$$
To find the correct canonical form for $E$, we just select the Frobenius canonical form that gives us the correct rank. If $\operatorname{rank} E \leq \left\lfloor \frac{\dim V}{2} \right\rfloor$, it will be of the first type and otherwise of the second.

### 2.7.1 Exercises

1. What are the similarity invariants for a companion matrix $C_p$?

2. Let $A \in \mathrm{Mat}_{n\times n}(\mathbb{R})$, and $n \geq 2$.

   (a) Show that when $n$ is odd, then it is not possible to have $p_1(t) = t^2 + 1$.

   (b) Show by example that one can have $p_1(t) = t^2 + 1$ for all even $n$.

   (c) Show by example that one can have $p_1(t) = t^3 + t$ for all odd $n$.

3. If $L : V \to V$ is an operator on a 2-dimensional space, then either $p_1 = \mu_L = \chi_L$ or $L = \lambda 1_V$.

4.
If $L : V \to V$ is an operator on a 3-dimensional space, then either $p_1 = \mu_L = \chi_L$; $p_1 = (t - \alpha)(t - \beta)$ and $p_2 = (t - \beta)$; or $L = \lambda 1_V$. Note that in the second case you know that $p_1$ has degree 2; the key is to show that it factors as described.

5. Let $L : V \to V$ be a linear operator on a finite-dimensional space. Show that $V = C_x$ for some $x \in V$ if and only if $\mu_L = \chi_L$.

6. Show that the matrix
$$\left[\begin{array}{cccc} \lambda_1 & 1 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 1 \\ 0 & 0 & \cdots & \lambda_n \end{array}\right]$$
is similar to a companion matrix.

7. Let $L : V \to V$ be a linear operator on a finite-dimensional vector space such that $V = C_x$ for some $x \in V$. Show that all invariant subspaces for $L$ are of the form $C_z$ for some $z \in V$. Hint: This relies on showing that if an invariant subspace is not cyclic, then $\deg \mu_L < \dim V$.

8. Consider two companion matrices $C_p$ and $C_q$; show that the similarity invariants for the block diagonal matrix
$$\left[\begin{array}{cc} C_p & 0 \\ 0 & C_q \end{array}\right]$$
are $p_1 = \operatorname{lcm}(p, q)$ and $p_2 = \gcd(p, q)$. Hint: Use Propositions 2.1.4 and 2.1.5 to show that $p_1 \cdot p_2 = p \cdot q$.

9. Is it possible to find the similarity invariants for
$$\left[\begin{array}{ccc} C_p & 0 & 0 \\ 0 & C_q & 0 \\ 0 & 0 & C_r \end{array}\right]?$$
Note that you can easily find $p_1 = \operatorname{lcm}(p, q, r)$, so the issue is whether it is possible to decide what $p_2$ should be.

10.
Show that $A, B \in \mathrm{Mat}_{n\times n}(\mathbb{F})$ are similar if and only if $\operatorname{rank} p(A) = \operatorname{rank} p(B)$ for all $p \in \mathbb{F}[t]$. (Recall that two matrices have the same rank if and only if they are equivalent and that equivalent matrices certainly need not be similar. This is what makes the exercise interesting.)

11. The previous exercise can be made into a checkable condition: Show that $A, B \in \mathrm{Mat}_{n\times n}(\mathbb{F})$ are similar if and only if $\chi_A = \chi_B$ and $\operatorname{rank} p(A) = \operatorname{rank} p(B)$ for all $p$ that divide $\chi_A$. (Note that as $\chi_A$ has a unique prime factorization (see Theorem 2.1.7), this means that we only have to check a finite number of conditions.)

12. Show that any linear map with the property that
$$\chi_L(t) = (t - \lambda_1) \cdots (t - \lambda_n) \in \mathbb{F}[t]$$
for $\lambda_1, \ldots, \lambda_n \in \mathbb{F}$ has an upper triangular matrix representation.

13. Let $L : V \to V$ be a linear operator on a finite-dimensional vector space. Use the Frobenius canonical form to show that $\operatorname{tr}(L) = -\alpha_{n-1}$, where $\chi_L(t) = t^n + \alpha_{n-1}t^{n-1} + \cdots + \alpha_0$. This is the result mentioned in Proposition 2.3.11.

14. Assume that $L : V \to V$ satisfies $(L - \lambda_0 1_V)^k = 0$ for some $k > 1$, but $(L - \lambda_0 1_V)^{k-1} \neq 0$.
Show that $\ker(L - \lambda_0 1_V)$ is neither $\{0\}$ nor $V$. Show that $\ker(L - \lambda_0 1_V)$ does not have a complement in $V$ that is $L$-invariant.

15. (The Cayley-Hamilton Theorem) Show the Cayley-Hamilton theorem using the Frobenius canonical form.

## 2.8 The Jordan Canonical Form*

In this section, we present a proof of the Jordan canonical form. We start with a somewhat more general point of view that in the end is probably the most important feature of this special canonical form.

Theorem 2.8.1. (The Jordan-Chevalley Decomposition) Let $L : V \to V$ be a linear operator on an $n$-dimensional complex vector space. Then $L = S + N$, where $S$ is diagonalizable, $N^n = 0$, and $SN = NS$.

Proof. First, use the Fundamental Theorem of Algebra 2.1.8 to factor the minimal polynomial
$$\mu_L(t) = (t - \lambda_1)^{m_1} \cdots (t - \lambda_k)^{m_k},$$
where $\lambda_1, \ldots, \lambda_k$ are distinct. If we define
$$M_i = \ker\left((L - \lambda_i)^{m_i}\right),$$
then the proof of Lemma 2.5.6 together with Exercise 20 in Sect. 2.5 shows that
$$V = M_1 \oplus \cdots \oplus M_k.$$
We can now define
$$S|_{M_i} = \lambda_i 1_V|_{M_i} = \lambda_i 1_{M_i},$$
$$N|_{M_i} = (L - \lambda_i 1_V)|_{M_i} = L|_{M_i} - \lambda_i 1_{M_i}.$$
Clearly, $L = S + N$, $S$ is diagonalizable, and $SN = NS$. Finally, since $\mu_L(L) = 0$, it follows that $N^n = 0$.
$\square$

It is in fact possible to show that the Jordan-Chevalley decomposition is unique, i.e., the operators $S$ and $N$ are uniquely determined by $L$ (see exercises to this chapter). As a corollary of the above proof, we obtain:

Corollary 2.8.2. Let $C_p$ be a companion matrix with $p(t) = (t - \lambda)^n$. Then $C_p$ is similar to a Jordan block:
$$[J] = \left[\begin{array}{cccccc} \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & \cdots & 0 & 0 \\ 0 & 0 & \lambda & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ddots & 1 & 0 \\ \vdots & \vdots & \vdots & \cdots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{array}\right].$$
Moreover, the eigenspace for $\lambda$ is one-dimensional and is generated by the first basis vector.

We can now give a simple proof of the so-called Jordan canonical form. Interestingly, the famous analyst Weierstrass deserves equal credit, as he too proved the result at about the same time.

Theorem 2.8.3. (The Jordan-Weierstrass Canonical Form) Let $L : V \to V$ be a linear operator on a finite-dimensional complex vector space. Then, we can find $L$-invariant subspaces $M_1, \ldots, M_s$ such that
$$V = M_1 \oplus \cdots \oplus M_s$$
and each $L|_{M_i}$ has a matrix representation of the form
$$\left[\begin{array}{ccccc} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ 0 & 0 & \lambda_i & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 0 & 0 & 0 & \cdots & \lambda_i \end{array}\right],$$
where $\lambda_i$ is an eigenvalue for $L$.

Proof. First, we invoke the Jordan-Chevalley decomposition $L = S + N$.
Then, we decompose $V$ into eigenspaces for $S$:
$$V = \ker(S - \lambda_1 1_V) \oplus \cdots \oplus \ker(S - \lambda_k 1_V).$$
Each of these eigenspaces is invariant for $N$ since $S$ and $N$ commute. Specifically, if $S(x) = \lambda x$, then
$$S(N(x)) = N(S(x)) = N(\lambda x) = \lambda N(x),$$
showing that $N(x)$ is also an eigenvector for the eigenvalue $\lambda$. This reduces the problem to showing that operators of the form $\lambda 1_W + N$, where $N^n = 0$, have the desired decomposition. Since the homothety $\lambda \cdot 1_W$ is always diagonal in any basis, it then suffices to show the theorem holds for operators $N$ such that $N^n = 0$. The similarity invariants for such an operator all have to look like $t^k$, so the blocks in the Frobenius canonical form must look like
$$\left[\begin{array}{cccc} 0 & 0 & \cdots & 0 \\ 1 & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 1 & 0 \end{array}\right].$$
If $e_1, \ldots, e_k$ is the basis yielding this matrix representation, then
$$\begin{aligned} N\left[\begin{array}{ccc} e_1 & \cdots & e_k \end{array}\right] &= \left[\begin{array}{cccc} e_2 & \cdots & e_k & 0 \end{array}\right] \\ &= \left[\begin{array}{ccc} e_1 & \cdots & e_k \end{array}\right] \left[\begin{array}{cccc} 0 & 0 & \cdots & 0 \\ 1 & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 1 & 0 \end{array}\right]. \end{aligned}$$
Reversing the basis to $e_k, \ldots, e_1$ then gives us the desired block:
$$\begin{aligned} N\left[\begin{array}{ccc} e_k & \cdots & e_1 \end{array}\right] &= \left[\begin{array}{cccc} 0 & e_k & \cdots & e_2 \end{array}\right] \\ &= \left[\begin{array}{ccc} e_k & \cdots & e_1 \end{array}\right] \left[\begin{array}{cccc} 0 & 1 & \cdots & 0 \\ 0 & 0 & \ddots & \vdots \\ \vdots & & \ddots & 1 \\ 0 & & & 0 \end{array}\right]. \end{aligned}$$
$\square$

In this decomposition, it is possible for several of the subspaces $M_i$ to correspond to the same eigenvalue. Given that the eigenspace for each Jordan block is one-dimensional, we see that each eigenvalue corresponds to as many blocks as the geometric multiplicity of the eigenvalue. The job of calculating the Jordan canonical form is in general quite hard. Here we confine ourselves to the 2- and 3-dimensional situations.

Corollary 2.8.4. Let $L : V \to V$ be a complex linear operator where $\dim V = 2$. Either $L$ is diagonalizable and there is a basis where
$$[L] = \left[\begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \end{array}\right],$$
or $L$ is not diagonalizable and there is a basis where
$$[L] = \left[\begin{array}{cc} \lambda & 1 \\ 0 & \lambda \end{array}\right].$$
Note that in case $L$ is diagonalizable, we either have that $L = \lambda 1_V$ or that the eigenvalues are distinct. In the nondiagonalizable case, there is only one eigenvalue.

Corollary 2.8.5. Let $L : V \to V$ be a complex linear operator where $\dim V = 3$.
Either $L$ is diagonalizable and there is a basis where
$$[L] = \left[\begin{array}{ccc} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{array}\right],$$
or $L$ is not diagonalizable and there is a basis where one of the following two situations occurs:
$$[L] = \left[\begin{array}{ccc} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_2 \end{array}\right] \quad\text{or}\quad [L] = \left[\begin{array}{ccc} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{array}\right].$$

Remark 2.8.6. It is possible to check which of these situations occurs by knowing the minimal and characteristic polynomials. We note that the last case happens precisely when there is only one eigenvalue with geometric multiplicity 1. The second case happens if either $L$ has two eigenvalues, each with geometric multiplicity 1, or if $L$ has one eigenvalue with geometric multiplicity 2.

### 2.8.1 Exercises

1. Find the Jordan canonical forms for the matrices
$$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array}\right], \left[\begin{array}{cc} 1 & 1 \\ 0 & 2 \end{array}\right], \left[\begin{array}{cc} 2 & -1 \\ 4 & -2 \end{array}\right].$$

2. Find the basis that yields the Jordan canonical form for
$$\left[\begin{array}{cc} \lambda & -1 \\ \lambda^2 & -\lambda \end{array}\right].$$

3.
Find the Jordan canonical form for the matrix
$$\left[\begin{array}{cc} \lambda_1 & 1 \\ 0 & \lambda_2 \end{array}\right].$$
Hint: The answer depends on $\lambda_1$ and $\lambda_2$.

4. Find the Jordan canonical form for the matrix
$$\left[\begin{array}{cc} 0 & 1 \\ -\lambda_1\lambda_2 & \lambda_1 + \lambda_2 \end{array}\right].$$

5. Find the Jordan canonical form for the matrix
$$\left[\begin{array}{ccc} \lambda^2 & -2\lambda & 1 \\ \lambda^3 & -2\lambda^2 & \lambda \\ \lambda^4 & -2\lambda^3 & \lambda^2 \end{array}\right].$$

6. Find the Jordan canonical form for the matrix
$$\left[\begin{array}{ccc} \lambda_1 & 1 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_3 \end{array}\right].$$

7. Find the Jordan canonical form for the matrix
$$\left[\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \lambda_1\lambda_2\lambda_3 & -(\lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_1\lambda_3) & \lambda_1 + \lambda_2 + \lambda_3 \end{array}\right].$$

8. Find the Jordan canonical forms for the matrices
$$\left[\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & -5 & 4 \end{array}\right], \left[\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -3 & 3 \end{array}\right], \left[\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & -11 & 6 \end{array}\right].$$

9.
An operator $L : V \to V$ on an $n$-dimensional vector space over any field is said to be nilpotent if $L^k = 0$ for some $k$.

   (a) Show that $\chi_L(t) = t^n$.

   (b) Show that $L$ can be put in triangular form.

   (c) Show that $L$ is diagonalizable if and only if $L = 0$.

   (d) Find a real matrix all of whose real eigenvalues are 0 but which is not nilpotent.

10. Let $L : V \to V$ be a linear operator on an $n$-dimensional complex vector space. Show that for $p \in \mathbb{C}[t]$, the operator $p(L)$ is nilpotent if and only if the eigenvalues of $L$ are roots of $p$. What goes wrong with this statement in the real case when $p(t) = t^2 + 1$ and $\dim V$ is odd?

11. Show that if
$$\ker\left((L - \lambda 1_V)^k\right) \neq \ker\left((L - \lambda 1_V)^{k-1}\right),$$
then the algebraic multiplicity of $\lambda$ is $\geq k$. Give an example where the algebraic multiplicity is $> k$ and
$$\ker\left((L - \lambda 1_V)^{k+1}\right) = \ker\left((L - \lambda 1_V)^k\right) \neq \ker\left((L - \lambda 1_V)^{k-1}\right).$$

12. Show that if $L : V \to V$ is a linear operator such that
$$\begin{aligned} \chi_L(t) &= (t - \lambda_1)^{n_1} \cdots (t - \lambda_k)^{n_k}, \\ \mu_L(t) &= (t - \lambda_1)^{m_1} \cdots (t - \lambda_k)^{m_k}, \end{aligned}$$
then $m_i$ corresponds to the largest Jordan block that has $\lambda_i$ on the diagonal. Using that, show that $m_i$ is the first integer such that
$$\ker\left((L - \lambda_i 1_V)^{m_i}\right) = \ker\left((L - \lambda_i 1_V)^{m_i + 1}\right).$$

13.
Show that if $L : V \to V$ is a linear operator on an $n$-dimensional complex vector space with distinct eigenvalues $\lambda_1, \ldots, \lambda_k$, then $p(L) = 0$, where
$$p(t) = (t - \lambda_1)^{n-k+1} \cdots (t - \lambda_k)^{n-k+1}.$$
Hint: Try $k = 2$.

14. Assume that $L = S + N = S' + N'$ are Jordan-Chevalley decompositions, i.e., $SN = NS$, $S'N' = N'S'$, both $S$ and $S'$ are diagonalizable, and $N^n = N'^n = 0$, where $n$ is the dimension of the vector space.

   (a) Show that $S$ and $N$ commute with $L$.

   (b) Show that $L$ and $S$ have the same eigenvalues.

   (c) Show that if $\lambda$ is an eigenvalue for $L$, then
$$\ker\left((L - \lambda 1_V)^n\right) = \ker\left(S - \lambda 1_V\right).$$

   (d) Show that the Jordan-Chevalley decomposition is unique.

   (e) Find polynomials $p, q$ such that $S = p(L)$ and $N = q(L)$.

## 2.9 The Smith Normal Form*

In this section, we show that the row reduction method we developed in Sect. 2.3 to compute the characteristic polynomial can be enhanced to give a direct method for computing similarity invariants, provided we also use column operations in addition to row operations. Let $\mathrm{Mat}_{n\times n}(\mathbb{F}[t])$ be the set of all $n \times n$ matrices that have entries in $\mathbb{F}[t]$. The operations we allow are those coming from multiplying by the elementary matrices: $I_{kl}$, $R_{kl}(r(t))$, and $M_k(\alpha)$. Recall that we used these operations to compute the characteristic polynomial in Sect. 2.3. When multiplied on the left, these matrices have the effect of:

* $I_{kl}$ interchanges rows $k$ and $l$.
* $R_{kl}(r(t))$ multiplies row $l$ by $r(t) \in \mathbb{F}[t]$ and adds it to row $k$.
* $M_k(\alpha)$ multiplies row $k$ by $\alpha \in \mathbb{F} \setminus \{0\}$.

While when multiplied on the right:

* $I_{kl}$ interchanges columns $k$ and $l$.
* $R_{kl}(r(t))$ multiplies column $k$ by $r(t) \in \mathbb{F}[t]$ and adds it to column $l$.
* $M_k(\alpha)$ multiplies column $k$ by $\alpha \in \mathbb{F} \setminus \{0\}$.

Define
$$Gl_n(\mathbb{F}[t]) \subset \mathrm{Mat}_{n\times n}(\mathbb{F}[t])$$
as the set of all matrices $P$ such that we can find an inverse $Q \in \mathrm{Mat}_{n\times n}(\mathbb{F}[t])$, i.e., $PQ = QP = 1_{\mathbb{F}^n}$. As for regular matrices (see Theorem 1.13.14), we obtain:

Proposition 2.9.1. The elementary matrices generate $Gl_n(\mathbb{F}[t])$.

Proof. The elementary matrices $I_{kl}$, $R_{kl}(r(t))$, and $M_k(\alpha)$ all lie in $Gl_n(\mathbb{F}[t])$ as they have inverses given by $I_{kl}$, $R_{kl}(-r(t))$, and $M_k(\alpha^{-1})$, respectively. Let $P \in Gl_n(\mathbb{F}[t])$; then $P^{-1} \in Gl_n(\mathbb{F}[t])$.
Now perform row operations on P − 1 as in Theorem 2.3.6 until we obtain an upper triangular matrix: ![ $$U = \\left \[\\begin{array}{cclc} {p}_{1}\\left \(t\\right \)& {_\\ast} &\\cdots & {_\\ast} \\\\ 0 &{p}_{2}\\left \(t\\right \)&\\cdots & {_\\ast}\\\\ \\vdots & \\vdots &\\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{p}_{n}\\left \(t\\right \) \\end{array} \\right \] \\in G{l}_{n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_Equkc.gif) However, the upper triangular matrix U cannot be invertible unless its diagonal entries are nonzero scalars. Thus, we can assume that p i = 1. We can then perform row operations to eliminate all entries above the diagonal as well. This shows that there is a matrix Q that is a product of elementary matrices and such that ![ $$Q{P}^{-1} = {1}_{{ \\mathbb{F}}^{n}}.$$ ](A81414_1_En_2_Chapter_Equkd.gif) Multiplying by P on the right on both sides then shows that ![ $$P = Q$$ ](A81414_1_En_2_Chapter_Equke.gif) which in turn proves our claim. □ We can now explain how far it is possible to reduce a matrix with polynomials as entries using row and column operations. As with regular matrices, we obtain a diagonal form (see Corollary 1.13.19), but the diagonal entries have a more complicated relationship between each other. Theorem 2.9.2. 
(The Smith Normal Form) Let ![ $$C \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq398.gif) , then we can find ![ $$P,Q \\in G{l}_{n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq399.gif) such that ![ $$PCQ = \\left \[\\begin{array}{cccc} {q}_{1}\\left \(t\\right \)& 0 &\\cdots & 0 \\\\ 0 &{q}_{2}\\left \(t\\right \)&\\cdots & 0\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{q}_{n}\\left \(t\\right \) \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equkf.gif) where q i t ∈ ![ $$\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq400.gif) divides q i+1 t and q 1 t,...,q n t are monic if they are nonzero. Moreover, with these conditions, q 1 t,...,q n t are unique. Proof. Note that having found the diagonal form, we can always make the nonzero polynomials monic so we are not going to worry about that issue. We start by giving a construction for finding ![ $$P,Q \\in G{l}_{n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq401.gif) such that ![ $$PCQ = \\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equkg.gif) where ![ $$D \\in \\mathrm{{ Mat}}_{\\left \(n-1\\right \)\\times \\left \(n-1\\right \)}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq402.gif) and q 1 t divides all of the entries in D. If C = 0, there is nothing to prove so assume C≠0. Step 1: Use row and column interchanges until the ![ $$\\left \(1,1\\right \)$$ ](A81414_1_En_2_Chapter_IEq403.gif) entry is the entry with the lowest degree among all nonvanishing entries. Step 2: For each entry p 1j , j > 1 in the first row, write it as p 1j = s 1j p 11 + r 1j where degr 1j < degp 11 and apply the column operation CR 1j − s 1j so that the ![ $$\\left \(1,j\\right \)$$ ](A81414_1_En_2_Chapter_IEq404.gif) entry becomes r 1j . 
Step 3: For each entry p i1, i > 1 in the first column, write it as p i1 = s i1 p 11 + r i1 where degr i1 < degp 11 and apply the row operation R i1 − s i1 C so that the ![ $$\\left \(i,1\\right \)$$ ](A81414_1_En_2_Chapter_IEq405.gif) entry becomes r i1. Step 4: If some nonzero entry has degree < degp 11, go back to Step 1. Otherwise, go to Step 5. Step 5: We know that p 11 is the only nonzero entry in the first row and column and all other nonzero entries have degree ≥ degp 11. If p 11 divides all entries, we have the desired form. Otherwise, use the column operation CR i11 for some i > 1 to obtain a matrix where the first column has non-zero entries of degree ≥ degp 11 and go back to Step 3. This process will terminate in a finite number of steps and yields ![ $$PCQ = \\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equkh.gif) where ![ $$D \\in \\mathrm{{ Mat}}_{\\left \(n-1\\right \)\\times \\left \(n-1\\right \)}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq406.gif) and q 1 t divides all of the entries in D. To obtain the desired diagonal form, we can repeat this process with D or use induction on n. Next, we have to check uniqueness of the diagonal entries. We concentrate on showing uniqueness of q 1 t and q 2 t. Note that if ![ $$C = {P}^{-1}\\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0 \\\\ 0 &D \\end{array} \\right \]{Q}^{-1}$$ ](A81414_1_En_2_Chapter_Equki.gif) and q 1 t divides the entries in D, then it also divides all entries in C. Conversely, the relationship ![ $$PCQ = \\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equkj.gif) shows that any polynomial that divides all of the entries in C must in particular divide q 1 t. 
This implies that q 1 t is the greatest common divisor of the entries in C, i.e., the monic polynomial of highest degree which divides all of the entries in C. Thus, q 1 t is uniquely defined. To see that q 2 t is also uniquely defined, we need to show that if C is equivalent to both ![ $$\\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \]\\text{ and }\\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0 \\\\ 0 &{D}^{{\\prime}} \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equkk.gif) then D and D ′ have the same greatest common divisors for their entries. It suffices to show that the greatest common divisor q for all entries in D divides all entries in D ′ . To show this, first, observe that ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0 \\\\ 0 &{D}^{{\\prime}} \\end{array} \\right \]& =& P\\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \]Q \\\\ & =& \\left \[\\begin{array}{cc} {P}_{11} & {P}_{12} \\\\ {P}_{21} & {P}_{22} \\end{array} \\right \]\\left \[\\begin{array}{cc} {q}_{1}\\left \(t\\right \)& 0\\\\ 0 &D \\end{array} \\right \]\\left \[\\begin{array}{cc} {Q}_{11} & {Q}_{12} \\\\ {Q}_{21} & {Q}_{22} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} {P}_{11}{q}_{1}\\left \(t\\right \)&{P}_{12}D \\\\ {P}_{21}{q}_{1}\\left \(t\\right \)&{P}_{22}D \\end{array} \\right \]\\left \[\\begin{array}{cc} {Q}_{11} & {Q}_{12} \\\\ {Q}_{21} & {Q}_{22} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} {P}_{11}{q}_{1}\\left \(t\\right \){Q}_{11} + {P}_{12}D{Q}_{21} & {P}_{11}{q}_{1}\\left \(t\\right \){Q}_{12} + {P}_{12}D{Q}_{22} \\\\ {P}_{21}{q}_{1}\\left \(t\\right \){Q}_{11} + {P}_{22}D{Q}_{21} & {P}_{21}{q}_{1}\\left \(t\\right \){Q}_{12} + {P}_{22}D{Q}_{22} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_2_Chapter_Equ106.gif) As q 1 divides all entries of D, we have that q = pq 1. 
Looking at the relationship ![ $${D}^{{\\prime}} = {P}_{ 21}{q}_{1}\\left \(t\\right \){Q}_{12} + {P}_{22}D{Q}_{22},$$ ](A81414_1_En_2_Chapter_Equkl.gif) we observe that if p divides all of the entries in, say, P 21, then q will divide all entries in P 21 q 1 tQ 12 and consequently also in D ′ . To show that p divides the entries in P 21, we use the relationship ![ $${q}_{1} = {P}_{11}{q}_{1}\\left \(t\\right \){Q}_{11} + {P}_{12}D{Q}_{21}.$$ ](A81414_1_En_2_Chapter_Equkm.gif) As every element in D is a multiple of q 1, we can write it as D = q 1 E, where p divides every entry in E. This gives us ![ $$1 = {P}_{11}{Q}_{11} + {P}_{12}E{Q}_{21}.$$ ](A81414_1_En_2_Chapter_Equkn.gif) Thus, p divides 1 − P 11 Q 11, which implies that p and Q 11 are relatively prime. The relationship ![ $$0 = {P}_{21}{q}_{1}\\left \(t\\right \){Q}_{11} + {P}_{22}D{Q}_{21}.$$ ](A81414_1_En_2_Chapter_Equko.gif) in turn shows that p divides the entries in P 21 Q 11. As p and Q 11 are relatively prime, this shows that p divides the entries in P 21. □ Example 2.9.3. If ![ $$A = \\lambda {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_2_Chapter_IEq407.gif), then ![ $$t{1}_{{\\mathbb{F}}^{n}} - A = \\left \(t - \\lambda \\right \){1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_2_Chapter_IEq408.gif) is already in diagonal form. Thus, we see that q 1 t = ⋯ = q n t = t − λ. The Smith normal form can be used to give a very effective way of solving fairly complicated systems of higher order linear differential equations. 
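Before turning to that application, the reduction of Steps 1–5 can be sketched computationally. The following is a minimal implementation over ℚ[t] using sympy; it returns only the diagonal matrix of invariant factors (the transforming matrices P and Q are not tracked), and the function name `smith_normal_form_poly` is our own.

```python
import sympy as sp

t = sp.symbols('t')

def smith_normal_form_poly(M):
    """Diagonalize a square matrix over Q[t] by the row/column reduction
    of Steps 1-5; returns the diagonal matrix with monic invariant factors."""
    A = sp.Matrix(M).applyfunc(sp.expand)
    n = A.rows
    for k in range(n):
        while True:
            # Step 1: bring a nonzero entry of lowest degree to position (k, k).
            nz = [(sp.degree(A[i, j], t), i, j)
                  for i in range(k, n) for j in range(k, n) if A[i, j] != 0]
            if not nz:
                return A                      # the remaining block is zero
            _, i, j = min(nz)
            A.row_swap(k, i)
            A.col_swap(k, j)
            # Steps 2-3: replace the other entries of row k and column k
            # by their remainders on division by the pivot.
            for j2 in range(k + 1, n):
                q, _ = sp.div(A[k, j2], A[k, k], t)
                A[:, j2] = sp.expand(A[:, j2] - q * A[:, k])
            for i2 in range(k + 1, n):
                q, _ = sp.div(A[i2, k], A[k, k], t)
                A[i2, :] = sp.expand(A[i2, :] - q * A[k, :])
            # Step 4: a surviving remainder has smaller degree; repick the pivot.
            if any(A[k, j2] != 0 for j2 in range(k + 1, n)) or \
               any(A[i2, k] != 0 for i2 in range(k + 1, n)):
                continue
            # Step 5: the pivot must divide every remaining entry; if not,
            # add the offending column to column k and reduce again.
            bad = [j2 for i2 in range(k + 1, n) for j2 in range(k + 1, n)
                   if sp.rem(A[i2, j2], A[k, k], t) != 0]
            if bad:
                A[:, k] = sp.expand(A[:, k] + A[:, bad[0]])
                continue
            A[k, k] = sp.expand(A[k, k] / sp.LC(A[k, k], t))  # make the pivot monic
            break
    return A

# A 2x2 example with (t - 1), (t - 2), (t - 3) factors as in the text:
C = sp.Matrix([[(t - 1)*(t - 2)*(t - 3), (t - 1)*(t - 2)],
               [(t - 1)*(t - 3),          (t - 1)]])
print(smith_normal_form_poly(C))   # Matrix([[t - 1, 0], [0, 0]])
```

Termination follows the argument in the proof: each return to Step 1 either keeps the pivot or strictly lowers its degree, which can happen only finitely often.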
If we start with ![ $$C \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq409.gif), then we can create a system of n differential equations for the functions x 1,..., x n by interpreting the variable t in the polynomials as the derivative D: ![ $$\\begin{array}{rcl} Cx& =& \\left \[\\begin{array}{cccc} {p}_{11}\\left \(D\\right \) & {p}_{12}\\left \(D\\right \) &\\cdots & {p}_{1n}\\left \(D\\right \) \\\\ {p}_{21}\\left \(D\\right \) & {p}_{22}\\left \(D\\right \) &\\cdots & {p}_{2n}\\left \(D\\right \)\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ {p}_{n1}\\left \(D\\right \)&{p}_{n2}\\left \(D\\right \)&\\cdots &{p}_{nn}\\left \(D\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {x}_{1} \\\\ {x}_{2}\\\\ \\vdots \\\\ {x}_{n} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {p}_{11}\\left \(D\\right \){x}_{1} + {p}_{12}\\left \(D\\right \){x}_{2} + \\cdots+ {p}_{1n}\\left \(D\\right \){x}_{n} \\\\ {p}_{21}\\left \(D\\right \){x}_{1} + {p}_{22}\\left \(D\\right \){x}_{2} + \\cdots+ {p}_{2n}\\left \(D\\right \){x}_{n}\\\\ \\vdots \\\\ {p}_{n1}\\left \(D\\right \){x}_{1} + {p}_{n2}\\left \(D\\right \){x}_{2} + \\cdots+ {p}_{nn}\\left \(D\\right \){x}_{n} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {b}_{1} \\\\ {b}_{2}\\\\ \\vdots \\\\ {b}_{n}\\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ107.gif) where b 1,..., b n are given functions. 
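The interpretation of t as the derivative D is purely mechanical. A small sympy sketch (the helper name `apply_operator` is our own) shows how a single entry p(D) acts on a function; the matrix system above applies this entrywise:

```python
import sympy as sp

t = sp.symbols('t')

def apply_operator(p, f):
    """Apply p(D) to f(t), reading each power t**k in p as the k-th derivative."""
    terms = sp.Poly(sp.expand(p), t).terms()   # list of ((k,), coefficient)
    return sp.expand(sum(c * sp.diff(f, t, k) for (k,), c in terms))

# (D - 1)(D - 2) applied to e^{3t} gives (3 - 1)(3 - 2) e^{3t} = 2 e^{3t}:
print(apply_operator((t - 1)*(t - 2), sp.exp(3*t)))   # 2*exp(3*t)
```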
To solve such a system, we use the Smith normal form ![ $$PCQ = \\left \[\\begin{array}{cccc} {q}_{1}\\left \(t\\right \)& 0 &\\cdots & 0 \\\\ 0 &{q}_{2}\\left \(t\\right \)&\\cdots & 0\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{q}_{n}\\left \(t\\right \) \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equkp.gif) and define ![ $$\\left \[\\begin{array}{c} {y}_{1} \\\\ {y}_{2}\\\\ \\vdots \\\\ {y}_{n} \\end{array} \\right \] = {Q}^{-1}\\left \[\\begin{array}{c} {x}_{1}\\\\ {x}_{2}\\\\ \\vdots \\\\ {x}_{n} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equkq.gif) ![ $$\\left \[\\begin{array}{c} {c}_{1} \\\\ {c}_{2}\\\\ \\vdots \\\\ {c}_{n}\\end{array} \\right \] = P\\left \[\\begin{array}{c} {b}_{1} \\\\ {b}_{2}\\\\ \\vdots \\\\ {b}_{n}\\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equkr.gif) We then start by solving the decoupled system ![ $$\\left \[\\begin{array}{cccc} {q}_{1}\\left \(D\\right \)& 0 &\\cdots & 0 \\\\ 0 &{q}_{2}\\left \(D\\right \)&\\cdots & 0\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{q}_{n}\\left \(D\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {y}_{1} \\\\ {y}_{2}\\\\ \\vdots \\\\ {y}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{c} {c}_{1} \\\\ {c}_{2}\\\\ \\vdots \\\\ {c}_{n}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equks.gif) which is really just n independent higher-order equations ![ $$\\begin{array}{rcl} {q}_{1}\\left \(D\\right \){y}_{1}& =& {c}_{1} \\\\ {q}_{2}\\left \(D\\right \){y}_{2}& =& {c}_{2} \\\\ & \\vdots & \\\\ {q}_{n}\\left \(D\\right \){y}_{n}& =& {c}_{n} \\\\ \\end{array}$$ ](A81414_1_En_2_Chapter_Equ108.gif) and then we find the original functions by ![ $$\\left \[\\begin{array}{c} {x}_{1} \\\\ {x}_{2}\\\\ \\vdots \\\\ {x}_{n} \\end{array} \\right \] = Q\\left \[\\begin{array}{c} {y}_{1} \\\\ {y}_{2}\\\\ \\vdots \\\\ {y}_{n} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equkt.gif) This use of the Smith normal form is similarly very
effective in solving systems of linear recurrences (recurrences were discussed at the end of Sect. 2.2 in relation to solving higher order differential equations). Example 2.9.4. Consider the 2 ×2 system of differential equations that comes from ![ $$C = \\left \[\\begin{array}{cc} \\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{2}\\right \)\\left \(t - {\\lambda }_{3}\\right \)&\\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{2}\\right \) \\\\ \\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{3}\\right \) & \\left \(t - {\\lambda }_{1}\\right \) \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equku.gif) i.e., ![ $$\\left \[\\begin{array}{cc} \\left \(D - {\\lambda }_{1}\\right \)\\left \(D - {\\lambda }_{2}\\right \)\\left \(D - {\\lambda }_{3}\\right \)&\\left \(D - {\\lambda }_{1}\\right \)\\left \(D - {\\lambda }_{2}\\right \) \\\\ \\left \(D - {\\lambda }_{1}\\right \)\\left \(D - {\\lambda }_{3}\\right \) & \\left \(D - {\\lambda }_{1}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {x}_{1} \\\\ {x}_{2}\\end{array} \\right \] = \\left \[\\begin{array}{c} {b}_{1} \\\\ {b}_{2} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equkv.gif) Here ![ $${R}_{21}\\left \(-\\left \(t - {\\lambda }_{2}\\right \)\\right \){I}_{12}C{I}_{12}{R}_{12}\\left \(-\\left \(t - {\\lambda }_{3}\\right \)\\right \) = \\left \[\\begin{array}{cc} t - {\\lambda }_{1} & 0\\\\ 0 &0 \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equkw.gif) So we have to start by solving ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} t - {\\lambda }_{1} & 0\\\\ 0 &0 \\end{array} \\right \]\\left \[\\begin{array}{c} {y}_{1} \\\\ {y}_{2}\\end{array} \\right \]& =& {R}_{21}\\left \(-\\left \(t - {\\lambda }_{2}\\right \)\\right \){I}_{12}\\left \[\\begin{array}{c} {b}_{1} \\\\ {b}_{2} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} {b}_{2} \\\\ {b}_{1} -\\left \(D - {\\lambda }_{2}\\right \){b}_{2} \\end{array} \\right \] \\\\ 
\\end{array}$$ ](A81414_1_En_2_Chapter_Equ109.gif) In order to solve that system, we have to require that b 1, b 2 are related by ![ $$\\left \(D - {\\lambda }_{2}\\right \){b}_{2} = D{b}_{2} - {\\lambda }_{2}{b}_{2} = {b}_{1}.$$ ](A81414_1_En_2_Chapter_Equkx.gif) If that is the case, then y 2 can be any function and y 1 is found by solving ![ $$D{y}_{1} - {\\lambda }_{1}{y}_{1} = {b}_{2}.$$ ](A81414_1_En_2_Chapter_Equky.gif) We then find the original solutions from ![ $$\\left \[\\begin{array}{c} {x}_{1} \\\\ {x}_{2}\\end{array} \\right \] = {I}_{12}{R}_{12}\\left \(-\\left \(t - {\\lambda }_{3}\\right \)\\right \)\\left \[\\begin{array}{c} {y}_{1} \\\\ {y}_{2}\\end{array} \\right \] = \\left \[\\begin{array}{c} {y}_{2} \\\\ {y}_{1} -\\left \(D{y}_{2} - {\\lambda }_{3}{y}_{2}\\right \) \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equkz.gif) Definition 2.9.5. The monic polynomials q 1 t,..., q n t are called the invariant factors of ![ $$C \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq410.gif). Note that some of these polynomials can vanish: q k + 1 t = ⋯ = q n t = 0. In case ![ $$C = t{1}_{{\\mathbb{F}}^{n}} - A,$$ ](A81414_1_En_2_Chapter_IEq411.gif) where ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq412.gif), the invariant factors are related to the similarity invariants of A that we defined in Sect. 2.7. Before proving this, we need to gain a better understanding of the invariant factors. Proposition 2.9.6. The invariant factors of ![ $$t{1}_{{\\mathbb{F}}^{n}} - A$$ ](A81414_1_En_2_Chapter_IEq413.gif) and ![ $$t{1}_{{\\mathbb{F}}^{n}} - {A}^{{\\prime}}$$ ](A81414_1_En_2_Chapter_IEq414.gif) are the same if A and A ′ are similar. Proof. 
Assume that A = BA ′ B − 1, then ![ $$t{1}_{{\\mathbb{F}}^{n}} - A = B\\left \(t{1}_{{\\mathbb{F}}^{n}} - {A}^{{\\prime}}\\right \){B}^{-1}.$$ ](A81414_1_En_2_Chapter_Equla.gif) In particular, ![ $$t{1}_{{\\mathbb{F}}^{n}} - A$$ ](A81414_1_En_2_Chapter_IEq415.gif) and ![ $$t{1}_{{\\mathbb{F}}^{n}} - {A}^{{\\prime}}$$ ](A81414_1_En_2_Chapter_IEq416.gif) are equivalent. Since the Smith normal form is unique, this shows that they have the same Smith normal form. □ This proposition allows us to define the invariant factors of a linear operator L : V -> V on a finite-dimensional vector space by computing the invariant factors of ![ $$t{1}_{{\\mathbb{F}}^{n}} -\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq417.gif) for any matrix representation ![ $$\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq418.gif) of L. Next, we check what happens for companion matrices. Proposition 2.9.7. The invariant factors of ![ $$t{1}_{{\\mathbb{F}}^{n}} - {C}_{p}$$ ](A81414_1_En_2_Chapter_IEq419.gif), where Cp is the companion matrix for a monic polynomial p of degree n, are given by q1 = ⋯ = qn−1 = 1 and qn = p. Proof.
Let C p be the companion matrix for ![ $$p\\left \(t\\right \) = {t}^{n} + {\\alpha }_{n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{1}t + {\\alpha }_{0} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq420.gif), i.e., ![ $${C}_{p} = \\left \[\\begin{array}{ccccc} 0&0&\\cdots &0& - {\\alpha }_{0} \\\\ 1&0&\\cdots &0& - {\\alpha }_{1} \\\\ 0&1&\\cdots &0& - {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0&0&\\cdots &1& - {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equlb.gif) Then, ![ $$t{1}_{{\\mathbb{F}}^{n}}-{C}_{p} = \\left \[\\begin{array}{ccccc} t & 0 &\\cdots & 0 & {\\alpha }_{0} \\\\ - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2}\\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 &\\cdots & - 1&t + {\\alpha }_{n-1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equlc.gif) We know from Proposition 2.6.3 that ![ $$t{1}_{{\\mathbb{F}}^{n}} - {C}_{p}$$ ](A81414_1_En_2_Chapter_IEq421.gif) is row equivalent to a matrix of the form ![ $$\\left \[\\begin{array}{ccccc} - 1& t &\\cdots & 0 & {\\alpha }_{1} \\\\ 0 & - 1&\\cdots & 0 & {\\alpha }_{2} \\\\ 0 & 0 & \\ddots & \\vdots & \\vdots\\\\ \\vdots & \\vdots & & - 1 & t + {\\alpha }_{ n-1} \\\\ 0 & 0 &\\cdots & 0 &{t}^{n} + {\\alpha }_{n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{1}t + {\\alpha }_{0} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equld.gif) We can change the − 1 diagonal entries to 1. We can then use column operations to eliminate the t entries to the right of the diagonal entries in columns 2,..., n − 1 as well as the entries in the last column that are in the rows 1,..., n − 1. This results in the equivalent diagonal matrix ![ $$\\left \[\\begin{array}{cccc} 1& 0 &\\cdots &0\\\\ 0 & \\ddots & & \\vdots \\\\ \\vdots & & 1 &0\\\\ 0 &\\cdots& 0 &p \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equle.gif) This must be the Smith normal form. 
□ We can now show how the similarity invariant of a linear operator can be computed using the Smith normal form. Theorem 2.9.8. Let L : V -> V be a linear operator on a finite-dimensional vector space ![ $${p}_{1},\\ldots,{p}_{k} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq422.gif) the similarity invariants, and ![ $${q}_{1},\\ldots,{q}_{n} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq423.gif) the invariant factors of ![ $$t{1}_{{\\mathbb{F}}^{n}} -\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq424.gif) . Then, q n−i = p i+1 for i = 0,...,k − 1 and q j = 1 for j = 1,...,n − k. Proof. We start by selecting the Frobenius canonical form ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cccc} {C}_{{p}_{1}} & 0 && 0 \\\\ 0 &{C}_{{p}_{2}}\\\\ & & \\ddots \\\\ 0 & &&{C}_{{p}_{k}} \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equlf.gif) as the matrix representation for L. The previous proposition gives the invariant factors of the blocks ![ $$t{1}_{{\\mathbb{F}}^{\\mathrm{deg}{p}_{i}}} - {C}_{{p}_{i}}$$ ](A81414_1_En_2_Chapter_IEq425.gif). 
This tells us that if we only do row and column operations that respect the block diagonal form, then ![ $$t{1}_{{\\mathbb{F}}^{n}} -\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq426.gif) is equivalent to a block diagonal matrix ![ $$C = \\left \[\\begin{array}{cccc} {C}_{1} & 0 && 0 \\\\ 0 &{C}_{2}\\\\ & & \\ddots \\\\ 0 & &&{C}_{k} \\end{array} \\right \],$$ ](A81414_1_En_2_Chapter_Equlg.gif) where C i is a diagonal matrix ![ $${C}_{i} = \\left \[\\begin{array}{cccc} 1&0&& 0\\\\ 0 &1\\\\ & &\\ddots \\\\ 0& &&{p}_{i} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equlh.gif) We can now perform row and column interchanges on C to obtain the diagonal matrix ![ $$\\left \[\\begin{array}{cccccc} 1\\\\ &\\ddots\\\\ &&1 \\\\ && &{p}_{k}\\\\ & & & &\\ddots \\\\ && & &&{p}_{1} \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equli.gif) Since the Smith normal form is unique and the Frobenius normal form has the property that p i + 1 divides p i for i = 1,...k − 1 we have obtained the Smith normal form for ![ $$t{1}_{{\\mathbb{F}}^{n}} -\\left \[L\\right \]$$ ](A81414_1_En_2_Chapter_IEq427.gif) and proven the claim. □ ### 2.9.1 Exercises 1. 
Find the Smith normal form for the matrices (a) ![ $$\\left \[\\begin{array}{cc} \\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{2}\\right \)\\left \(t - {\\lambda }_{3}\\right \)&\\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{2}\\right \) \\\\ \\left \(t - {\\lambda }_{1}\\right \)\\left \(t - {\\lambda }_{2}\\right \) & \\left \(t - {\\lambda }_{1}\\right \)\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equlj.gif) (b) ![ $$\\left \[\\begin{array}{ccc} t - 1 &{t}^{2} + t - 2& t - 1 \\\\ {t}^{3} + {t}^{2} - 4t + 2& 0 &{t}^{2} - t\\\\ t - 1 & 0 & t - 1 \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equlk.gif) (c) ![ $$\\left \[\\begin{array}{cccc} - t& 0 & 0 & 0\\\\ 1 & - t & 0 & 0 \\\\ 0 & 0 & - t& 0\\\\ 0 & 0 & 0 & -t \\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equll.gif) (d) ![ $$\\left \[\\begin{array}{cccc} - t& 0 & 0 & 0\\\\ 1 & - t & 0 & 0 \\\\ 0 & 0 & - t& 0\\\\ 0 & 0 & 1 & -t \\end{array} \\right \].$$ ](A81414_1_En_2_Chapter_Equlm.gif) 2. Show that if ![ $$C \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq428.gif) has a k ×k minor that belongs to ![ $$G{l}_{k}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \),$$ ](A81414_1_En_2_Chapter_IEq429.gif) then q 1 t = ⋯ = q k t = 1. A k ×k minor is a k ×k matrix that is obtained from C by deleting all but k columns and k rows. 3. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq430.gif) and consider the two linear operators ![ $${L}_{A},{R}_{A} :\\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \) \\rightarrow \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq431.gif) defined by L A X = AX and R A X = XA. Are L A and R A similar? 4. Let C p and C q be companion matrices.
Show that ![ $$\\left \[\\begin{array}{cc} {C}_{p}& \\quad 0 \\\\ 0 &\\quad {C}_{q}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equln.gif) has ![ $$\\begin{array}{rcl} {q}_{1}\\left \(t\\right \)& =& \\cdots= {q}_{n-2}\\left \(t\\right \) = 1, \\\\ {q}_{n-1}\\left \(t\\right \)& =& \\mathrm{gcd}\\left \(p,q\\right \), \\\\ {q}_{n}\\left \(t\\right \)& =& \\mathrm{lcm}\\left \(p,q\\right \).\\end{array}$$ ](A81414_1_En_2_Chapter_Equ110.gif) 5. Find the similarity invariants for ![ $$\\left \[\\begin{array}{ccc} {C}_{p}& \\quad 0 & \\quad 0 \\\\ 0 &\\quad {C}_{q}& \\quad 0 \\\\ 0 & \\quad 0 &\\quad {C}_{r}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equlo.gif) 6. Find the Smith normal form of a diagonal matrix ![ $$\\left \[\\begin{array}{cccc} {p}_{1} \\\\ & {p}_{2}\\\\ & & \\ddots \\\\ & & & {p}_{n}\\end{array} \\right \]$$ ](A81414_1_En_2_Chapter_Equlp.gif) where ![ $${p}_{1},\\ldots,{p}_{n} \\in\\mathbb{F}\\left \[t\\right \]$$ ](A81414_1_En_2_Chapter_IEq432.gif). Hint: Start with n = 2. 7. Show that ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_2_Chapter_IEq433.gif) are similar if ![ $$\\left \(t{1}_{{\\mathbb{F}}^{n}} - A\\right \),\\left \(t{1}_{{\\mathbb{F}}^{n}} - B\\right \) \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\left \[t\\right \]\\right \)$$ ](A81414_1_En_2_Chapter_IEq434.gif) are equivalent. Hint: Use the Smith normal form. It is interesting to note that there is a proof which does not use the Smith normal form (see [Serre, Theorem 6.3.2]). References Axler. Axler, S.: Linear Algebra Done Right. Springer-Verlag, New York (1997) Bretscher. Bretscher, O.: Linear Algebra with Applications, 2nd edn. Prentice-Hall, Upper Saddle River (2001) Curtis. Curtis, C.W.: Linear Algebra: An Introductory Approach. Springer-Verlag, New York (1984) Greub. Greub, W.: Linear Algebra, 4th edn. Springer-Verlag, New York (1981) Halmos. Halmos, P.R.: Finite-Dimensional Vector Spaces. 
Springer-Verlag, New York (1987) Hoffman-Kunze. Hoffman, K., Kunze, R.: Linear Algebra. Prentice-Hall, Upper Saddle River (1961) Lang. Lang, S.: Linear Algebra, 3rd edn. Springer-Verlag, New York (1987) Roman. Roman, S.: Advanced Linear Algebra, 2nd edn. Springer-Verlag, New York (2005) Serre. Serre, D.: Matrices, Theory and Applications. Springer-Verlag, New York (2002) Peter Petersen, Undergraduate Texts in Mathematics: Linear Algebra, Springer (2012). doi: 10.1007/978-1-4614-3612-6_3. © Springer Science+Business Media New York 2012 # 3. Inner Product Spaces Peter Petersen1 (1) Department of Mathematics, University of California, Los Angeles, CA, USA Abstract So far, we have only discussed vector spaces without adding any further structure to the space. In this chapter, we shall study so-called inner product spaces. These are vector spaces where in addition we know the length of each vector and the angle between two vectors. Since this is what we are used to from the plane and space, it would seem like a reasonable extra layer of information. We shall cover some of the basic constructions such as Gram-Schmidt orthogonalization, orthogonal projections, and orthogonal complements. We also prove the Cauchy-Schwarz and Bessel inequalities. In the last sections, we introduce the adjoint of linear maps. The adjoint helps us understand the connections between image and kernel and leads to a very interesting characterization of orthogonal projections. Finally, we also explain matrix exponentials and how they can be used to solve systems of linear differential equations.
In this and the following chapter, vector spaces always have either real or complex scalars. ## 3.1 Examples of Inner Products ### 3.1.1 Real Inner Products We start by considering the (real) plane ℝ 2 = {(α1, α2) : α1, α2 ∈ ℝ}. The length of a vector is calculated via the Pythagorean theorem: ![ $$\\left \\Vert \\left \({\\alpha }_{1},{\\alpha }_{2}\\right \)\\right \\Vert = \\sqrt{{\\alpha }_{1 }^{2 } + {\\alpha }_{2 }^{2}}.$$ ](A81414_1_En_3_Chapter_Equa.gif) The angle between two vectors x = (α1, α2) and y = (β1, β2) is a little trickier to compute. First, we normalize the vectors ![ $$\\begin{array}{rcl} & \\frac{1} {\\left \\Vert x\\right \\Vert }x,& \\\\ & \\frac{1} {\\left \\Vert y\\right \\Vert }y & \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ1.gif) so that they lie on the unit circle. We then trace the arc on the unit circle between the vectors in order to find the angle θ. If x = (1, 0), the definitions of cosine and sine (see Fig. 3.1) tell us that this angle can be computed via ![ $$\\begin{array}{rcl} \\cos \\theta & =& \\frac{{\\beta }_{1}} {\\left \\Vert y\\right \\Vert }, \\\\ \\sin \\theta & =& \\frac{{\\beta }_{2}} {\\left \\Vert y\\right \\Vert }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ2.gif) Fig.
3.1 Definition of angle This suggests that if we define ![ $$\\begin{array}{rcl} \\cos {\\theta }_{1}& =& \\frac{{\\alpha }_{1}} {\\left \\Vert x\\right \\Vert },\\:\\sin {\\theta }_{1} = \\frac{{\\alpha }_{2}} {\\left \\Vert x\\right \\Vert }, \\\\ \\cos {\\theta }_{2}& =& \\frac{{\\beta }_{1}} {\\left \\Vert y\\right \\Vert },\\:\\sin {\\theta }_{2} = \\frac{{\\beta }_{2}} {\\left \\Vert y\\right \\Vert }, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ3.gif) then ![ $$\\begin{array}{rcl} \\cos \\theta & =& \\cos \\left \({\\theta }_{2} - {\\theta }_{1}\\right \) \\\\ & =& \\cos {\\theta }_{1}\\cos {\\theta }_{2} +\\sin {\\theta }_{1}\\sin {\\theta }_{2} \\\\ & =& \\frac{{\\alpha }_{1}{\\beta }_{1} + {\\alpha }_{2}{\\beta }_{2}} {\\left \\Vert x\\right \\Vert \\cdot \\left \\Vert y\\right \\Vert }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ4.gif) So if the inner or dot product of x and y is defined by ![ $$\\left \(x\\vert y\\right \) = {\\alpha }_{1}{\\beta }_{1} + {\\alpha }_{2}{\\beta }_{2},$$ ](A81414_1_En_3_Chapter_Equb.gif) then we obtain the relationship ![ $$\\left \(x\\vert y\\right \) = \\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert \\cos \\theta.$$ ](A81414_1_En_3_Chapter_Equc.gif) The length of vectors can also be calculated via ![ $$\\left \(x\\vert x\\right \) ={ \\left \\Vert x\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equd.gif) The ![ $$\\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq1.gif) notation is used so as not to confuse the expression with pairs of vectors ![ $$\\left \(x,y\\right \)$$ ](A81414_1_En_3_Chapter_IEq2.gif). One also often sees ![ $$\\left \\langle x,y\\right \\rangle$$ ](A81414_1_En_3_Chapter_IEq3.gif) or ![ $$\\left \\langle x\\vert y\\right \\rangle$$ ](A81414_1_En_3_Chapter_IEq4.gif) used for inner products. The key properties that we shall use to generalize the idea of an inner product are: 1. 
![ $$\\left \(x\\vert x\\right \) ={ \\left \\Vert x\\right \\Vert }^{2} > 0$$ ](A81414_1_En_3_Chapter_IEq5.gif) unless x = 0. 2. ![ $$\\left \(x\\vert y\\right \) = \\left \(y\\vert x\\right \)$$ ](A81414_1_En_3_Chapter_IEq6.gif). 3. x -> x | y is linear. One can immediately generalize this algebraically defined inner product to ℝ 3 and even ℝ n by ![ $$\\begin{array}{rcl} \\left \(x\\vert y\\right \)& =& \\left \(\\left.\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\right \\vert \\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \]\\right \) \\\\ & =& {x}^{t}y \\\\ & =& \\left \[\\begin{array}{ccc} {\\alpha }_{1} & \\cdots &{\\alpha }_{n} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \] \\\\ & =& {\\alpha }_{1}{\\beta }_{1} + \\cdots+ {\\alpha }_{n}{\\beta }_{n}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ5.gif) The three above-mentioned properties still remain true, but we seem to have lost the connection with the angle. 
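Numerically, this coordinate formula is the familiar dot product, and the first two properties are easy to check directly (the vectors below are our own example values):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

inner = x @ y                 # (x|y) = alpha_1 beta_1 + ... + alpha_n beta_n
norm_x = np.sqrt(x @ x)       # property 1: (x|x) = ||x||^2, so ||x|| = sqrt((x|x))

print(inner)                  # 11.0
print(norm_x)                 # 3.0
print(x @ y == y @ x)         # property 2: symmetry -> True
```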
This is settled by observing that Cauchy's inequality holds: ![ $$\\begin{array}{rcl}{ \\left \(x\\vert y\\right \)}^{2}& \\leq & \\left \(x\\vert x\\right \)\\left \(y\\vert y\\right \),\\text{ or} \\\\ {\\left \({\\alpha }_{1}{\\beta }_{1} + \\cdots+ {\\alpha }_{n}{\\beta }_{n}\\right \)}^{2}& \\leq & \\left \({\\alpha }_{ 1}^{2} + \\cdots+ {\\alpha }_{ n}^{2}\\right \)\\left \({\\beta }_{ 1}^{2} + \\cdots+ {\\beta }_{ n}^{2}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ6.gif) In other words, ![ $$-1 \\leq\\frac{\\left \(x\\vert y\\right \)} {\\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert } \\leq1.$$ ](A81414_1_En_3_Chapter_Eque.gif) This implies that the angle can be redefined up to sign through the equation ![ $$\\cos \\theta= \\frac{\\left \(x\\vert y\\right \)} {\\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert }.$$ ](A81414_1_En_3_Chapter_Equf.gif) In addition, as we shall see, the three properties can be used as axioms for inner products. Two vectors are said to be orthogonal or perpendicular if their inner product vanishes. With this definition, the proof of the Pythagorean theorem becomes completely algebraic: ![ $${\\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert y\\right \\Vert }^{2} ={ \\left \\Vert x + y\\right \\Vert }^{2},$$ ](A81414_1_En_3_Chapter_Equg.gif) if x and y are orthogonal.
To see why this is true, note that the properties of the inner product imply: ![ $$\\begin{array}{rcl}{ \\left \\Vert x + y\\right \\Vert }^{2}& =& \\left \(x + y\\vert x + y\\right \) \\\\ & =& \\left \(x\\vert x\\right \) + \\left \(y\\vert y\\right \) + \\left \(x\\vert y\\right \) + \\left \(y\\vert x\\right \) \\\\ & =& \\left \(x\\vert x\\right \) + \\left \(y\\vert y\\right \) + 2\\left \(x\\vert y\\right \) \\\\ & =&{ \\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert y\\right \\Vert }^{2} + 2\\left \(x\\vert y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ7.gif) Thus, the relation ![ $${\\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert y\\right \\Vert }^{2} ={ \\left \\Vert x + y\\right \\Vert }^{2}$$ ](A81414_1_En_3_Chapter_IEq7.gif) holds precisely when ![ $$\\left \(x\\vert y\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq8.gif). The inner product also comes in handy in expressing several other geometric constructions. The projection of a vector x onto the line in the direction of y (see Fig. 3.2) is given by ![ $$\\begin{array}{rcl} \\mathrm{{proj}}_{y}\\left \(x\\right \)& =& \\left \(x\\left \\vert \\frac{y} {\\left \\Vert y\\right \\Vert }\\right.\\right \)\\frac{y} {\\left \\Vert y\\right \\Vert } \\\\ & =& \\frac{\\left \(x\\vert y\\right \)y} {\\left \(y\\vert y\\right \)}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ8.gif) All planes that have normal n, i.e., are perpendicular to n, are defined by the equation ![ $$\\left \(x\\vert n\\right \) = c,$$ ](A81414_1_En_3_Chapter_Equh.gif) where c is determined by any point x 0 that lies in the plane: c = (x 0 | n) (see also Fig. 3.3). Fig. 3.2 Projection Fig. 3.3 Plane ### 3.1.2 Complex Inner Products Let us now see what happens if we try to use complex scalars. Our geometric picture seems to disappear, but we shall insist that the real part of a complex inner product must have the (geometric) properties we have already discussed. Let us start with the complex plane ℂ.
Recall that if z = α1 + α2 i, then the complex conjugate is the reflection of z in the first coordinate axis and is defined by ![ $$\\bar{z} = {\\alpha }_{1} - {\\alpha }_{2}i$$ ](A81414_1_En_3_Chapter_IEq9.gif). Note that ![ $$z \\rightarrow \\bar{ z}$$ ](A81414_1_En_3_Chapter_IEq10.gif) is not complex linear but only linear with respect to real scalar multiplication. Conjugation has some further important properties: ![ $$\\begin{array}{rcl} \\left \\Vert z\\right \\Vert & =& \\sqrt{z \\cdot \\bar{ z}}, \\\\ \\overline{z \\cdot w}& =& \\bar{z} \\cdot \\bar{ w}, \\\\ {z}^{-1}& =& \\frac{\\bar{z}} {{\\left \\Vert z\\right \\Vert }^{2}} \\\\ \\mathrm{Re}\\left \(z\\right \)& =& \\frac{z +\\bar{ z}} {2} \\\\ \\mathrm{Im}\\left \(z\\right \)& =& \\frac{z -\\bar{ z}} {2i}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ9.gif) Given that ![ $${\\left \\Vert z\\right \\Vert }^{2} = z\\bar{z}$$ ](A81414_1_En_3_Chapter_IEq11.gif), it seems natural to define the complex inner product by ![ $$\\left \(z\\vert w\\right \) = z\\bar{w}$$ ](A81414_1_En_3_Chapter_IEq12.gif). Thus, it is not just complex multiplication. If we take the real part, we also note that we retrieve the real inner product defined above: ![ $$\\begin{array}{rcl} \\mathrm{Re}\\left \(z\\vert w\\right \)& =& \\mathrm{Re}\\left \(z\\bar{w}\\right \) \\\\ & =& \\mathrm{Re}\\left \(\\left \({\\alpha }_{1} + {\\alpha }_{2}i\\right \)\\left \({\\beta }_{1} - {\\beta }_{2}i\\right \)\\right \) \\\\ & =& {\\alpha }_{1}{\\beta }_{1} + {\\alpha }_{2}{\\beta }_{2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ10.gif) Having established this, we should be happy and just accept the fact that complex inner products include conjugations. The three important properties for complex inner products are 1. ![ $$\\left \(x\\vert x\\right \) ={ \\left \\Vert x\\right \\Vert }^{2} > 0$$ ](A81414_1_En_3_Chapter_IEq13.gif) unless x = 0. 2. 
![ $$\\left \(x\\vert y\\right \) = \\overline{\\left \(y\\vert x\\right \)}$$ ](A81414_1_En_3_Chapter_IEq14.gif). 3. x -> (x|y) is complex linear. The inner product on ℂ n is defined by ![ $$\\begin{array}{rcl} \\left \(x\\vert y\\right \)& =& \\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\left \\vert \\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \]\\right.\\right \) \\\\ & =& {x}^{t}\\bar{y} \\\\ & =& \\left \[\\begin{array}{ccc} {\\alpha }_{1} & \\cdots &{\\alpha }_{n} \\end{array} \\right \]\\left \[\\begin{array}{c} \\bar{{\\beta }}_{1}\\\\ \\vdots \\\\ \\bar{{\\beta }}_{n} \\end{array} \\right \] \\\\ & =& {\\alpha }_{1}\\bar{{\\beta }}_{1} + \\cdots+ {\\alpha }_{n}\\bar{{\\beta }}_{n}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ11.gif) If we take the real part of this inner product, we get the inner product on ℝ 2n ≃ ℂ n . We say that two complex vectors are orthogonal if their inner product vanishes. This is not quite the same as in the real case, as the two vectors 1 and i in ℂ are not complex orthogonal even though they are orthogonal as real vectors. To spell this out a little further, let us consider the Pythagorean theorem for complex vectors. Note that ![ $$\\begin{array}{rcl}{ \\left \\Vert x + y\\right \\Vert }^{2}& =& \\left \(x + y\\vert x + y\\right \) \\\\ & =& \\left \(x\\vert x\\right \) + \\left \(y\\vert y\\right \) + \\left \(x\\vert y\\right \) + \\left \(y\\vert x\\right \) \\\\ & =& \\left \(x\\vert x\\right \) + \\left \(y\\vert y\\right \) + \\left \(x\\vert y\\right \) + \\overline{\\left \(x\\vert y\\right \)} \\\\ & =&{ \\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert y\\right \\Vert }^{2} + 2\\mathrm{Re}\\left \(x\\vert y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ12.gif) Thus, only the real part of the inner product needs to vanish for this theorem to hold.
This should not come as a surprise as we already knew the result to be true in this case. ### 3.1.3 A Digression on Quaternions* Another very interesting space that contains some new algebra as well as geometry is ℂ 2 ≃ ℝ 4. This is the space-time of special relativity. In this short section, we mention some of the important features of this space. In analogy with writing ![ $$\\mathbb{C} =\\mathrm{{span}}_{\\mathbb{R}}\\left \\{1,i\\right \\}$$ ](A81414_1_En_3_Chapter_IEq15.gif), let us define ![ $$\\begin{array}{rcl} \\mathbb{H}& =& \\mathrm{{span}}_{\\mathbb{C}}\\left \\{1,j\\right \\} \\\\ & =& \\mathrm{{span}}_{\\mathbb{R}}\\left \\{1,i,1 \\cdot j,i \\cdot j\\right \\} \\\\ & =& \\mathrm{{span}}_{\\mathbb{R}}\\left \\{1,i,j,k\\right \\}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ13.gif) The three vectors i, j, k form the usual basis for the three-dimensional space ℝ 3. The remaining coordinate in ℍ is the time coordinate. In ℍ, we also have a conjugation that changes the sign in front of the imaginary numbers i, j, k ![ $$\\begin{array}{rcl} \\bar{q}& =& \\overline{{\\alpha }_{0} + {\\alpha }_{1}i + {\\alpha }_{2}j + {\\alpha }_{3}k} \\\\ & =& {\\alpha }_{0} - {\\alpha }_{1}i - {\\alpha }_{2}j - {\\alpha }_{3}k.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ14.gif) To make perfect sense of things, we need to figure out how to multiply i, j, k. In line with i 2 = − 1, we also define j 2 = − 1 and k 2 = − 1. As for the mixed products, we have already defined ij = k. More generally, we can decide how to compute these products by using the cross product in ℝ 3. Thus, ![ $$\\begin{array}{rcl} ij& =& k = -ji, \\\\ jk& =& i = -kj, \\\\ ki& =& j = -ik.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ15.gif) This enables us to multiply q 1, q 2 ∈ ℍ. The multiplication is not commutative, but it is associative (unlike the cross product), and nonzero elements have inverses. 
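The multiplication rules above determine the product of any two quaternions by distributing over the basis 1, i, j, k. The following short Python sketch is an illustrative implementation (the tuple encoding of a quaternion is an assumption of this sketch, not from the text):

```python
# Sketch of quaternion arithmetic following the rules
# i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.
# A quaternion a0 + a1*i + a2*j + a3*k is stored as a tuple (a0, a1, a2, a3).

def qmul(p, q):
    """Multiply two quaternions by distributing over the basis 1, i, j, k."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def conj(q):
    """Conjugation changes the sign in front of i, j, k."""
    return (q[0], -q[1], -q[2], -q[3])

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k                    # ij = k
assert qmul(j, i) == (0, 0, 0, -1)        # ji = -k: not commutative
assert qmul(i, i) == (-1, 0, 0, 0)        # i^2 = -1
q = (1, 2, 3, 4)
assert qmul(q, conj(q)) == (30, 0, 0, 0)  # q*conj(q) = |q|^2 = 1+4+9+16
```

The last line also verifies numerically that every nonzero quaternion has the inverse conj(q)/|q|².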
The fact that the imaginary numbers i, j, k anti-commute shows that conjugation must reverse the order of multiplication (like taking inverses of matrices and quaternions) ![ $$\\overline{pq} =\\bar{ q}\\bar{p}.$$ ](A81414_1_En_3_Chapter_Equi.gif) As with real and complex numbers, we have that ![ $$q\\bar{q} ={ \\left \\vert q\\right \\vert }^{2} = {\\alpha }_{ 0}^{2} + {\\alpha }_{ 1}^{2} + {\\alpha }_{ 2}^{2} + {\\alpha }_{ 3}^{2}.$$ ](A81414_1_En_3_Chapter_Equj.gif) This shows that every nonzero quaternion has an inverse given by ![ $${q}^{-1} = \\frac{\\bar{q}} {{\\left \\vert q\\right \\vert }^{2}}.$$ ](A81414_1_En_3_Chapter_Equk.gif) The space ℍ with usual vector addition and this multiplication is called the space of quaternions. The name was chosen by Hamilton who invented these numbers and wrote voluminous material on their uses. As with complex numbers, we have a real part, namely, the part without i, j, k, that can be calculated by ![ $$\\mathrm{Re}q = \\frac{q +\\bar{ q}} {2}.$$ ](A81414_1_En_3_Chapter_Equl.gif) The usual real inner product on ℝ 4 can now be defined by ![ $$\\left \(p\\vert q\\right \) =\\mathrm{ Re}\\left \(p \\cdot \\bar{ q}\\right \).$$ ](A81414_1_En_3_Chapter_Equm.gif) If we ignore the conjugation but still take the real part, we obtain something else entirely ![ $$\\begin{array}{rcl}{ \\left \(p\\vert q\\right \)}_{1,3}& =& \\mathrm{Re}\\left \(pq\\right \) \\\\ & =& \\mathrm{Re}\\left \({\\alpha }_{0} + {\\alpha }_{1}i + {\\alpha }_{2}j + {\\alpha }_{3}k\\right \)\\left \({\\beta }_{0} + {\\beta }_{1}i + {\\beta }_{2}j + {\\beta }_{3}k\\right \) \\\\ & =& {\\alpha }_{0}{\\beta }_{0} - {\\alpha }_{1}{\\beta }_{1} - {\\alpha }_{2}{\\beta }_{2} - {\\alpha }_{3}{\\beta }_{3}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ16.gif) We note that restricted to the time axis this is the usual inner product while if restricted to the space part it is the negative of the usual inner product. 
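As a numerical check of the last computation, the (1,3) form Re(pq) = α0β0 − α1β1 − α2β2 − α3β3 can be evaluated directly (the sample values below are illustrative):

```python
# Illustrative sketch of the pseudo-inner product (p|q)_{1,3} = Re(pq)
# for p = a0 + a1*i + a2*j + a3*k and q = b0 + b1*i + b2*j + b3*k.

def minkowski(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return a0*b0 - a1*b1 - a2*b2 - a3*b3

p = (2, 1, 0, 0)
assert minkowski(p, p) == 3        # 4 - 1 > 0: positive on the time-dominant part

null = (1, 1, 0, 0)
assert minkowski(null, null) == 0  # (q|q)_{1,3} = 0 even though q != 0
```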
This pseudo-inner product is what is used in special relativity. The subscript 1,3 refers to the signs that appear in the formula, 1 plus and 3 minuses. Note that one can have ![ $${\\left \(q\\vert q\\right \)}_{1,3} = 0$$ ](A81414_1_En_3_Chapter_IEq16.gif) without q = 0. The geometry of such an inner product is thus quite different from the usual ones we introduced above. The purpose of this very brief encounter with quaternions and space-times is to show that they appear quite naturally in the context of linear algebra. While we will not use them here, they are used quite a bit in more advanced mathematics and physics. ### 3.1.4 Exercises 1. Using the algebraic properties of inner products, show the law of cosines ![ $${c}^{2} = {a}^{2} + {b}^{2} - 2ab\\cos \\theta,$$ ](A81414_1_En_3_Chapter_Equn.gif) where a and b are adjacent sides in a triangle forming an angle θ and c is the opposite side. 2. Here are some matrix constructions of both complex and quaternion numbers. (a) Show that ℂ is isomorphic (same addition and multiplication) to the set of real 2 ×2 matrices of the form ![ $$\\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equo.gif) (b) Show that ℍ is isomorphic to the set of complex 2 ×2 matrices of the form ![ $$\\left \[\\begin{array}{cc} z & -\\bar{ w}\\\\ w & \\bar{z}\\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equp.gif) (c) Show that ℍ is isomorphic to the set of real 4 ×4 matrices ![ $$\\left \[\\begin{array}{cc} A& - {B}^{t} \\\\ B & {A}^{t}\\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equq.gif) that consists of 2 ×2 blocks ![ $$A = \\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha \\end{array} \\right \],B = \\left \[\\begin{array}{cc} \\gamma& - \\delta \\\\ \\delta& \\gamma \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equr.gif) (d) Show that the quaternionic 2 ×2 matrices of the form ![ $$\\left \[\\begin{array}{cc} p& -\\bar{ q}\\\\ q & 
\\bar{p}\\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equs.gif) form a real vector space isomorphic to ℝ 8 but that matrix multiplication does not necessarily give us a matrix of this type. What goes wrong in this case? 3. If q ∈ ℍ − {0}, consider the map Ad q : ℍ -> ℍ defined by Ad q x = qxq^{−1}. (a) Show that x = 1 is an eigenvector with eigenvalue 1. (b) Show that Ad q maps ![ $$\\mathrm{{span}}_{\\mathbb{R}}\\left \\{i,j,k\\right \\}$$ ](A81414_1_En_3_Chapter_IEq17.gif) to itself and defines an isometry on ℝ 3. (c) Show that ![ $$\\mathrm{{Ad}}_{{q}_{1}} =\\mathrm{{ Ad}}_{{q}_{2}}$$ ](A81414_1_En_3_Chapter_IEq18.gif) if and only if q 1 = λq 2, where λ ∈ ℝ. ## 3.2 Inner Products Recall that we only use real or complex vector spaces. Thus, the field ![ $$\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq19.gif) of scalars is always ℝ or ℂ. Definition 3.2.1. An inner product on a vector space V over ![ $$\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq20.gif) is an ![ $$\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq21.gif)-valued pairing ![ $$\\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq22.gif) for x, y ∈ V, i.e., a map ![ $$\\left \(x\\vert y\\right \) : V \\times V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq23.gif), that satisfies: (1) ![ $$\\left \(x\\vert x\\right \) \\geq0$$ ](A81414_1_En_3_Chapter_IEq24.gif) and vanishes only when x = 0. (2) ![ $$\\left \(x\\vert y\\right \) = \\overline{\\left \(y\\vert x\\right \)}$$ ](A81414_1_En_3_Chapter_IEq25.gif). (3) For each y ∈ V, the map x -> (x|y) is linear. A vector space with an inner product is called an inner product space. In the real case, the inner product is also called a Euclidean structure, while in the complex situation, the inner product is known as an Hermitian structure. Observe that a complex inner product ![ $$\\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq26.gif) always defines a real inner product Re(x|y) that is symmetric and linear with respect to real scalar multiplication.
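The three conditions of Definition 3.2.1 can be checked directly for the standard inner product on ℂ n; here is a small sketch in Python (the vectors are arbitrary illustrative choices):

```python
# Illustrative sketch: the Hermitian inner product (x|y) = x^t * conj(y) on C^n
# satisfies the three conditions of an inner product.

def cinner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

# Condition 1: (x|x) = ||x||^2 is real and positive for x != 0.
assert cinner(x, x) == (1*1 + 2*2 + 3*3 + 1*1) + 0j
# Condition 2: conjugate symmetry, (x|y) = conj((y|x)).
assert cinner(x, y) == cinner(y, x).conjugate()
# Condition 3: complex linearity in the first argument.
assert cinner([2j * a for a in x], y) == 2j * cinner(x, y)
# Taking the real part recovers the real inner product on R^(2n).
assert cinner(x, y).real == (1*2 + 2*(-1)) + (3*0 + (-1)*1)
```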
One also uses the term dot product for the standard inner products in ℝ n and ℂ n . The term scalar product is also used quite often as a substitute for inner product. In fact, this terminology seems better as it indicates that the product of two vectors becomes a scalar. We note that the second property really only makes sense when the inner product is complex valued. If V is a real vector space, then the inner product is real valued and hence symmetric in x and y, i.e., ![ $$\\left \(x\\vert y\\right \) = \\left \(y\\vert x\\right \)$$ ](A81414_1_En_3_Chapter_IEq27.gif). In the complex case, property 2 implies that ![ $$\\left \(x\\vert x\\right \)$$ ](A81414_1_En_3_Chapter_IEq28.gif) is real, thus showing that the condition in property 1 makes sense. If we combine the second and third conditions, we get the sesqui-linearity properties: ![ $$\\begin{array}{rcl} \\left \({\\alpha }_{1}{x}_{1} + {\\alpha }_{2}{x}_{2}\\vert y\\right \)& =& {\\alpha }_{1}\\left \({x}_{1}\\vert y\\right \) + {\\alpha }_{2}\\left \({x}_{2}\\vert y\\right \), \\\\ \\left \(x\\vert {\\beta }_{1}{y}_{1} + {\\beta }_{2}{y}_{2}\\right \)& =& \\bar{{\\beta }}_{1}\\left \(x\\vert {y}_{1}\\right \) +\\bar{ {\\beta }}_{2}\\left \(x\\vert {y}_{2}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ17.gif) In particular, we have the scaling property ![ $$\\begin{array}{rcl} \\left \(\\alpha x\\vert \\alpha x\\right \)& =& \\alpha \\bar{\\alpha }\\left \(x\\vert x\\right \) \\\\ & =&{ \\left \\vert \\alpha \\right \\vert }^{2}\\left \(x\\vert x\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ18.gif) We define the length or norm of a vector by ![ $$\\left \\Vert x\\right \\Vert = \\sqrt{\\left \(x\\vert x \\right \)}.$$ ](A81414_1_En_3_Chapter_Equt.gif) In case ![ $$\\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq29.gif) is complex, we see that ![ $$\\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq30.gif) and Re(x|y) define the same norm.
Note that ![ $$\\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq31.gif) is nonnegative and only vanishes when x = 0. We also have the scaling property ![ $$\\left \\Vert \\alpha x\\right \\Vert = \\left \\vert \\alpha \\right \\vert \\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq32.gif). The triangle inequality: ![ $$\\left \\Vert x + y\\right \\Vert \\leq \\left \\Vert x\\right \\Vert + \\left \\Vert y\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq33.gif) will be established later in this section after some important preparatory work (see Corollary 3.2.11). Before studying the properties of inner products further, let us list some important examples. In Sect. 3.1, we already introduced what we shall refer to as the standard inner product structures on ℝ n and ℂ n . Example 3.2.2. If we have an inner product on V, then we also get an inner product on all of the subspaces of V. Example 3.2.3. If we have inner products on V and W, both with respect to ![ $$\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq34.gif), then we get an inner product on V ×W defined by ![ $$\\left \(\\left \({x}_{1},{y}_{1}\\right \)\\left \\vert \\left \({x}_{2},{y}_{2}\\right \)\\right.\\right \) = \\left \({x}_{1}\\vert {x}_{2}\\right \) + \\left \({y}_{1}\\vert {y}_{2}\\right \).$$ ](A81414_1_En_3_Chapter_Equu.gif) Note that ![ $$\\left \(x,0\\right \)$$ ](A81414_1_En_3_Chapter_IEq35.gif) and ![ $$\\left \(0,y\\right \)$$ ](A81414_1_En_3_Chapter_IEq36.gif) always have zero inner product. Example 3.2.4. Given that Mat n ×m ℂ = ℂ n ⋅m , we have an inner product on this space. As we shall see, it has an interesting alternate construction. 
Let A, B ∈ Mat n ×m ℂ. The adjoint B ∗ is the transpose combined with conjugating each entry ![ $${B}^{{_\\ast}} = \\left \[\\begin{array}{ccc} \\bar{{\\beta }}_{11} & \\cdots & \\bar{{\\beta }}_{n1}\\\\ \\vdots & \\ddots & \\vdots \\\\ \\bar{{\\beta }}_{1m}&\\cdots &\\bar{{\\beta }}_{nm} \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equv.gif) The inner product ![ $$\\left \(A\\vert B\\right \)$$ ](A81414_1_En_3_Chapter_IEq37.gif) can now be defined as ![ $$\\begin{array}{rcl} \\left \(A\\vert B\\right \)& =& \\mathrm{tr}\\left \(A{B}^{{_\\ast}}\\right \) \\\\ & =& \\mathrm{tr}\\left \({B}^{{_\\ast}}A\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ19.gif) In case m = 1, we have Mat n ×1 ℂ = ℂ n , and we recover the standard inner product from the entry in the 1 ×1 matrix B ∗ A. In the general case, we note that it also defines the usual inner product as ![ $$\\begin{array}{rcl} \\left \(A\\vert B\\right \)& =& \\mathrm{tr}\\left \(A{B}^{{_\\ast}}\\right \) \\\\ & =& {\\sum\\nolimits }_{i,j}{\\alpha }_{ij}\\bar{{\\beta }}_{ij}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ20.gif) Example 3.2.5. Let V = C 0 ([a, b], ℂ) and define ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{a}^{b}f\\left \(t\\right \)\\overline{g\\left \(t\\right \)}\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equw.gif) Then, ![ $${\\left \\Vert f\\right \\Vert }_{2} = \\sqrt{\\left \(f\\vert f\\right \)}.$$ ](A81414_1_En_3_Chapter_Equx.gif) If V = C 0 ([a, b], ℝ), then we have the real inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{a}^{b}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equy.gif) In the above example, it is often convenient to normalize the inner product so that the function f = 1 is of unit length.
This normalized inner product is defined as ![ $$\\left \(f\\vert g\\right \) = \\frac{1} {b - a}{\\int\\nolimits \\nolimits }_{a}^{b}f\\left \(t\\right \)\\overline{g\\left \(t\\right \)}\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equz.gif) Example 3.2.6. Another important infinite-dimensional inner product space is the space ℓ 2 first investigated by Hilbert. It is the collection of all real or complex sequences ![ $$\\left \({\\alpha }_{n}\\right \)$$ ](A81414_1_En_3_Chapter_IEq38.gif) such that ∑ n |α n | 2 < ∞. We have not specified the index set for n, but we always think of it as being ℕ, ℕ 0, or ℤ. Because these index sets are all bijectively equivalent, they all define the same space but with different indices for the coordinates α n . Addition and scalar multiplication are defined by ![ $$\\begin{array}{rcl} \\left \({\\alpha }_{n}\\right \) + \\left \({\\beta }_{n}\\right \)& =& \\left \({\\alpha }_{n} + {\\beta }_{n}\\right \), \\\\ \\beta \\left \({\\alpha }_{n}\\right \)& =& \\left \(\\beta {\\alpha }_{n}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ21.gif) Since ![ $$\\begin{array}{rcl} {\\sum\\nolimits }_{n}{\\left \\vert \\beta {\\alpha }_{n}\\right \\vert }^{2}& =&{ \\left \\vert \\beta \\right \\vert }^{2}{ \\sum\\nolimits }_{n}{\\left \\vert {\\alpha }_{n}\\right \\vert }^{2}, \\\\ {\\sum\\nolimits }_{n}{\\left \\vert {\\alpha }_{n} + {\\beta }_{n}\\right \\vert }^{2}& \\leq & {\\sum\\nolimits }_{n}\\left \(2{\\left \\vert {\\alpha }_{n}\\right \\vert }^{2} + 2{\\left \\vert {\\beta }_{ n}\\right \\vert }^{2}\\right \) \\\\ & =& 2{\\sum\\nolimits }_{n}{\\left \\vert {\\alpha }_{n}\\right \\vert }^{2} + 2{\\sum\\nolimits }_{n}{\\left \\vert {\\beta }_{n}\\right \\vert }^{2}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ22.gif) it follows that ℓ 2 is a subspace of the vector space of all sequences.
The inner product ![ $$\\left \(\\left \({\\alpha }_{n}\\right \)\\vert \\left \({\\beta }_{n}\\right \)\\right \)$$ ](A81414_1_En_3_Chapter_IEq39.gif) is defined by ![ $$\\left \(\\left \({\\alpha }_{n}\\right \)\\vert \\left \({\\beta }_{n}\\right \)\\right \) ={ \\sum\\nolimits }_{n}{\\alpha }_{n}\\bar{{\\beta }}_{n}.$$ ](A81414_1_En_3_Chapter_Equaa.gif) For that to make sense, we need to know that ![ $${\\sum\\nolimits }_{n}\\left \\vert {\\alpha }_{n}\\bar{{\\beta }}_{n}\\right \\vert < \\infty.$$ ](A81414_1_En_3_Chapter_Equab.gif) This follows from ![ $$\\begin{array}{rcl} \\left \\vert {\\alpha }_{n}\\bar{{\\beta }}_{n}\\right \\vert & =& \\left \\vert {\\alpha }_{n}\\right \\vert \\left \\vert \\bar{{\\beta }}_{n}\\right \\vert \\\\ & =& \\left \\vert {\\alpha }_{n}\\right \\vert \\left \\vert {\\beta }_{n}\\right \\vert \\\\ & \\leq &{ \\left \\vert {\\alpha }_{n}\\right \\vert }^{2} +{ \\left \\vert {\\beta }_{ n}\\right \\vert }^{2} \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ23.gif) and the fact that ![ $${\\sum\\nolimits }_{n}\\left \({\\left \\vert {\\alpha }_{n}\\right \\vert }^{2} +{ \\left \\vert {\\beta }_{ n}\\right \\vert }^{2}\\right \) < \\infty.$$ ](A81414_1_En_3_Chapter_Equac.gif) Definition 3.2.7. We say that two vectors x and y are orthogonal or perpendicular if ![ $$\\left \(x\\vert y\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq40.gif), and we denote this by x ⊥ y. The proof of the Pythagorean theorem for both ℝ n and ℂ n clearly carries over to this more abstract situation. So if ![ $$\\left \(x\\vert y\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq41.gif), then ![ $${\\left \\Vert x + y\\right \\Vert }^{2} ={ \\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert y\\right \\Vert }^{2}$$ ](A81414_1_En_3_Chapter_IEq42.gif). Definition 3.2.8. 
The orthogonal projection of a vector x onto a nonzero vector y is defined by ![ $$\\begin{array}{rcl} \\mathrm{{proj}}_{y}\\left \(x\\right \)& =& \\left \(x\\left \\vert \\frac{y} {\\left \\Vert y\\right \\Vert }\\right.\\right \)\\frac{y} {\\left \\Vert y\\right \\Vert } \\\\ & =& \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ24.gif) This projection creates a vector in the subspace spanned by y. The fact that it makes sense to call it the orthogonal projection is explained in the next proposition (Fig. 3.4). Fig. 3.4 Orthogonal projection Proposition 3.2.9. Given a nonzero y, the map x -> projy x is linear and a projection with the further property that x − projy x and projy x are orthogonal. In particular ![ $${\\left \\Vert x\\right \\Vert }^{2} ={ \\left \\Vert x -\\mathrm{{ proj}}_{ y}\\left \(x\\right \)\\right \\Vert }^{2} +{ \\left \\Vert \\mathrm{{proj}}_{ y}\\left \(x\\right \)\\right \\Vert }^{2},$$ ](A81414_1_En_3_Chapter_Equad.gif) and ![ $$\\left \\Vert \\mathrm{{proj}}_{y}\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert x\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equae.gif) Proof. 
The definition of proj y (x) immediately implies that it is linear, by the linearity of the inner product. That it is a projection follows from ![ $$\\begin{array}{rcl} \\mathrm{{proj}}_{y}\\left \(\\mathrm{{proj}}_{y}\\left \(x\\right \)\\right \)& =& \\mathrm{{proj}}_{y}\\left \(\\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\right \) \\\\ & =& \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}\\mathrm{{proj}}_{y}\\left \(y\\right \) \\\\ & =& \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)} \\frac{\\left \(y\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y \\\\ & =& \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y \\\\ & =& \\mathrm{{proj}}_{y}\\left \(x\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ25.gif) To check orthogonality, simply compute ![ $$\\begin{array}{rcl} \\left \(x -\\mathrm{{ proj}}_{y}\\left \(x\\right \)\\vert \\mathrm{{proj}}_{y}\\left \(x\\right \)\\right \)& =& \\left \(x -\\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\left \\vert \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\right.\\right \) \\\\ & =& \\left \(x\\left \\vert \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\right.\\right \) -\\left \(\\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\left \\vert \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}y\\right.\\right \) \\\\ & =& \\frac{\\overline{\\left \(x\\vert y\\right \)}} {\\left \(y\\vert y\\right \)}\\left \(x\\vert y\\right \) -\\frac{{\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert }^{2}} {{\\left \\vert \\left \(y\\vert y\\right \)\\right \\vert }^{2}} \\left \(y\\vert y\\right \) \\\\ & =& \\frac{{\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert }^{2}} {\\left \(y\\vert y\\right \)} -\\frac{{\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert }^{2}} {\\left \(y\\vert y\\right \)} \\\\ & =&
0.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ26.gif) The Pythagorean theorem now implies the relationship ![ $${\\left \\Vert x\\right \\Vert }^{2} ={ \\left \\Vert x -\\mathrm{{ proj}}_{ y}\\left \(x\\right \)\\right \\Vert }^{2} +{ \\left \\Vert \\mathrm{{proj}}_{ y}\\left \(x\\right \)\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equaf.gif) Using ![ $${\\left \\Vert x -\\mathrm{{ proj}}_{y}\\left \(x\\right \)\\right \\Vert }^{2} \\geq0$$ ](A81414_1_En_3_Chapter_IEq43.gif), we then obtain the inequality ![ $$\\left \\Vert \\mathrm{{proj}}_{y}\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq44.gif). □ Two important corollaries follow almost directly from this result. Corollary 3.2.10. (The Cauchy-Schwarz Inequality) ![ $$\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert \\leq \\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equag.gif) Proof. If y = 0, the inequality is trivial. Otherwise, use ![ $$\\begin{array}{rcl} \\left \\Vert x\\right \\Vert & \\geq & \\left \\Vert \\mathrm{{proj}}_{y}\\left \(x\\right \)\\right \\Vert \\\\ & =& \\left \\vert \\frac{\\left \(x\\vert y\\right \)} {\\left \(y\\vert y\\right \)}\\right \\vert \\left \\Vert y\\right \\Vert \\\\ & =& \\frac{\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert } {\\left \\Vert y\\right \\Vert }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ27.gif) □ Corollary 3.2.11. (The Triangle Inequality) ![ $$\\left \\Vert x + y\\right \\Vert \\leq \\left \\Vert x\\right \\Vert + \\left \\Vert y\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equah.gif) Proof. 
We expand ![ $$\\left \\Vert x + y\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq45.gif) and use the Cauchy-Schwarz inequality ![ $$\\begin{array}{rcl}{ \\left \\Vert x + y\\right \\Vert }^{2}& =& \\left \(x + y\\vert x + y\\right \) \\\\ & =&{ \\left \\Vert x\\right \\Vert }^{2} + 2\\mathrm{Re}\\left \(x\\vert y\\right \) +{ \\left \\Vert y\\right \\Vert }^{2} \\\\ & \\leq &{ \\left \\Vert x\\right \\Vert }^{2} + 2\\left \\vert \\left \(x\\vert y\\right \)\\right \\vert +{ \\left \\Vert y\\right \\Vert }^{2} \\\\ & \\leq &{ \\left \\Vert x\\right \\Vert }^{2} + 2\\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert +{ \\left \\Vert y\\right \\Vert }^{2} \\\\ & =&{ \\left \(\\left \\Vert x\\right \\Vert + \\left \\Vert y\\right \\Vert \\right \)}^{2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ28.gif) □ ### 3.2.1 Exercises 1. Show that a hyperplane H = {x ∈ V : (a|x) = α} in a real n-dimensional inner product space V can be represented as an affine subspace ![ $$H = \\left \\{{t}_{1}{x}_{1} + \\cdots+ {t}_{n}{x}_{n} : {t}_{1} + \\cdots+ {t}_{n} = 1\\right \\},$$ ](A81414_1_En_3_Chapter_Equai.gif) where x 1,..., x n ∈ H. Find conditions on x 1,..., x n so that they generate a hyperplane (see Exercise 8 in Sect. 1.10 for the definition of an affine subspace). 2. Let x = (2, 1) and y = (3, 1) in ℝ 2. If z ∈ ℝ 2 satisfies ![ $$\\left \(z\\vert x\\right \) = 1$$ ](A81414_1_En_3_Chapter_IEq46.gif) and ![ $$\\left \(z\\vert y\\right \) = 2$$ ](A81414_1_En_3_Chapter_IEq47.gif), then find the coordinates for z. 3. Show that it is possible to find k vectors x 1,..., x k ∈ ℝ n such that ![ $$\\left \\Vert {x}_{i}\\right \\Vert = 1$$ ](A81414_1_En_3_Chapter_IEq48.gif) and ![ $$\\left \({x}_{i}\\vert {x}_{j}\\right \) < 0$$ ](A81414_1_En_3_Chapter_IEq49.gif), i≠j only when k ≤ n + 1. Show that for any such choice of k vectors, we get a linearly independent set by deleting any one of the k vectors. 4. In a real inner product space V select y≠0.
For fixed α ∈ ℝ, show that H = {x ∈ V : proj y (x) = αy} describes a hyperplane with normal y. 5. Let V be an inner product space and let y, z ∈ V. Show that y = z if and only if ![ $$\\left \(x\\vert y\\right \) = \\left \(x\\vert z\\right \)$$ ](A81414_1_En_3_Chapter_IEq50.gif) for all x ∈ V. 6. Prove the Cauchy-Schwarz inequality by expanding the right-hand side of the inequality ![ $$0 \\leq {\\left \\Vert x -\\frac{\\left \(x\\vert y\\right \)} {{\\left \\Vert y\\right \\Vert }^{2}} y\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equaj.gif) 7. Let V be an inner product space and x 1,..., x n , y 1,..., y n ∈ V. Show the following generalized Cauchy-Schwarz inequality: ![ $${\\left \({\\sum\\nolimits }_{i=1}^{n}\\left \\vert \\left \({x}_{ i}\\vert {y}_{i}\\right \)\\right \\vert \\right \)}^{2} \\leq \\left \({\\sum\\nolimits }_{i=1}^{n}{\\left \\Vert {x}_{ i}\\right \\Vert }^{2}\\right \)\\left \({\\sum\\nolimits }_{i=1}^{n}{\\left \\Vert {y}_{ i}\\right \\Vert }^{2}\\right \).$$ ](A81414_1_En_3_Chapter_Equak.gif) 8. Let S n − 1 = {x ∈ ℝ n : ‖x‖ = 1} be the unit sphere. When n = 1, it consists of two points. When n = 2, it is a circle, and when n = 3, a sphere. A finite subset ![ $$\\left \\{{x}_{1},\\ldots,{x}_{k}\\right \\} \\subset {S}^{n-1}$$ ](A81414_1_En_3_Chapter_IEq51.gif) is said to consist of equidistant points if ∡(x i , x j ) = θ for all i≠j. (a) Show that this is equivalent to assuming that ![ $$\\left \({x}_{i}\\vert {x}_{j}\\right \) =\\cos \\theta $$ ](A81414_1_En_3_Chapter_IEq52.gif) for all i≠j. (b) Show that S 0 contains a set of two equidistant points, S 1 a set of three equidistant points, and S 2 a set of four equidistant points. (c) Using induction on n, show that a set of equidistant points in S n − 1 contains no more than n + 1 elements. 9.
In an inner product space, show the parallelogram rule ![ $${\\left \\Vert x - y\\right \\Vert }^{2} +{ \\left \\Vert x + y\\right \\Vert }^{2} = 2{\\left \\Vert x\\right \\Vert }^{2} + 2{\\left \\Vert y\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equal.gif) Here x and y describe the sides in a parallelogram and x + y and x − y the diagonals. 10. In a complex inner product space, show that ![ $$4\\left \(x\\vert y\\right \) ={ \\sum\\nolimits }_{k=0}^{3}{i}^{k}{\\left \\Vert x + {i}^{k}y\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equam.gif) ## 3.3 Orthonormal Bases Let us fix an inner product space V. Definition 3.3.1. A possibly infinite collection e 1,..., e n ,... of vectors in V is said to be orthogonal if ![ $$\\left \({e}_{i}\\vert {e}_{j}\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq53.gif) for i≠j. If in addition these vectors are of unit length, i.e., ![ $$\\left \({e}_{i}\\vert {e}_{j}\\right \) = {\\delta }_{ij}$$ ](A81414_1_En_3_Chapter_IEq54.gif), then we call the collection orthonormal. The usual bases for ℝ n and ℂ n are evidently orthonormal collections. Since they are also bases, we call them orthonormal bases. Lemma 3.3.2. Let e 1 ,...,e n be orthonormal. Then, e 1 ,...,e n are linearly independent and any element x ∈ span{e 1 ,...,e n } has the expansion ![ $$x = \\left \(x\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \(x\\vert {e}_{n}\\right \){e}_{n}.$$ ](A81414_1_En_3_Chapter_Equan.gif) Proof.
Note that if x = α1 e 1 + ⋯ + α n e n , then ![ $$\\begin{array}{rcl} \\left \(x\\vert {e}_{i}\\right \)& =& \\left \({\\alpha }_{1}{e}_{1} + \\cdots+ {\\alpha }_{n}{e}_{n}\\vert {e}_{i}\\right \) \\\\ & =& {\\alpha }_{1}\\left \({e}_{1}\\vert {e}_{i}\\right \) + \\cdots+ {\\alpha }_{n}\\left \({e}_{n}\\vert {e}_{i}\\right \) \\\\ & =& {\\alpha }_{1}{\\delta }_{1i} + \\cdots+ {\\alpha }_{n}{\\delta }_{ni} \\\\ & =& {\\alpha }_{i}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ29.gif) In case x = 0, this gives us linear independence, and in case x ∈ span{e 1 ,..., e n }, we have computed the ith coordinate using the inner product. □ We shall use the equation ![ $$\\left \({\\alpha }_{1}{e}_{1} + \\cdots+ {\\alpha }_{n}{e}_{n}\\vert {e}_{i}\\right \) = {\\alpha }_{i}$$ ](A81414_1_En_3_Chapter_Equao.gif) from the above proof repeatedly throughout the next two chapters. This allows us to construct a special isomorphism between span{e 1 ,..., e n } and ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq55.gif). Definition 3.3.3. We say that two inner product spaces V and W over ![ $$\\mathbb{F}$$ ](A81414_1_En_3_Chapter_IEq56.gif) are isometric if we can find an isometry L : V -> W, i.e., an isomorphism such that ![ $$\\left \(L\\left \(x\\right \)\\vert L\\left \(y\\right \)\\right \) = \\left \(x\\vert y\\right \)$$ ](A81414_1_En_3_Chapter_IEq57.gif) for all x, y ∈ V. Lemma 3.3.4. If V admits a basis that is orthonormal, then V is isometric to ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq58.gif). Proof.
Choose an orthonormal basis e 1,..., e n for V and define the usual isomorphism ![ $$L : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_3_Chapter_IEq59.gif) by ![ $$\\begin{array}{rcl} L\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\right \)& =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \] \\\\ & =& {\\alpha }_{1}{e}_{1} + \\cdots+ {\\alpha }_{n}{e}_{n}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ30.gif) Let ![ $$a = \\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\:\\mathrm{and}\\:b = \\left \[\\begin{array}{c} {\\beta }_{1}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equap.gif) then ![ $$\\begin{array}{rcl} \\left \(L\\left \(a\\right \)\\vert L\\left \(b\\right \)\\right \)& =& \\left \(L\\left \(a\\right \)\\vert {\\beta }_{1}{e}_{1} + \\cdots+ {\\beta }_{n}{e}_{n}\\right \) \\\\ & =& \\bar{{\\beta }}_{1}\\left \(L\\left \(a\\right \)\\vert {e}_{1}\\right \) + \\cdots+\\bar{ {\\beta }}_{n}\\left \(L\\left \(a\\right \)\\vert {e}_{n}\\right \) \\\\ & =& \\bar{{\\beta }}_{1}\\left \({\\alpha }_{1}{e}_{1} + \\cdots+ {\\alpha }_{n}{e}_{n}\\vert {e}_{1}\\right \) + \\cdots+\\bar{ {\\beta }}_{n}\\left \({\\alpha }_{1}{e}_{1} + \\cdots+ {\\alpha }_{n}{e}_{n}\\vert {e}_{n}\\right \) \\\\ & =& \\bar{{\\beta }}_{1}{\\alpha }_{1} + \\cdots+\\bar{ {\\beta }}_{n}{\\alpha }_{n} \\\\ & =& \\left \(a\\vert b\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ31.gif) which is what we wanted to prove. □ Remark 3.3.5. 
Note that the inverse map that computes the coordinates of a vector is explicitly given by ![ $${L}^{-1}\\left \(x\\right \) = \\left \[\\begin{array}{c} \\left \(x\\vert {e}_{1}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {e}_{n}\\right \) \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equaq.gif) We are now left with the nagging possibility that orthonormal bases might be very special and possibly not exist. The procedure for constructing orthonormal collections is known as the Gram-Schmidt procedure. It is not clear who invented the process, but these two people definitely promoted and used it to great effect. Given a linearly independent set x 1,..., x m in an inner product space V, it is possible to construct an orthonormal collection e 1,..., e m such that ![ $$\\mathrm{span}\\left \\{{x}_{1},\\ldots,{x}_{m}\\right \\} =\\mathrm{ span}\\left \\{{e}_{1},\\ldots,{e}_{m}\\right \\}.$$ ](A81414_1_En_3_Chapter_Equar.gif) The procedure is actually iterative and creates e 1,..., e m in such a way that ![ $$\\begin{array}{rcl} \\mathrm{span}\\left \\{{x}_{1}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1}\\right \\}, \\\\ \\mathrm{span}\\left \\{{x}_{1},{x}_{2}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1},{e}_{2}\\right \\}, \\\\ \\vdots& & \\vdots \\\\ \\mathrm{span}\\left \\{{x}_{1},\\ldots,{x}_{m}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1},\\ldots,{e}_{m}\\right \\}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ32.gif) This basically forces us to define e 1 as ![ $${e}_{1} = \\frac{1} {\\left \\Vert {x}_{1}\\right \\Vert }{x}_{1}.$$ ](A81414_1_En_3_Chapter_Equas.gif) Then, e 2 is constructed by considering ![ $$\\begin{array}{rcl}{ z}_{2}& =& {x}_{2} -\\mathrm{{ proj}}_{{x}_{1}}\\left \({x}_{2}\\right \) \\\\ & =& {x}_{2} -\\mathrm{{ proj}}_{{e}_{1}}\\left \({x}_{2}\\right \) \\\\ & =& {x}_{2} -\\left \({x}_{2}\\vert {e}_{1}\\right \){e}_{1}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ33.gif) and defining ![ $${e}_{2} = \\frac{1} {\\left \\Vert 
{z}_{2}\\right \\Vert }{z}_{2}.$$ ](A81414_1_En_3_Chapter_Equat.gif) Having constructed an orthonormal set e 1,..., e k , we can then define ![ $${z}_{k+1} = {x}_{k+1} -\\left \({x}_{k+1}\\vert {e}_{1}\\right \){e}_{1} -\\cdots-\\left \({x}_{k+1}\\vert {e}_{k}\\right \){e}_{k}.$$ ](A81414_1_En_3_Chapter_Equau.gif) As ![ $$\\begin{array}{rcl} \\mathrm{span}\\left \\{{x}_{1},\\ldots,{x}_{k}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1},\\ldots,{e}_{k}\\right \\}, \\\\ {x}_{k+1}& \\notin & \\mathrm{span}\\left \\{{x}_{1},\\ldots,{x}_{k}\\right \\}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ34.gif) we have that z k + 1≠0. Thus, we can define ![ $${e}_{k+1} = \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }{z}_{k+1}.$$ ](A81414_1_En_3_Chapter_Equav.gif) To see that e k + 1 is perpendicular to e 1,..., e k , we note that ![ $$\\begin{array}{rcl} \\left \({e}_{k+1}\\vert {e}_{i}\\right \)& =& \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }\\left \({z}_{k+1}\\vert {e}_{i}\\right \) \\\\ & =& \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }\\left \({x}_{k+1}\\vert {e}_{i}\\right \) - \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }\\left \(\\left.\\left \({x}_{k+1}\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \({x}_{k+1}\\vert {e}_{k}\\right \){e}_{k}\\right \\vert {e}_{i}\\right \) \\\\ & =& \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }\\left \({x}_{k+1}\\vert {e}_{i}\\right \) - \\frac{1} {\\left \\Vert {z}_{k+1}\\right \\Vert }\\left \({x}_{k+1}\\vert {e}_{i}\\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ35.gif) Since ![ $$\\begin{array}{rcl} \\mathrm{span}\\left \\{{x}_{1}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1}\\right \\}, \\\\ \\mathrm{span}\\left \\{{x}_{1},{x}_{2}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1},{e}_{2}\\right \\}, \\\\ \\vdots& & \\vdots \\\\ \\mathrm{span}\\left \\{{x}_{1},\\ldots,{x}_{m}\\right \\}& =& \\mathrm{span}\\left \\{{e}_{1},\\ldots,{e}_{m}\\right \\}, \\\\ \\end{array}$$ 
](A81414_1_En_3_Chapter_Equ36.gif) we have constructed e 1,..., e m in such a way that ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \]B,$$ ](A81414_1_En_3_Chapter_Equaw.gif) where B is an upper triangular m ×m matrix with positive diagonal entries. Conversely, we have ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]R,$$ ](A81414_1_En_3_Chapter_Equax.gif) where R = B − 1 is also upper triangular with positive diagonal entries. Given that we have a formula for the expansion of each x k in terms of e 1,..., e k , we see that ![ $$R = \\left \[\\begin{array}{ccccc} \\left \({x}_{1}\\vert {e}_{1}\\right \)&\\left \({x}_{2}\\vert {e}_{1}\\right \)&\\left \({x}_{3}\\vert {e}_{1}\\right \)&\\cdots & \\left \({x}_{m}\\vert {e}_{1}\\right \) \\\\ 0 &\\left \({x}_{2}\\vert {e}_{2}\\right \)&\\left \({x}_{3}\\vert {e}_{2}\\right \)&\\cdots & \\left \({x}_{m}\\vert {e}_{2}\\right \) \\\\ 0 & 0 &\\left \({x}_{3}\\vert {e}_{3}\\right \)&\\cdots & \\left \({x}_{m}\\vert {e}_{3}\\right \)\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 & 0 &\\cdots &\\left \({x}_{m}\\vert {e}_{m}\\right \) \\end{array}.\\right \]$$ ](A81414_1_En_3_Chapter_Equay.gif) We often abbreviate ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \], \\\\ Q& =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \] \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ37.gif) and obtain the QR-factorization A = QR. In case V is ℝ n or ℂ n , A is a general n ×m matrix of rank m, Q is also an n ×m matrix of rank m with the added feature that its columns are orthonormal, and R is an upper triangular m ×m matrix.
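The construction just described translates almost line for line into code. The following is a minimal NumPy sketch of the Gram-Schmidt QR factorization (the function name and the use of NumPy are our choices, not part of the text; production code would normally call the Householder-based `np.linalg.qr` instead, which is numerically more stable):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR factorization of an n x m matrix A of rank m via Gram-Schmidt.

    Mirrors the construction in the text: the entry R[i, k] is the inner
    product (x_k | e_i), and each column of Q is z_k / ||z_k||.
    """
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for k in range(m):
        z = A[:, k].copy()
        for i in range(k):
            R[i, k] = Q[:, i] @ A[:, k]   # (x_k | e_i)
            z -= R[i, k] * Q[:, i]        # subtract the projection onto e_i
        R[k, k] = np.linalg.norm(z)       # positive since the columns are independent
        Q[:, k] = z / R[k, k]
    return Q, R
```

Applied to a matrix with linearly independent columns, Q has orthonormal columns and R is upper triangular with positive diagonal entries, matching the normalization used above.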
Note that in this interpretation, the QR-factorization is an improved Gauss elimination: A = PU, P ∈ Gl n and U upper triangular (see Sect. 1.13). With that in mind, it is not surprising that the QR-factorization gives us a way of inverting the linear map ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_3_Chapter_Equaz.gif) when x 1,..., x n is a basis. First, recall that the isometry ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_3_Chapter_Equba.gif) is easily inverted and the inverse can be symbolically represented as ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1} = \\left \[\\begin{array}{c} \\overline{\\left \({e}_{1}\\vert \\cdot \\right \)}\\\\ \\vdots \\\\ \\overline{\\left \({e}_{n}\\vert \\cdot \\right \)} \\end{array} \\right \],$$ ](A81414_1_En_3_Chapter_Equbb.gif) or more precisely ![ $$\\begin{array}{rcl}{ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}\\left \(x\\right \)& =& \\left \[\\begin{array}{c} \\overline{\\left \({e}_{1}\\vert x\\right \)}\\\\ \\vdots \\\\ \\overline{\\left \({e}_{n}\\vert x\\right \)} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{c} \\left \(x\\vert {e}_{1}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {e}_{n}\\right \) \\end{array}.\\right \]\\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ38.gif) This is the great feature of orthonormal bases, namely, that one has an explicit formula for the coordinates in such a basis. Next on the agenda is the construction of R − 1. Given that it is upper triangular, this is a reasonably easy problem in the theory of solving linear systems.
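As a concrete sketch (assuming the ambient space is 𝔽 n so that vectors are columns; the particular basis below is illustrative and not from the text), the coordinates of a vector in a basis x 1 ,..., x n can be computed by one application of the orthonormal-coordinate formula followed by one triangular solve:

```python
import numpy as np

# Columns of X form a basis x_1, x_2, x_3 of R^3 (an illustrative choice).
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q, R = np.linalg.qr(X)  # Gram-Schmidt in matrix form: X = Q R
# (NumPy's sign convention may differ from the positive-diagonal
#  normalization in the text, but the product Q R is still X.)

x = X @ np.array([2.0, -1.0, 0.5])  # a vector with known coordinates (2, -1, 1/2)

# Coordinates of x in the basis x_1,...,x_n:
# x = X c = Q R c  =>  Q^* x = R c  =>  c = R^{-1} (Q^* x).
coords = np.linalg.solve(R, Q.T @ x)
```

Here `np.linalg.solve` is used for brevity; a dedicated triangular solver (e.g. SciPy's `solve_triangular`) would exploit the upper triangular structure of R.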
However, having found the orthonormal basis through Gram-Schmidt, we have already found this inverse since ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]R$$ ](A81414_1_En_3_Chapter_Equbc.gif) implies that ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]{R}^{-1}$$ ](A81414_1_En_3_Chapter_Equbd.gif) and the goal of the process was to find e 1,..., e n as a linear combination of x 1,..., x n . Thus, we obtain the formula ![ $$\\begin{array}{rcl}{ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1}& =& {R}^{-1}{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1} \\\\ & =& {R}^{-1}\\left \[\\begin{array}{c} \\overline{\\left \({e}_{1}\\vert \\cdot \\right \)}\\\\ \\vdots \\\\ \\overline{\\left \({e}_{n}\\vert \\cdot \\right \)} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_3_Chapter_Equ39.gif) The Gram-Schmidt process, therefore, not only gives us an orthonormal basis but it also gives us a formula for the coordinates of a vector with respect to the original basis. It should also be noted that if we start out with a set x 1,..., x m that is not linearly independent, then this fact will be revealed in the process of constructing e 1,..., e m . We know from Lemma 1.12.3 that either x 1 = 0 or there is a smallest k such that x k + 1 is a linear combination of x 1,..., x k . In the latter case, we get to construct e 1,..., e k since x 1,..., x k were linearly independent. 
As x k + 1 ∈ span{e 1 ,..., e k }, we must have that ![ $${z}_{k+1} = {x}_{k+1} -\\left \({x}_{k+1}\\vert {e}_{1}\\right \){e}_{1} -\\cdots-\\left \({x}_{k+1}\\vert {e}_{k}\\right \){e}_{k} = 0$$ ](A81414_1_En_3_Chapter_Eqube.gif) since the way in which x k + 1 is expanded in terms of e 1,..., e k is given by ![ $${x}_{k+1} = \\left \({x}_{k+1}\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \({x}_{k+1}\\vert {e}_{k}\\right \){e}_{k}.$$ ](A81414_1_En_3_Chapter_Equbf.gif) Thus, we fail to construct the unit vector e k + 1. With all this behind us, we have proved the following important result. Theorem 3.3.6. (Uniqueness of Inner Product Spaces) An n-dimensional inner product space over ℝ, respectively ℂ, is isometric to ℝ n , respectively ℂ n. Definition 3.3.7. The operator norm of a linear map L : V -> W between inner product spaces is defined as ![ $$\\left \\Vert L\\right \\Vert {=\\sup }_{\\left \\Vert x\\right \\Vert =1}\\left \\Vert L\\left \(x\\right \)\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equbg.gif) The operator norm is finite provided V is finite-dimensional. Theorem 3.3.8. Let L : V -> W be a linear map. Then ![ $$\\left \\Vert L\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert L\\right \\Vert \\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_Equbh.gif) for all x ∈ V. Moreover, if V is a finite-dimensional inner product space, then ![ $$\\left \\Vert L\\right \\Vert {=\\sup }_{\\left \\Vert x\\right \\Vert =1}\\left \\Vert L\\left \(x\\right \)\\right \\Vert < \\infty.$$ ](A81414_1_En_3_Chapter_Equbi.gif) Proof. To establish the first claim, we only need to consider x ∈ V −{0}. Then, ![ $$\\left \\Vert L\\left \(\\frac{x} {\\left \\Vert x\\right \\Vert }\\right \)\\right \\Vert \\leq \\left \\Vert L\\right \\Vert,$$ ](A81414_1_En_3_Chapter_Equbj.gif) and the claim follows by using linearity of L and the scaling property of the norm. When V is finite-dimensional, select an orthonormal basis e 1,..., e n for V.
Then, by using the Cauchy-Schwarz inequality (Corollary 3.2.10) and the triangle inequality (Corollary 3.2.11), we obtain ![ $$\\begin{array}{rcl} \\left \\Vert L\\left \(x\\right \)\\right \\Vert & =& \\left \\Vert L\\left \({\\sum\\nolimits }_{i=1}^{n}\\left \(x\\vert {e}_{ i}\\right \){e}_{i}\\right \)\\right \\Vert \\\\ & =& \\left \\Vert {\\sum\\nolimits }_{i=1}^{n}\\left \(x\\vert {e}_{ i}\\right \)L\\left \({e}_{i}\\right \)\\right \\Vert \\\\ & \\leq & {\\sum\\nolimits }_{i=1}^{n}\\left \\vert \\left \(x\\vert {e}_{ i}\\right \)\\right \\vert \\left \\Vert L\\left \({e}_{i}\\right \)\\right \\Vert \\\\ & \\leq & {\\sum\\nolimits }_{i=1}^{n}\\left \\Vert x\\right \\Vert \\left \\Vert L\\left \({e}_{ i}\\right \)\\right \\Vert \\\\ & =& \\left \({\\sum\\nolimits }_{i=1}^{n}\\left \\Vert L\\left \({e}_{ i}\\right \)\\right \\Vert \\right \)\\left \\Vert x\\right \\Vert.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ40.gif) Thus, ![ $$\\left \\Vert L\\right \\Vert \\leq {\\sum\\nolimits }_{i=1}^{n}\\left \\Vert L\\left \({e}_{ i}\\right \)\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equbk.gif) □ To finish the section, let us try to do a few concrete examples. Example 3.3.9. Consider the vectors x 1 = (1, 1, 0), x 2 = (1, 0, 1), and x 3 = (0, 1, 1) in ℝ 3. If we perform Gram-Schmidt, then the QR factorization is ![ $$\\left \[\\begin{array}{ccc} 1&1&0\\\\ 1 &0 &1 \\\\ 0&1&1 \\end{array} \\right \] = \\left \[\\begin{array}{ccc} \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{6}} & - \\frac{1} {\\sqrt{3}} \\\\ \\frac{1} {\\sqrt{2}} & - \\frac{1} {\\sqrt{6}} & \\frac{1} {\\sqrt{3}} \\\\ 0 & \\frac{2} {\\sqrt{6}} & \\frac{1} {\\sqrt{3}} \\end{array} \\right \]\\left \[\\begin{array}{ccc} \\sqrt{2}& \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{2}} \\\\ 0 & \\frac{3} {\\sqrt{6}} & \\frac{1} {\\sqrt{6}} \\\\ 0 & 0 & \\frac{2} {\\sqrt{3}} \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equbl.gif) Example 3.3.10.
The Legendre polynomials of degrees 0, 1, and 2 on ![ $$\\left \[-1,1\\right \]$$ ](A81414_1_En_3_Chapter_IEq60.gif) are by definition the polynomials obtained via Gram-Schmidt from 1, t, t 2 with respect to the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{-1}^{1}f\\left \(t\\right \)\\overline{g\\left \(t\\right \)}\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equbm.gif) We see that ![ $$\\left \\Vert 1\\right \\Vert = \\sqrt{2}$$ ](A81414_1_En_3_Chapter_IEq61.gif), so the first polynomial is ![ $${p}_{0}\\left \(t\\right \) = \\frac{1} {\\sqrt{2}}.$$ ](A81414_1_En_3_Chapter_Equbn.gif) To find p 1 t, we first find ![ $$\\begin{array}{rcl}{ z}_{1}& =& t -\\left \(t\\vert {p}_{0}\\right \){p}_{0} \\\\ & =& t -\\left \({\\int\\nolimits \\nolimits }_{-1}^{1}t \\frac{1} {\\sqrt{2}}\\mathrm{d}t\\right \) \\frac{1} {\\sqrt{2}} \\\\ & =& t.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ41.gif) Then, ![ $${p}_{1}\\left \(t\\right \) = \\frac{t} {\\left \\Vert t\\right \\Vert } = \\sqrt{\\frac{3} {2}}t.$$ ](A81414_1_En_3_Chapter_Equbo.gif) Finally, for p 2, we find ![ $$\\begin{array}{rcl}{ z}_{2}& =& {t}^{2} -\\left \({t}^{2}\\vert {p}_{ 0}\\right \){p}_{0} -\\left \({t}^{2}\\vert {p}_{ 1}\\right \){p}_{1} \\\\ & =& {t}^{2} -\\left \({\\int\\nolimits \\nolimits }_{-1}^{1}{t}^{2} \\frac{1} {\\sqrt{2}}\\mathrm{d}t\\right \) \\frac{1} {\\sqrt{2}} -\\left \({\\int\\nolimits \\nolimits }_{-1}^{1}{t}^{2}\\sqrt{\\frac{3} {2}}t\\mathrm{d}t\\right \)\\sqrt{\\frac{3} {2}}t \\\\ & =& {t}^{2} -\\frac{1} {3}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ42.gif) Thus, ![ $$\\begin{array}{rcl}{ p}_{2}\\left \(t\\right \)& =& \\frac{{t}^{2} -\\frac{1} {3}} {\\left \\Vert {t}^{2} -\\frac{1} {3}\\right \\Vert } \\\\ & =& \\sqrt{\\frac{45} {8}} \\left \({t}^{2} -\\frac{1} {3}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ43.gif) Example 3.3.11. 
A system of real equations Ax = b can be interpreted geometrically as n equations ![ $$\\begin{array}{rcl} \\left \({a}_{1}\\vert x\\right \)& =& {\\beta }_{1}, \\\\ \\vdots& & \\vdots \\\\ \\left \({a}_{n}\\vert x\\right \)& =& {\\beta }_{n}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ44.gif) where a k is the kth row in A and β k the kth coordinate for b. The solutions will be the intersection of the n hyperplanes H k = {z : (a k |z) = β k }. Example 3.3.12. We wish to show that the trigonometric functions ![ $$1 =\\cos \\left \(0 \\cdot t\\right \),\\cos \\left \(t\\right \),\\cos \\left \(2t\\right \),\\ldots,\\sin \\left \(t\\right \),\\sin \\left \(2t\\right \),\\ldots $$ ](A81414_1_En_3_Chapter_Equbp.gif) are orthogonal in C 2π ∞ (ℝ, ℝ) with respect to the inner product ![ $$\\left \(f\\vert g\\right \) = \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equbq.gif) First, observe that cos(mt) sin(nt) is an odd function. This proves that ![ $$\\left \(\\cos \\left \(mt\\right \)\\vert \\sin \\left \(nt\\right \)\\right \) = 0.$$ ](A81414_1_En_3_Chapter_Equbr.gif) Thus, we are reduced to showing that each of the two sequences ![ $$\\begin{array}{rcl} & & 1,\\cos \\left \(t\\right \),\\cos \\left \(2t\\right \),\\ldots\\\\ & & \\sin \\left \(t\\right \),\\sin \\left \(2t\\right \),\\ldots\\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ45.gif) is orthogonal.
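These orthogonality claims can also be sanity-checked numerically (a sketch that is not part of the original text; the choice of NumPy and of a midpoint quadrature is ours):

```python
import numpy as np

# Midpoint rule for (f|g) = (1/2π) ∫_{-π}^{π} f(t) g(t) dt.  For trigonometric
# polynomials of modest degree this uniform sum is essentially exact.
N = 4096
t = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)

def inner(f, g):
    """Approximate inner product of two sampled functions on [-π, π]."""
    return float(np.mean(f * g))

cos = lambda m: np.cos(m * t)
sin = lambda m: np.sin(m * t)

# cos(mt) ⟂ cos(nt) and sin(mt) ⟂ sin(nt) for m ≠ n, cos(mt) ⟂ sin(nt) always,
# and the nonzero-frequency functions have norm 1/√2 (squared norm 1/2).
```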
Using integration by parts, we see ![ $$\\begin{array}{rcl} & \\left \(\\cos \\left \(mt\\right \)\\vert \\cos \\left \(nt\\right \)\\right \) & \\\\ & \\quad = \\frac{1} {2\\pi }{ \\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }\\cos \\left \(mt\\right \)\\cos \\left \(nt\\right \)\\mathrm{d}t & \\\\ & \\quad = \\frac{1} {2\\pi }{\\left.\\frac{\\sin \\left \(mt\\right \)} {m} \\cos \\left \(nt\\right \)\\right \\vert }_{-\\pi }^{\\pi } - \\frac{1} {2\\pi }{ \\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }\\frac{\\sin \\left \(mt\\right \)} {m} \\left \(-n\\right \)\\sin \\left \(nt\\right \)\\mathrm{d}t & \\\\ & \\quad = \\frac{n} {m} \\frac{1} {2\\pi }{ \\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }\\sin \\left \(mt\\right \)\\sin \\left \(nt\\right \)\\mathrm{d}t & \\\\ & \\quad = \\frac{n} {m}\\left \(\\sin \\left \(mt\\right \)\\vert \\sin \\left \(nt\\right \)\\right \) & \\\\ & \\quad = \\frac{n} {m} \\frac{1} {2\\pi }{\\left.\\frac{-\\cos \\left \(mt\\right \)} {m} \\sin \\left \(nt\\right \)\\right \\vert }_{-\\pi }^{\\pi } - \\frac{n} {m} \\frac{1} {2\\pi }{ \\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }\\frac{-\\cos \\left \(mt\\right \)} {m} n\\cos \\left \(nt\\right \)\\mathrm{d}t& \\\\ & \\quad ={ \\left \( \\frac{n} {m}\\right \)}^{2} \\frac{1} {2\\pi }{ \\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }\\cos \\left \(mt\\right \)\\cos \\left \(nt\\right \)\\mathrm{d}t & \\\\ & \\quad ={ \\left \( \\frac{n} {m}\\right \)}^{2}\\left \(\\cos \\left \(mt\\right \)\\vert \\cos \\left \(nt\\right \)\\right \). & \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ46.gif) When n≠m and m > 0, this clearly proves that ![ $$\\left \(\\cos \\left \(mt\\right \)\\vert \\cos \\left \(nt\\right \)\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq62.gif) and in addition that ![ $$\\left \(\\sin \\left \(mt\\right \)\\vert \\sin \\left \(nt\\right \)\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq63.gif). Finally, let us compute the norm of these functions. 
Clearly, ![ $$\\left \\Vert 1\\right \\Vert = 1$$ ](A81414_1_En_3_Chapter_IEq64.gif). We just proved that ![ $$\\left \\Vert \\cos \\left \(mt\\right \)\\right \\Vert = \\left \\Vert \\sin \\left \(mt\\right \)\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq65.gif). This combined with the fact that ![ $${\\sin }^{2}\\left \(mt\\right \) +{\\cos }^{2}\\left \(mt\\right \) = 1$$ ](A81414_1_En_3_Chapter_Equbs.gif) shows that ![ $$\\left \\Vert \\cos \\left \(mt\\right \)\\right \\Vert = \\left \\Vert \\sin \\left \(mt\\right \)\\right \\Vert = \\frac{1} {\\sqrt{2}}.$$ ](A81414_1_En_3_Chapter_Equbt.gif) Example 3.3.13. Let us try to do Gram-Schmidt on 1, cos t, cos²t using the above inner product. We already know that the first two functions are orthogonal, so ![ $$\\begin{array}{rcl}{ e}_{1}& =& 1, \\\\ {e}_{2}& =& \\sqrt{2}\\cos \\left \(t\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ47.gif) Next, we compute ![ $$\\begin{array}{rcl}{ z}_{2}& =& {\\cos }^{2}\\left \(t\\right \) -\\left \({\\cos }^{2}\\left \(t\\right \)\\vert 1\\right \)1 -\\left \({\\cos }^{2}\\left \(t\\right \)\\vert \\sqrt{2}\\cos \\left \(t\\right \)\\right \)\\sqrt{2}\\cos \\left \(t\\right \) \\\\ & =& {\\cos }^{2}\\left \(t\\right \) - \\frac{1} {2\\pi }\\left \({\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }{\\cos }^{2}\\left \(t\\right \)\\mathrm{d}t\\right \) - \\frac{2} {2\\pi }\\left \({\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }{\\cos }^{2}\\left \(t\\right \)\\cos \\left \(t\\right \)\\mathrm{d}t\\right \)\\cos t \\\\ & =& {\\cos }^{2}\\left \(t\\right \) -\\frac{1} {2} - \\frac{1} {\\pi }\\left \({\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }{\\cos }^{3}\\left \(t\\right \)\\mathrm{d}t\\right \)\\cos t \\\\ & =& {\\cos }^{2}\\left \(t\\right \) -\\frac{1} {2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ48.gif) Thus, the third function is ![ $$\\begin{array}{rcl}{ e}_{3}& =& \\frac{{\\cos }^{2}\\left \(t\\right \) -\\frac{1} {2}} {\\left \\Vert {\\cos }^{2}\\left \(t\\right \) -\\frac{1} {2}\\right \\Vert } \\\\ & =& 2\\sqrt{2}{\\cos }^{2}\\left \(t\\right \) -\\sqrt{2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ49.gif)
### 3.3.1 Exercises 1. Use Gram-Schmidt on the vectors ![ $$\\left \[\\begin{array}{lllll} {x}_{1} & {x}_{2} & {x}_{3} & {x}_{4} & {x}_{5}\\end{array} \\right \] = \\left \[\\begin{array}{lllll} \\sqrt{5}& - 2&4 &e &3 \\\\ 0 &8 &\\pi&2 & - 10 \\\\ 0 &0 &1 + \\sqrt{2}&3 & - 4 \\\\ 0 &0 &0 & - 2&6\\\\ 0 &0 &0 &0 &1\\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equbu.gif) to obtain an orthonormal basis for ![ $${\\mathbb{F}}^{5}$$ ](A81414_1_En_3_Chapter_IEq66.gif). 2. Find an orthonormal basis for ℝ 3 where the first vector is proportional to ![ $$\\left \(1,1,1\\right \)$$ ](A81414_1_En_3_Chapter_IEq67.gif). 3. Apply Gram-Schmidt to the collection x 1 = (1, 0, 1, 0), x 2 = (1, 1, 1, 0), x 3 = (0, 1, 0, 0). 4. Apply Gram-Schmidt to the collection x 1 = (1, 0, 1, 0), x 2 = (0, 1, 1, 0), x 3 = (0, 1, 0, 1) and complete to an orthonormal basis for ℝ 4. 5. Apply Gram-Schmidt to sin t, sin 2t, sin 3t using the inner product ![ $$\\left \(f\\vert g\\right \) = \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equbv.gif) 6. Given an arbitrary collection of vectors x 1,..., x m in an inner product space V, show that it is possible to find orthogonal vectors z 1,..., z n ∈ V such that ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m}\\end{array} \\right \] = \\left \[\\begin{array}{ccc} {z}_{1} & \\cdots &{z}_{n}\\end{array} \\right \]{A}_{\\mathrm{ref}},$$ ](A81414_1_En_3_Chapter_Equbw.gif) where A ref is an n ×m matrix in row echelon form (see Sect. 1.13). Explain how this can be used to solve systems of the form ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m}\\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1}\\\\ \\vdots \\\\ {\\xi }_{m}\\end{array} \\right \] = b.$$ ](A81414_1_En_3_Chapter_Equbx.gif) 7.
The goal of this exercise is to understand the dual basis to a basis x 1,..., x n for an inner product space V. We say that x 1 ∗ ,..., x n ∗ is dual to x 1,..., x n if (x i | x j ∗ ) = δ ij . (a) Show that each basis has a unique dual basis (you have to show it exists, that it is a basis, and that there is only one such basis). (b) Show that if x 1,..., x n is a basis and ![ $$L : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_3_Chapter_IEq68.gif) is the usual coordinate isomorphism given by ![ $$\\begin{array}{rcl} L\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n}\\end{array} \\right \]\\right \)& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n}\\end{array} \\right \] \\\\ & =& {\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{n}{x}_{n}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ50.gif) then its inverse is given by ![ $${L}^{-1}\\left \(x\\right \) = \\left \[\\begin{array}{c} \\left \(x\\vert {x}_{1}^{{_\\ast}}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {x}_{n}^{{_\\ast}}\\right \)\\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equby.gif) (c) Show that a basis is orthonormal if and only if it is self-dual, i.e., it is its own dual basis x i = x i ∗ , i = 1,..., n. (d) Given ![ $$\\left \(1,1,0\\right \),\\left \(1,0,1\\right \),\\left \(0,1,1\\right \) \\in{\\mathbb{R}}^{3}$$ ](A81414_1_En_3_Chapter_IEq69.gif), find the dual basis. (e) Find the dual basis for 1, t, t 2 ∈ P 2 with respect to the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{-1}^{1}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_Equbz.gif) 8. 
Using the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{0}^{1}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_Equca.gif) on ℝt, apply Gram-Schmidt to 1, t, t 2 to find an orthonormal basis for P 2. 9. (Legendre Polynomials) Consider the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{a}^{b}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_Equcb.gif) on ℝt and define ![ $$\\begin{array}{rcl}{ q}_{2n}\\left \(t\\right \)& =&{ \\left \(t - a\\right \)}^{n}{\\left \(t - b\\right \)}^{n}, \\\\ {p}_{n}\\left \(t\\right \)& =& \\frac{{d}^{n}} {\\mathrm{d}{t}^{n}}\\left \({q}_{2n}\\left \(t\\right \)\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ51.gif) (a) Show that ![ $$\\begin{array}{rcl} {q}_{2n}\\left \(a\\right \)& =& {q}_{2n}\\left \(b\\right \) = 0, \\\\ & & \\vdots \\\\ \\frac{{d}^{n-1}{q}_{2n}} {\\mathrm{d}{t}^{n-1}} \\left \(a\\right \)& =& \\frac{{d}^{n-1}{q}_{2n}} {\\mathrm{d}{t}^{n-1}} \\left \(b\\right \) = 0.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ52.gif) (b) Show that p n has degree n. (c) Use induction on n to show that p n t is perpendicular to 1, t,..., t n − 1. Hint: Use integration by parts. (d) Show that p 0, p 1,..., p n ,... are orthogonal to each other. 10. (Lagrange Interpolation) Select n + 1 distinct points t 0,..., t n ∈ ℂ and consider ![ $$\\left \(p\\left \(t\\right \)\\vert q\\left \(t\\right \)\\right \) ={ \\sum\\nolimits }_{i=0}^{n}p\\left \({t}_{ i}\\right \)\\overline{q\\left \({t}_{i}\\right \)}.$$ ](A81414_1_En_3_Chapter_Equcc.gif) (a) Show that this defines an inner product on P n but not on ℂt. 
(b) Consider ![ $$\\begin{array}{rcl} {p}_{0}\\left \(t\\right \)& =& \\frac{\\left \(t - {t}_{1}\\right \)\\left \(t - {t}_{2}\\right \)\\cdots \\left \(t - {t}_{n}\\right \)} {\\left \({t}_{0} - {t}_{1}\\right \)\\left \({t}_{0} - {t}_{2}\\right \)\\cdots \\left \({t}_{0} - {t}_{n}\\right \)}, \\\\ {p}_{1}\\left \(t\\right \)& =& \\frac{\\left \(t - {t}_{0}\\right \)\\left \(t - {t}_{2}\\right \)\\cdots \\left \(t - {t}_{n}\\right \)} {\\left \({t}_{1} - {t}_{0}\\right \)\\left \({t}_{1} - {t}_{2}\\right \)\\cdots \\left \({t}_{1} - {t}_{n}\\right \)}, \\\\ & & \\vdots \\\\ {p}_{n}\\left \(t\\right \)& =& \\frac{\\left \(t - {t}_{0}\\right \)\\left \(t - {t}_{1}\\right \)\\cdots \\left \(t - {t}_{n-1}\\right \)} {\\left \({t}_{n} - {t}_{0}\\right \)\\left \({t}_{n} - {t}_{1}\\right \)\\cdots \\left \({t}_{n} - {t}_{n-1}\\right \)}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ53.gif) Show that p i t j = δ ij and that p 0,..., p n form an orthonormal basis for P n . (c) Use p 0,..., p n to solve the problem of finding a polynomial p ∈ P n such that pt i = b i . (d) Let λ1,..., λ n ∈ ℂ (they may not be distinct) and ![ $$f : \\mathbb{C} \\rightarrow\\mathbb{C}$$ ](A81414_1_En_3_Chapter_IEq70.gif) a function. Show that there is a polynomial pt ∈ ℂt such that pλ1 = fλ1,..., pλ n = fλ n . 11. (P. Enflo) Let V be a finite-dimensional inner product space and x 1,..., x n , y 1,..., y n ∈ V. Show Enflo's inequality ![ $${\\left \({\\sum\\nolimits }_{i,j=1}^{n}{\\left \\vert \\left \({x}_{ i}\\vert {y}_{j}\\right \)\\right \\vert }^{2}\\right \)}^{2} \\leq \\left \({\\sum\\nolimits }_{i,j=1}^{n}{\\left \\vert \\left \({x}_{ i}\\vert {x}_{j}\\right \)\\right \\vert }^{2}\\right \)\\left \({\\sum\\nolimits }_{i,j=1}^{n}{\\left \\vert \\left \({y}_{ i}\\vert {y}_{j}\\right \)\\right \\vert }^{2}\\right \).$$ ](A81414_1_En_3_Chapter_Equcd.gif) Hint: Use an orthonormal basis and start expanding on the left-hand side. 12. 
Let L : V -> V be an operator on a finite-dimensional inner product space. (a) If λ is an eigenvalue for L, then ![ $$\\left \\vert \\lambda \\right \\vert \\leq \\left \\Vert L\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equce.gif) (b) Give examples of 2 ×2 matrices where strict inequality always holds. 13. Let L : V 1 -> V 2 and K : V 2 -> V 3 be linear maps between finite-dimensional inner product spaces. Show that ![ $$\\left \\Vert K \\circ L\\right \\Vert \\leq \\left \\Vert K\\right \\Vert \\left \\Vert L\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equcf.gif) 14. Let L, K : V -> V be operators on a finite-dimensional inner product space. If K is invertible, show that ![ $$\\left \\Vert L\\right \\Vert = \\left \\Vert K \\circ L \\circ{K}^{-1}\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equcg.gif) 15. Let L, K : V -> W be linear maps between finite-dimensional inner product spaces. Show that ![ $$\\left \\Vert L + K\\right \\Vert \\leq \\left \\Vert L\\right \\Vert + \\left \\Vert K\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equch.gif) 16. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times m}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_3_Chapter_IEq71.gif). Show that ![ $$\\left \\vert {\\alpha }_{ij}\\right \\vert \\leq \\left \\Vert A\\right \\Vert,$$ ](A81414_1_En_3_Chapter_Equci.gif) where ![ $$\\left \\Vert A\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq72.gif) is the operator norm of the linear map ![ $$A : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq73.gif). Give examples where ![ $$\\left \\Vert A\\right \\Vert \\neq \\sqrt{\\mathrm{tr }\\left \(A{A}^{{_\\ast} } \\right \)} = \\sqrt{\\left \(A\\vert A \\right \)}.$$ ](A81414_1_En_3_Chapter_Equcj.gif) ## 3.4 Orthogonal Complements and Projections The goal of this section is to figure out if there is a best possible projection onto a subspace of a vector space.
In general, there are quite a lot of projections, but if we have an inner product on the vector space, we can imagine that there should be a projection where the image of a vector is as close as possible to the original vector. Let M ⊂ V be a finite-dimensional subspace of an inner product space. From the previous section, we know that it is possible to find an orthonormal basis e 1,..., e m for M. Using that basis, we define E : V -> V by ![ $$E\\left \(x\\right \) = \\left \(x\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \(x\\vert {e}_{m}\\right \){e}_{m}.$$ ](A81414_1_En_3_Chapter_Equck.gif) Note that Ez ∈ M for all z ∈ V. Moreover, if x ∈ M, then Ex = x. Thus E 2 z = Ez for all z ∈ V. This shows that E is a projection whose image is M. Next, let us identify the kernel. If x ∈ kerE, then ![ $$\\begin{array}{rcl} 0& =& E\\left \(x\\right \) \\\\ & =& \\left \(x\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \(x\\vert {e}_{m}\\right \){e}_{m}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ54.gif) Since e 1,..., e m , is a basis this means that ![ $$\\left \(x\\vert {e}_{1}\\right \) = \\cdots= \\left \(x\\vert {e}_{m}\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq74.gif). This in turn is equivalent to the condition ![ $$\\left \(x\\vert z\\right \) = 0\\text{ for all }z \\in M,$$ ](A81414_1_En_3_Chapter_Equcl.gif) since any z ∈ M is a linear combination of e 1,..., e m . Definition 3.4.1. The set of all such vectors is denoted ![ $${M}^{\\perp } = \\left \\{x \\in V : \\left \(x\\vert z\\right \) = 0\\text{ for all }z \\in M\\right \\}$$ ](A81414_1_En_3_Chapter_Equcm.gif) and is called the orthogonal complement to M in V. Given that kerE = M ⊥ , we have a formula for the kernel that does not depend on E. Thus, E is simply the projection of V onto M along M ⊥ . The only problem with this characterization is that we do not know from the outset that V = M ⊕ M ⊥ . 
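The construction of E can be illustrated with a small numerical sketch in plain Python (the orthonormal basis e1, e2 for a plane M ⊂ ℝ³ and the test vector are illustrative choices, not taken from the text): it checks that E is idempotent and that x − E(x) is orthogonal to M, i.e., lies in M ⊥ .

```python
# Sketch of E(x) = (x|e1) e1 + (x|e2) e2 for an orthonormal basis e1, e2 of M.
# All vectors are plain Python lists; the numbers are illustrative only.

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def scale(c, v):
    return [c * a for a in v]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]   # orthonormal basis for M = the xy-plane in R^3

def E(x):
    """Projection onto M determined by the orthonormal basis e1, e2."""
    return add(scale(dot(x, e1), e1), scale(dot(x, e2), e2))

x = [3.0, -2.0, 5.0]
residual = add(x, scale(-1.0, E(x)))   # x - E(x)

assert E(E(x)) == E(x)            # E^2 = E, so E is a projection
assert dot(residual, e1) == 0.0   # x - E(x) is orthogonal to e1
assert dot(residual, e2) == 0.0   # ... and to e2, hence lies in M-perp
```

The assertions are exactly the projection property E 2 = E and the kernel description kerE = M ⊥ discussed above.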
In case M is finite-dimensional, however, the existence of the projection E insures us that this must be the case as ![ $$x = E\\left \(x\\right \) + \\left \({1}_{V } - E\\right \)\\left \(x\\right \)$$ ](A81414_1_En_3_Chapter_Equcn.gif) and ![ $$\\left \({1}_{V } - E\\right \)\\left \(x\\right \) \\in \\ker \\left \(E\\right \) = {M}^{\\perp }$$ ](A81414_1_En_3_Chapter_IEq75.gif). Fig. 3.5 Orthogonal projection Definition 3.4.2. When there is an orthogonal direct sum decomposition: V = M ⊕ M ⊥ we call the projection onto M along M ⊥ the orthogonal projection onto M and denote it by proj M : V -> V. The vector proj M x also solves our problem of finding the vector in M that is closest to x. To see why this is true, choose z ∈ M and consider the triangle that has the three vectors x, proj M x, and z as vertices. The sides are given by x − proj M x, proj M x − z, and z − x (see Fig. 3.5). Since proj M x − z ∈ M and x − proj M x ∈ M ⊥ , these two vectors are perpendicular, and hence we have ![ $$ \\begin{array}{rcl}{ \\left \\Vert x -\\mathrm{{ proj}}_{M}\\left \(x\\right \)\\right \\Vert }^{2}& \\leq& \\\\ {\\left \\Vert x -\\mathrm{{ proj}}_{M}\\left \(x\\right \)\\right \\Vert }^{2} +{ \\left \\Vert \\mathrm{{proj}}_{ M}\\left \(x\\right \) - z\\right \\Vert }^{2}& =&{ \\left \\Vert x - z\\right \\Vert }^{2}, \\\\ \\end{array} $$ ](A81414_1_En_3_Chapter_Equ55.gif) where equality holds only when ![ $${\\left \\Vert \\mathrm{{proj}}_{M}\\left \(x\\right \) - z\\right \\Vert }^{2} = 0$$ ](A81414_1_En_3_Chapter_IEq76.gif), i.e., proj M x is the one and only point closest to x among all points in M. Let us collect the above information in a theorem. Theorem 3.4.3. (Orthogonal Sum Decomposition) Let V be an inner product space and M ⊂ V a finite-dimensional subspace. 
Then, V = M ⊕ M ⊥ and for any orthonormal basis e 1 ,...,e m for M, the projection onto M along M ⊥ is given by: ![ $$\\mathrm{{proj}}_{M}\\left \(x\\right \) = \\left \(x\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \(x\\vert {e}_{m}\\right \){e}_{m}.$$ ](A81414_1_En_3_Chapter_Equco.gif) Corollary 3.4.4. If V is a finite-dimensional inner product space and M ⊂ V is a subspace, then ![ $$\\begin{array}{rcl} V & =& M \\oplus{M}^{\\perp }, \\\\ {\\left \({M}^{\\perp }\\right \)}^{\\perp }& =& {M}^{\\perp \\perp } = M, \\\\ \\dim V & =& \\dim M +\\dim {M}^{\\perp }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ56.gif) Proof. The first statement was proven in Theorem 3.4.3. The third statement now follows from Corollary 1.10.14. To prove the second statement, select an orthonormal basis e 1,..., e k for M and e k + 1,..., e n for M ⊥ . Then, we see that e 1,..., e k ∈ M ⊥ ⊥ and consequently M ⊂ M ⊥ ⊥ . On the other hand, note that, if we apply the first and third statements to M ⊥ instead of M, then we obtain ![ $$\\dim V =\\dim {M}^{\\perp } +\\dim {M}^{\\perp \\perp }.$$ ](A81414_1_En_3_Chapter_Equcp.gif) In particular, we have dimM = dimM ⊥ ⊥ . Combined with M ⊂ M ⊥ ⊥ , this proves the claim. □ Orthogonal projections can also be characterized as follows. Theorem 3.4.5. (Characterization of Orthogonal Projections) Assume that V is a finite-dimensional inner product space and E : V -> V a projection onto M ⊂ V. Then, the following conditions are equivalent: (1) E = projM (2) im E ⊥ = ker E (3) ![ $$\\left \\Vert E\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq77.gif) for all x ∈ V Proof. We have already seen that the first two conditions are equivalent.
These two conditions imply the third as x = Ex + 1 V − Ex is an orthogonal decomposition, and thus, ![ $$\\begin{array}{rcl}{ \\left \\Vert x\\right \\Vert }^{2}& =&{ \\left \\Vert E\\left \(x\\right \)\\right \\Vert }^{2} +{ \\left \\Vert \\left \({1}_{ V } - E\\right \)\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & \\geq &{ \\left \\Vert E\\left \(x\\right \)\\right \\Vert }^{2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ57.gif) It remains to be seen that the third condition implies that E is orthogonal. To prove this, choose x ∈ kerE ⊥ and observe that Ex = x − 1 V − Ex is an orthogonal decomposition since ![ $$\\left \({1}_{V } - E\\right \)\\left \(z\\right \) \\in \\ker \\left \(E\\right \)$$ ](A81414_1_En_3_Chapter_IEq78.gif) for all z ∈ V. Thus, ![ $$\\begin{array}{rcl}{ \\left \\Vert x\\right \\Vert }^{2}& \\geq &{ \\left \\Vert E\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & =&{ \\left \\Vert x -\\left \({1}_{V } - E\\right \)\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & =&{ \\left \\Vert x\\right \\Vert }^{2} +{ \\left \\Vert \\left \({1}_{ V } - E\\right \)\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & \\geq &{ \\left \\Vert x\\right \\Vert }^{2} \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ58.gif) This means that ![ $$\\left \({1}_{V } - E\\right \)\\left \(x\\right \) = 0$$ ](A81414_1_En_3_Chapter_IEq79.gif) and hence x = Ex ∈ imE. Thus, kerE ⊥ ⊂ imE. We also know from the dimension formula (Theorem 1.11.7) and Corollary 3.4.4 that ![ $$\\begin{array}{rcl} \\dim \\left \(\\mathrm{im}\\left \(E\\right \)\\right \)& =& \\dim \\left \(V \\right \) -\\dim \\left \(\\ker \\left \(E\\right \)\\right \) \\\\ & =& \\dim \\left \(\\ker {\\left \(E\\right \)}^{\\perp }\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ59.gif) This shows that kerE ⊥ = imE. □ Example 3.4.6. Let V = ℝ n and M = span1,..., 1. 
Since ![ $${\\left \\Vert \\left \(1,\\ldots,1\\right \)\\right \\Vert }^{2} = n$$ ](A81414_1_En_3_Chapter_IEq80.gif), we see that ![ $$\\begin{array}{rcl} \\mathrm{{proj}}_{M}\\left \(x\\right \)& =& \\mathrm{{proj}}_{M}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\right \) \\\\ & =& \\frac{1} {n}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \]\\left \\vert \\left \[\\begin{array}{c} 1\\\\ \\vdots\\\\ 1 \\end{array} \\right \]\\right.\\right \)\\left \[\\begin{array}{c} 1\\\\ \\vdots\\\\ 1 \\end{array} \\right \] \\\\ & =& \\frac{{\\alpha }_{1} + \\cdots+ {\\alpha }_{n}} {n} \\left \[\\begin{array}{c} 1\\\\ \\vdots\\\\ 1 \\end{array} \\right \] \\\\ & =& \\bar{\\alpha }\\left \[\\begin{array}{c} 1\\\\ \\vdots\\\\ 1 \\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ60.gif) where ![ $$\\bar{\\alpha }$$ ](A81414_1_En_3_Chapter_IEq81.gif) is the average or mean of the values α1,..., α n . Since proj M x is the closest element in M to x, we get a geometric interpretation of the average of α1,..., α n .
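This interpretation is easy to check numerically. The following plain-Python sketch (with illustrative numbers only) projects onto span1,..., 1 and recovers the mean, and it confirms that the residual x − proj M x is orthogonal to 1,..., 1:

```python
# Sketch: for M = span{(1,...,1)} in R^n, the projection of x is the constant
# vector whose common entry is the mean of the coordinates of x.

def proj_mean(x):
    n = len(x)
    coeff = sum(x) / n        # (x | (1,...,1)) / ||(1,...,1)||^2
    return [coeff] * n

x = [2.0, 4.0, 9.0]
assert proj_mean(x) == [5.0, 5.0, 5.0]   # the mean of 2, 4, 9 is 5

# The residual is orthogonal to (1,...,1): its coordinates sum to zero.
assert sum(a - b for a, b in zip(x, proj_mean(x))) == 0.0
```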
If in addition we use that proj M x and x − proj M x are perpendicular, we arrive at a nice formula that helps compute the variance: ![ $$\\mathrm{Var}\\left \({\\alpha }_{1},\\ldots,{\\alpha }_{n}\\right \) = \\frac{1} {n - 1}{\\sum\\nolimits }_{i=1}^{n}{\\left \\vert {\\alpha }_{ i} -\\bar{ \\alpha }\\right \\vert }^{2},$$ ](A81414_1_En_3_Chapter_Equcq.gif) where ![ $$\\begin{array}{rcl} {\\sum\\nolimits }_{i=1}^{n}{\\left \\vert {\\alpha }_{ i} -\\bar{ \\alpha }\\right \\vert }^{2}& =&{ \\left \\Vert x -\\mathrm{{ proj}}_{ M}\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & =&{ \\left \\Vert x\\right \\Vert }^{2} -{\\left \\Vert \\mathrm{{proj}}_{ M}\\left \(x\\right \)\\right \\Vert }^{2} \\\\ & =& {\\sum\\nolimits }_{i=1}^{n}{\\left \\vert {\\alpha }_{ i}\\right \\vert }^{2} -{\\sum\\nolimits }_{i=1}^{n}{\\left \\vert \\bar{\\alpha }\\right \\vert }^{2} \\\\ & =& \\left \({\\sum\\nolimits }_{i=1}^{n}{\\left \\vert {\\alpha }_{ i}\\right \\vert }^{2}\\right \) - n{\\left \\vert \\bar{\\alpha }\\right \\vert }^{2} \\\\ & =& \\left \({\\sum\\nolimits }_{i=1}^{n}{\\left \\vert {\\alpha }_{ i}\\right \\vert }^{2}\\right \) -\\frac{{\\left \({\\sum\\nolimits }_{i=1}^{n}{\\alpha }_{i}\\right \)}^{2}} {n}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ61.gif) Example 3.4.7. As above, let M ⊂ V be a finite-dimensional subspace of an inner product space and e 1,..., e m an orthonormal basis for M. 
Using the formula ![ $$\\begin{array}{rcl} \\mathrm{{proj}}_{M}\\left \(x\\right \)& =& \\left \(x\\vert {e}_{1}\\right \){e}_{1} + \\cdots+ \\left \(x\\vert {e}_{m}\\right \){e}_{m}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ62.gif) the inequality ![ $$\\left \\Vert \\mathrm{{proj}}_{M}\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert x\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq82.gif) translates into the Bessel inequality ![ $${\\left \\vert \\left \(x\\vert {e}_{1}\\right \)\\right \\vert }^{2} + \\cdots+{ \\left \\vert \\left \(x\\vert {e}_{ m}\\right \)\\right \\vert }^{2} \\leq {\\left \\Vert x\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equcr.gif) ### 3.4.1 Exercises 1. Consider Mat n ×n ℂ with the inner product ![ $$\\left \(A\\vert B\\right \) =\\mathrm{ tr}\\left \(A{B}^{{_\\ast}}\\right \)$$ ](A81414_1_En_3_Chapter_IEq83.gif). Describe the orthogonal complement to the space of all diagonal matrices. 2. Show that if M = spanz 1,..., z m , then ![ $${M}^{\\perp } = \\left \\{x \\in V : \\left \(x\\vert {z}_{ 1}\\right \) = \\cdots= \\left \(x\\vert {z}_{m}\\right \) = 0\\right \\}.$$ ](A81414_1_En_3_Chapter_Equcs.gif) 3. Assume V = M ⊕ M ⊥ , show that ![ $$x =\\mathrm{{ proj}}_{M}\\left \(x\\right \) +\\mathrm{{ proj}}_{{M}^{\\perp }}\\left \(x\\right \).$$ ](A81414_1_En_3_Chapter_Equct.gif) 4. Find the element in span1, cos t, sin t that is closest to sin2 t using the inner product ![ $$\\left \(f\\vert g\\right \) = \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equcu.gif) 5. Assume V = M ⊕ M ⊥ and that L : V -> V is a linear operator. Show that both M and M ⊥ are L-invariant if and only if proj M ∘ L = L ∘ proj M . 6. Let A ∈ Mat m ×n ℝ. a. Show that the row vectors of A are in the orthogonal complement of kerA. b. Use this to show that the row rank and column rank of A are the same. 7.
Let M, N ⊂ V be subspaces of a finite-dimensional inner product space. Show that ![ $$\\begin{array}{rcl}{ \\left \(M + N\\right \)}^{\\perp }& =& {M}^{\\perp }\\cap{N}^{\\perp }, \\\\ {\\left \(M \\cap N\\right \)}^{\\perp }& =& {M}^{\\perp } + {N}^{\\perp }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ63.gif) 8. Find the orthogonal projection onto span2, − 1, 1, 1, − 1, 0 by first computing the orthogonal projection onto the orthogonal complement. 9. Find the polynomial pt ∈ P 2 such that ![ $${\\int\\nolimits \\nolimits }_{0}^{2\\pi }{\\left \\vert p\\left \(t\\right \) -\\cos t\\right \\vert }^{2}\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_Equcv.gif) is smallest possible. 10. Show that the decomposition into even and odd functions on C 0 − a, a, ℂ is orthogonal if we use the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{-a}^{a}f\\left \(t\\right \)\\overline{g\\left \(t\\right \)}\\mathrm{d}t.$$ ](A81414_1_En_3_Chapter_Equcw.gif) 11. Using the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{0}^{1}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t,$$ ](A81414_1_En_3_Chapter_Equcx.gif) find the orthogonal projection from ℂt onto span1, t = P 1. Given any p ∈ ℂt, you should express the orthogonal projection in terms of the coefficients of p. 12. Using the inner product ![ $$\\left \(f\\vert g\\right \) ={ \\int\\nolimits \\nolimits }_{0}^{1}f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t,$$ ](A81414_1_En_3_Chapter_Equcy.gif) find the orthogonal projection from ℂt onto span1, t, t 2 = P 2. 13. 
Compute the orthogonal projection onto the following subspaces: (a) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ 1 \\\\ 1\\\\ 1\\end{array} \\right \]\\right \\}$$ ](A81414_1_En_3_Chapter_Equcz.gif) (b) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ -1 \\\\ 0\\\\ 1\\end{array} \\right \],\\left \[\\begin{array}{c} 1\\\\ 1 \\\\ 1\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} 2\\\\ 0 \\\\ 1\\\\ 1\\end{array} \\right \]\\right \\}$$ ](A81414_1_En_3_Chapter_Equda.gif) (c) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ i \\\\ 0\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} - i\\\\ 1 \\\\ 0\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} 0\\\\ 1 \\\\ i\\\\ 0\\end{array} \\right \]\\right \\}$$ ](A81414_1_En_3_Chapter_Equdb.gif) 14. (Selberg) Let x, y 1,..., y n ∈ V, where V is an inner product space. Show Selberg's generalization of Bessel's inequality ![ $${\\sum\\nolimits }_{i=1}^{n} \\frac{{\\left \\vert \\left \(x\\vert {y}_{i}\\right \)\\right \\vert }^{2}} {{\\sum\\nolimits }_{j=1}^{n}\\left \\vert \\left \({y}_{i}\\vert {y}_{j}\\right \)\\right \\vert } \\leq {\\left \\Vert x\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equdc.gif) Hint: It is a long calculation that comes from expanding the nonnegative quantity ![ $${\\left \\Vert x -{\\sum\\nolimits }_{i=1}^{n} \\frac{\\left \(x\\vert {y}_{i}\\right \)} {{\\sum\\nolimits }_{j=1}^{n}\\left \\vert \\left \({y}_{i}\\vert {y}_{j}\\right \)\\right \\vert }{y}_{i}\\right \\Vert }^{2}.$$ ](A81414_1_En_3_Chapter_Equdd.gif) ## 3.5 Adjoint Maps To introduce the concept of adjoints of linear maps, we start with the construction for matrices, i.e., linear maps ![ $$A : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq84.gif), where ![ $$\\mathbb{F} = \\mathbb{R}$$ ](A81414_1_En_3_Chapter_IEq85.gif) or ℂ and ![ $${\\mathbb{F}}^{m}$$ ](A81414_1_En_3_Chapter_IEq86.gif) and ![ $${\\mathbb{F}}^{n}$$ 
](A81414_1_En_3_Chapter_IEq87.gif) are equipped with their standard inner products. We can write A as an n ×m matrix and define the adjoint A ∗ = \bar{A} t , i.e., A ∗ is the conjugate transpose of A. In case ![ $$\\mathbb{F} = \\mathbb{R}$$ ](A81414_1_En_3_Chapter_IEq88.gif), conjugation is irrelevant, so A ∗ = A t . Note that since A ∗ is an m ×n matrix, it corresponds to a linear map ![ $${A}^{{_\\ast}} : {\\mathbb{F}}^{n} \\rightarrow{\\mathbb{F}}^{m}$$ ](A81414_1_En_3_Chapter_IEq89.gif). This matrix adjoint satisfies the crucial property ![ $$\\left \(Ax\\vert y\\right \) = \\left \(x\\vert {A}^{{_\\ast}}y\\right \)$$ ](A81414_1_En_3_Chapter_Equde.gif) for all x ∈ V and y ∈ W. To see this, we simply think of x as an m ×1 matrix, y as an n ×1 matrix, and observe that ![ $$\\begin{array}{rcl} \\left \(Ax\\vert y\\right \)& =&{ \\left \(Ax\\right \)}^{t}\\bar{y} \\\\ & =& {x}^{t}{A}^{t}\\bar{y} \\\\ & =& {x}^{t}\\overline{\\left \(\\bar{{A}}^{t}y\\right \)} \\\\ & =& \\left \(x\\vert {A}^{{_\\ast}}y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ64.gif) In the general case of a linear map L : V -> W between finite-dimensional spaces, we can try to define the adjoint through matrix representations. To this end, select orthonormal bases for V and W so that we have a diagram ![ $$\\begin{array}{ccc} V & \\frac{L} {\\rightarrow } & W \\\\ \\updownarrow& &\\updownarrow\\\\ {\\mathbb{F}}^{m}&\\frac{\\left \[L\\right \]} {\\rightarrow } &{\\mathbb{F}}^{n} \\end{array},$$ ](A81414_1_En_3_Chapter_Equdf.gif) where the vertical double arrows are isometries. Then, define L ∗ : W -> V as the linear map whose matrix representation is ![ $${\\left \[L\\right \]}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_IEq90.gif).
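The matrix adjoint can be sketched numerically in plain Python using built-in complex numbers (the matrix and vectors below are illustrative choices, not from the text): the adjoint is the conjugate transpose, and the sketch verifies the defining identity above for the standard inner products.

```python
# Sketch: the adjoint of a complex matrix is its conjugate transpose, and it
# satisfies (Ax|y) = (x|A*y) with the standard inner product
# (u|v) = sum_i u_i * conj(v_i).  Matrices are lists of rows; entries are
# illustrative only.

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def adjoint(A):
    """Conjugate transpose: entry (j, i) of the result is conj(A[i][j])."""
    rows, cols = len(A), len(A[0])
    return [[A[i][j].conjugate() for i in range(rows)] for j in range(cols)]

A = [[1 + 2j, 3j],
     [0j, 4 - 1j]]        # A : C^2 -> C^2
x = [1 - 1j, 2 + 0j]
y = [3j, 1 + 1j]

assert adjoint(A) == [[1 - 2j, 0j], [-3j, 4 + 1j]]
assert inner(matvec(A, x), y) == inner(x, matvec(adjoint(A), y))
```

For real matrices, the conjugation is a no-op and adjoint reduces to the ordinary transpose, matching A ∗ = A t above.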
In other words, ![ $$\\left \[{L}^{{_\\ast}}\\right \] ={ \\left \[L\\right \]}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_IEq91.gif) and the following diagram commutes: ![ $$\\begin{array}{ccc} V & \\frac{{L}^{{_\\ast}}} {\\leftarrow } & W \\\\ \\updownarrow& &\\updownarrow\\\\ {\\mathbb{F}}^{m}&\\frac{{\\left \[L\\right \]}^{{_\\ast}}} {\\leftarrow } &{\\mathbb{F}}^{n} \\end{array}.$$ ](A81414_1_En_3_Chapter_Equdg.gif) Because the vertical arrows are isometries we also have ![ $$\\left \(Lx\\vert y\\right \) = \\left \(x\\vert {L}^{{_\\ast}}y\\right \).$$ ](A81414_1_En_3_Chapter_Equdh.gif) Proposition 3.5.1. Let L : V -> W be a linear map between finite-dimensional spaces. Then, there is a unique adjoint L ∗ : W -> V with the property that ![ $$\\left \(Lx\\vert y\\right \) = \\left \(x\\vert {L}^{{_\\ast}}y\\right \)$$ ](A81414_1_En_3_Chapter_Equdi.gif) for all x ∈ V and y ∈ W. Proof. We saw already that such an adjoint exists, but we can give a similar construction of L ∗ that uses only an orthonormal basis e 1,..., e m for V. To define L ∗ y, we need to know the inner products ![ $$\\left \({L}^{{_\\ast}}y\\vert {e}_{j}\\right \)$$ ](A81414_1_En_3_Chapter_IEq92.gif). 
The relationship ![ $$\\left \(Lx\\vert y\\right \) = \\left \(x\\vert {L}^{{_\\ast}}y\\right \)$$ ](A81414_1_En_3_Chapter_IEq93.gif) indicates that ![ $$\\left \({L}^{{_\\ast}}y\\vert {e}_{j}\\right \)$$ ](A81414_1_En_3_Chapter_IEq94.gif) can be calculated as ![ $$\\begin{array}{rcl} \\left \({L}^{{_\\ast}}y\\vert {e}_{ j}\\right \)& =& \\overline{\\left \({e}_{j}\\vert {L}^{{_\\ast}}y\\right \)} \\\\ & =& \\overline{\\left \(L{e}_{j}\\vert y\\right \)} \\\\ & =& \\left \(y\\vert L{e}_{j}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ65.gif) So let us define ![ $${L}^{{_\\ast}}y ={ \\sum\\nolimits }_{j=1}^{m}\\left \(y\\vert L{e}_{ j}\\right \){e}_{j}.$$ ](A81414_1_En_3_Chapter_Equdj.gif) This clearly defines a linear map L ∗ : W -> V satisfying ![ $$\\left \(L{e}_{j}\\vert y\\right \) = \\left \({e}_{j}\\vert {L}^{{_\\ast}}y\\right \).$$ ](A81414_1_En_3_Chapter_Equdk.gif) The more general condition ![ $$\\left \(Lx\\vert y\\right \) = \\left \(x\\vert {L}^{{_\\ast}}y\\right \)$$ ](A81414_1_En_3_Chapter_Equdl.gif) follows immediately by writing x as a linear combination of e 1,..., e m and using linearity in x on both sides of the equation. Next, we address the issue of whether the adjoint is uniquely defined, i.e., could there be two linear maps K i : W -> V, i = 1, 2 such that ![ $$\\left \(x\\vert {K}_{1}y\\right \) = \\left \(Lx\\vert y\\right \) = \\left \(x\\vert {K}_{2}y\\right \)?$$ ](A81414_1_En_3_Chapter_Equdm.gif) This would imply ![ $$\\begin{array}{rcl} 0& =& \\left \(x\\vert {K}_{1}y\\right \) -\\left \(x\\vert {K}_{2}y\\right \) \\\\ & =& \\left \(x\\vert {K}_{1}y - {K}_{2}y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ66.gif) If x = K 1 y − K 2 y, then ![ $${\\left \\Vert {K}_{1}y - {K}_{2}y\\right \\Vert }^{2} = 0,$$ ](A81414_1_En_3_Chapter_Equdn.gif) and hence, K 1 y = K 2 y. This proves the claims. □ The adjoint has the following useful elementary properties. Proposition 3.5.2. 
Let L,K : V -> W, L 1 : V 1 -> V 2 , and L 2 : V 2 -> V 3 be linear maps between finite-dimensional inner product spaces. Then (1) ![ $${\\left \(L + K\\right \)}^{{_\\ast}} = {L}^{{_\\ast}} + {K}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_IEq95.gif). (2) L ∗∗ = L (3) ![ $${\\left \(\\lambda {1}_{V }\\right \)}^{{_\\ast}} =\\bar{ \\lambda }{1}_{V }$$ ](A81414_1_En_3_Chapter_IEq96.gif). (4) ![ $${\\left \({L}_{2}{L}_{1}\\right \)}^{{_\\ast}} = {L}_{1}^{{_\\ast}}{L}_{2}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_IEq97.gif). (5) If L is invertible, then ![ $${\\left \({L}^{-1}\\right \)}^{{_\\ast}} ={ \\left \({L}^{{_\\ast}}\\right \)}^{-1}$$ ](A81414_1_En_3_Chapter_IEq98.gif). Proof. The key to the proofs of these statements is the uniqueness statement in Proposition 3.5.1, i.e., any L ′ : W -> V such that ![ $$\\left \(Lx\\vert y\\right \) = \\left \(x\\vert {L}^{{\\prime}}y\\right \)$$ ](A81414_1_En_3_Chapter_IEq99.gif) for all x ∈ V and y ∈ W must be the adjoint L ′ = L ∗ . To check the first property, we calculate ![ $$\\begin{array}{rcl} \\left \(x\\vert {\\left \(L + K\\right \)}^{{_\\ast}}y\\right \)& =& \\left \(\\left \(L + K\\right \)x\\vert y\\right \) \\\\ & =& \\left \(Lx\\vert y\\right \) + \\left \(Kx\\vert y\\right \) \\\\ & =& \\left \(x\\vert {L}^{{_\\ast}}y\\right \) + \\left \(x\\vert {K}^{{_\\ast}}y\\right \) \\\\ & =& \\left \(x\\vert \\left \({L}^{{_\\ast}} + {K}^{{_\\ast}}\\right \)y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ67.gif) The second is immediate from ![ $$\\begin{array}{rcl} \\left \(Lx\\vert y\\right \)& =& \\left \(x\\vert {L}^{{_\\ast}}y\\right \) \\\\ & =& \\overline{\\left \({L}^{{_\\ast}}y\\vert x\\right \)} \\\\ & =& \\overline{\\left \(y\\vert {L}^{{_\\ast}{_\\ast}}x\\right \)} \\\\ & =& \\left \({L}^{{_\\ast}{_\\ast}}x\\vert y\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ68.gif) The third property follows from ![ $$\\begin{array}{rcl} \\left \(\\lambda {1}_{V }\\left \(x\\right \)\\vert y\\right \)& =& \\left 
\(\\lambda x\\vert y\\right \) \\\\ & =& \\left \(x\\vert \\bar{\\lambda }y\\right \) \\\\ & =& \\left \(x\\vert \\bar{\\lambda }{1}_{V }\\left \(y\\right \)\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ69.gif) The fourth property ![ $$\\begin{array}{rcl} \\left \(x\\vert {\\left \({L}_{2}{L}_{1}\\right \)}^{{_\\ast}}y\\right \)& =& \\left \(\\left \({L}_{ 2}{L}_{1}\\right \)\\left \(x\\right \)\\vert y\\right \) \\\\ & =& \\left \({L}_{2}\\left \({L}_{1}\\left \(x\\right \)\\right \)\\vert y\\right \) \\\\ & =& \\left \({L}_{1}\\left \(x\\right \)\\vert {L}_{2}^{{_\\ast}}\\left \(y\\right \)\\right \) \\\\ & =& \\left \(x\\vert {L}_{1}^{{_\\ast}}\\left \({L}_{ 2}^{{_\\ast}}\\left \(y\\right \)\\right \)\\right \) \\\\ & =& \\left \(x\\vert \\left \({L}_{1}^{{_\\ast}}{L}_{ 2}^{{_\\ast}}\\right \)\\left \(y\\right \)\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ70.gif) And finally 1 V = L − 1 L implies that ![ $$\\begin{array}{rcl}{ 1}_{V }& =&{ \\left \({1}_{V }\\right \)}^{{_\\ast}} \\\\ & =&{ \\left \({L}^{-1}L\\right \)}^{{_\\ast}} \\\\ & =& {L}^{{_\\ast}}{\\left \({L}^{-1}\\right \)}^{{_\\ast}} \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ71.gif) as desired. □ Example 3.5.3. As an example, let us find the adjoint to ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V,$$ ](A81414_1_En_3_Chapter_Equdo.gif) when e 1,..., e n is an orthonormal basis. Recall that in Sect. 3.3, we already found a simple formula for the inverse ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}\\left \(x\\right \) = \\left \[\\begin{array}{c} \\left \(x\\vert {e}_{1}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {e}_{n}\\right \) \\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equdp.gif) and we proved that ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_IEq100.gif)preserves inner products. 
If we let ![ $$x \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq101.gif) and y ∈ V, then we can write ![ $$y = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \(z\\right \)$$ ](A81414_1_En_3_Chapter_IEq102.gif) for some ![ $$z \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_3_Chapter_IEq103.gif). With that in mind, we can calculate ![ $$\\begin{array}{rcl} \\left \(\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \(x\\right \)\\vert y\\right \)& =& \\left \(\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \(x\\right \)\\left \\vert \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \(z\\right \)\\right.\\right \) \\\\ & =& \\left \(x\\vert z\\right \) \\\\ & =& \\left \(x\\left \\vert {\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}\\left \(y\\right \)\\right.\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ72.gif) Thus, ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} ={ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}.$$ ](A81414_1_En_3_Chapter_Equdq.gif) Below we shall generalize this relationship to all isomorphisms that preserve inner products, i.e., isometries. The fact that ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} ={ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}$$ ](A81414_1_En_3_Chapter_Equdr.gif) simplifies the job of calculating matrix representations with respect to orthonormal bases. Assume that L : V -> W is a linear map between finite-dimensional inner product spaces and that we have orthonormal bases e 1,..., e m for V and f 1,..., f n for W. 
Then, ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[L\\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]}^{{_\\ast}}, \\\\ \\left \[L\\right \]& =&{ \\left \[\\begin{array}{ccc} {f}_{1} & \\cdots &{f}_{n} \\end{array} \\right \]}^{{_\\ast}}L\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \], \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ73.gif) or in diagram form ![ $$\\begin{array}{rrl} V & \\frac{L} {\\rightarrow } &W \\\\ {\\left \[\\begin{array}{ccc} {e}_{1}&\\cdots &{e}_{m} \\end{array} \\right \]}^{{_\\ast}}\\downarrow & &\\uparrow \\left \[\\begin{array}{ccc} {f}_{1}&\\cdots &{f}_{n} \\end{array} \\right \] \\\\ {\\mathbb{F}}^{m}&\\frac{\\left \[L\\right \]} {\\rightarrow } &{\\mathbb{F}}^{n} \\end{array}$$ ](A81414_1_En_3_Chapter_Equds.gif) ![ $$\\begin{array}{rrl} V & \\frac{L} {\\rightarrow } &W \\\\ \\left \[\\begin{array}{ccc} {e}_{1}&\\cdots &{e}_{m} \\end{array} \\right \] \\uparrow & &\\downarrow {\\left \[\\begin{array}{ccc} {f}_{1}&\\cdots &{f}_{n} \\end{array} \\right \]}^{{_\\ast}} \\\\ {\\mathbb{F}}^{m}&\\frac{\\left \[L\\right \]} {\\rightarrow } &{\\mathbb{F}}^{n} \\end{array}$$ ](A81414_1_En_3_Chapter_Equdt.gif) From this, we see that the matrix definition of the adjoint is justified since the properties of the adjoint now tell us that: ![ $$\\begin{array}{rcl}{ L}^{{_\\ast}}& =&{ \\left \(\\left \[\\begin{array}{ccc} {f}_{ 1} & \\cdots &{f}_{n} \\end{array} \\right \]\\left \[L\\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]}^{{_\\ast}}\\right \)}^{{_\\ast}} \\\\ & =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{m} \\end{array} \\right \]{\\left \[L\\right \]}^{{_\\ast}}{\\left \[\\begin{array}{ccc} {f}_{ 1} & \\cdots &{f}_{n} \\end{array} \\right \]}^{{_\\ast}}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ74.gif) A linear map and its adjoint have some 
remarkable relationships between their images and kernels. These properties are called the Fredholm alternatives and are named after Fredholm, who first used them to clarify when certain linear systems Lx = b can be solved (see also Sect. 4.9). Theorem 3.5.4. (The Fredholm Alternative) Let L : V -> W be a linear map between finite-dimensional inner product spaces. Then ![ $$\\begin{array}{rcl} \\ker \\left \(L\\right \)& =& \\mathrm{im}{\\left \({L}^{{_\\ast}}\\right \)}^{\\perp }, \\\\ \\ker \\left \({L}^{{_\\ast}}\\right \)& =& \\mathrm{im}{\\left \(L\\right \)}^{\\perp }, \\\\ \\ker {\\left \(L\\right \)}^{\\perp }& =& \\mathrm{im}\\left \({L}^{{_\\ast}}\\right \), \\\\ \\ker {\\left \({L}^{{_\\ast}}\\right \)}^{\\perp }& =& \\mathrm{im}\\left \(L\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ75.gif) Proof. Since L ∗ ∗ = L and M ⊥ ⊥ = M, we see that all of the four statements are equivalent to each other. Thus, we need only prove the first. The two subspaces are characterized by ![ $$\\begin{array}{rcl} \\ker \\left \(L\\right \)& =& \\left \\{x \\in V : Lx = 0\\right \\}, \\\\ \\mathrm{im}{\\left \({L}^{{_\\ast}}\\right \)}^{\\perp }& =& \\left \\{x \\in V : \\left \(x\\vert {L}^{{_\\ast}}z\\right \) = 0\\text{ for all }z \\in W\\right \\}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ76.gif) Now, fix x ∈ V and use that ![ $$\\left \(Lx\\vert z\\right \) = \\left \(x\\vert {L}^{{_\\ast}}z\\right \)$$ ](A81414_1_En_3_Chapter_IEq104.gif) for all z ∈ W. This implies first that if x ∈ kerL, then also x ∈ imL ∗ ⊥ . Conversely, if 0 = x | L ∗ z = Lx | z for all z ∈ W, it must follow that Lx = 0 and hence x ∈ kerL. □ Corollary 3.5.5. (The Rank Theorem) Let L : V -> W be a linear map between finite-dimensional inner product spaces. Then ![ $$\\mathrm{rank}\\left \(L\\right \) =\\mathrm{ rank}\\left \({L}^{{_\\ast}}\\right \).$$ ](A81414_1_En_3_Chapter_Equdu.gif) Proof.
Using the dimension formula (Theorem 1.11.7) for linear maps and that orthogonal complements have complementary dimension (Corollary 3.4.4) together with the Fredholm alternative, we see ![ $$\\begin{array}{rcl} \\dim V & =& \\dim \\left \(\\ker \\left \(L\\right \)\\right \) +\\dim \\left \(\\mathrm{im}\\left \(L\\right \)\\right \) \\\\ & =& \\dim \\left \(\\mathrm{im}{\\left \({L}^{{_\\ast}}\\right \)}^{\\perp }\\right \) +\\dim \\left \(\\mathrm{im}\\left \(L\\right \)\\right \) \\\\ & =& \\dim V -\\dim \\left \(\\mathrm{im}\\left \({L}^{{_\\ast}}\\right \)\\right \) +\\dim \\left \(\\mathrm{im}\\left \(L\\right \)\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ77.gif) This implies the result. □ Next, we give another proof of the rank theorem for real and complex matrices (see Theorem 1.12.11). Corollary 3.5.6. For a real or complex n × m matrix A, the column rank equals the row rank. Proof. First, note that ![ $$\\mathrm{rank}\\left \(B\\right \) =\\mathrm{ rank}\\left \(\\bar{B}\\right \)$$ ](A81414_1_En_3_Chapter_IEq105.gif) for all complex matrices B. Secondly, the column rank of A is rankA, which by Corollary 3.5.5 equals rankA ∗ . Since A ∗ is the conjugate transpose of A, rankA ∗ is the row rank of \bar{A}, which by the first observation equals the row rank of A. This proves the result. □ Corollary 3.5.7. Let L : V -> V be a linear operator on a finite-dimensional inner product space. Then, λ is an eigenvalue for L if and only if ![ $$\\bar{\\lambda }$$ ](A81414_1_En_3_Chapter_IEq106.gif) is an eigenvalue for L ∗ . Moreover, these eigenvalue pairs have the same geometric multiplicity: ![ $$\\dim \\left \(\\ker \\left \(L - \\lambda {1}_{V }\\right \)\\right \) =\\dim \\left \(\\ker \\left \({L}^{{_\\ast}}-\\bar{ \\lambda }{1}_{ V }\\right \)\\right \).$$ ](A81414_1_En_3_Chapter_Equdv.gif) Proof. It suffices to prove the dimension statement. Note that ![ $${\\left \(L - \\lambda {1}_{V }\\right \)}^{{_\\ast}} = {L}^{{_\\ast}}-\\bar{ \\lambda }{1}_{V }$$ ](A81414_1_En_3_Chapter_IEq107.gif).
Thus, the result follows if we can show ![ $$\\dim \\left \(\\ker \\left \(K\\right \)\\right \) =\\dim \\left \(\\ker \\left \({K}^{{_\\ast}}\\right \)\\right \)$$ ](A81414_1_En_3_Chapter_Equdw.gif) for K : V -> V. This comes from using the dimension formula (Theorem 1.11.7) and Corollary 3.5.5 ![ $$\\begin{array}{rcl} \\dim \\left \(\\ker \\left \(K\\right \)\\right \)& =& \\dim V -\\dim \\left \(\\mathrm{im}\\left \(K\\right \)\\right \) \\\\ & =& \\dim V -\\dim \\left \(\\mathrm{im}\\left \({K}^{{_\\ast}}\\right \)\\right \) \\\\ & =& \\dim \\left \(\\ker \\left \({K}^{{_\\ast}}\\right \)\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ78.gif) □ ### 3.5.1 Exercises 1. Let V and W be finite-dimensional inner product spaces. (a) Show that we can define an inner product on ![ $$\\mathrm{{Hom}}_{\\mathbb{F}}\\left \(V,W\\right \)$$ ](A81414_1_En_3_Chapter_IEq108.gif) by ![ $$\\left \(L\\vert K\\right \) =\\mathrm{ tr}\\left \(L{K}^{{_\\ast}}\\right \) =\\mathrm{ tr}\\left \({K}^{{_\\ast}}L\\right \)$$ ](A81414_1_En_3_Chapter_IEq109.gif). (b) Show that ![ $$\\left \(K\\vert L\\right \) = \\left \({L}^{{_\\ast}}\\vert {K}^{{_\\ast}}\\right \)$$ ](A81414_1_En_3_Chapter_IEq110.gif). (c) If e 1,..., e m is an orthonormal basis for V, show that ![ $$\\left \(K\\vert L\\right \) = \\left \(K\\left \({e}_{1}\\right \)\\vert L\\left \({e}_{1}\\right \)\\right \) + \\cdots+ \\left \(K\\left \({e}_{m}\\right \)\\vert L\\left \({e}_{m}\\right \)\\right \).$$ ](A81414_1_En_3_Chapter_Equdx.gif) 2. Assume that V is a complex inner product space. Recall from Exercise 6 in Sect. 1.4 that we have a vector space V ∗ with the same addition as in V but scalar multiplication is altered by conjugating the scalar. Show that the map F : V ∗ -> HomV, ℂ defined by Fx = ⋅ | x is complex linear and an isomorphism when V is finite-dimensional. Use this to give another definition of the adjoint. 
Here ![ $$F\\left \(x\\right \) = \\left \(\\cdot \\vert x\\right \) \\in \\mathrm{ Hom}\\left \(V, \\mathbb{C}\\right \)$$ ](A81414_1_En_3_Chapter_Equdy.gif) is the linear map such that ![ $$\\left \(F\\left \(x\\right \)\\right \)\\left \(z\\right \) = \\left \(z\\vert x\\right \)$$ ](A81414_1_En_3_Chapter_IEq111.gif). 3. On Mat n ×n ℂ, use the inner product ![ $$\\left \(A\\vert B\\right \) =\\mathrm{ tr}\\left \(A{B}^{{_\\ast}}\\right \)$$ ](A81414_1_En_3_Chapter_IEq112.gif). For A ∈ Mat n ×n ℂ, consider the two linear operators on Mat n ×n ℂ defined by L A X = AX, R A X = XA. Show that ![ $${\\left \({L}_{A}\\right \)}^{{_\\ast}} = {L}_{{A}^{{_\\ast}}}$$ ](A81414_1_En_3_Chapter_IEq113.gif) and ![ $${\\left \({R}_{A}\\right \)}^{{_\\ast}} = {R}_{{A}^{{_\\ast}}}$$ ](A81414_1_En_3_Chapter_IEq114.gif). 4. Let x 1,..., x k ∈ V, where V is a finite-dimensional inner product space. (a) Show that ![ $$G\\left \({x}_{1},\\ldots,{x}_{k}\\right \) ={ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{k}\\end{array} \\right \]}^{{_\\ast}}\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{k}\\end{array} \\right \],$$ ](A81414_1_En_3_Chapter_Equdz.gif) where Gx 1,..., x k is a k ×k matrix whose ij entry is ![ $$\\left \({x}_{j}\\vert {x}_{i}\\right \)$$ ](A81414_1_En_3_Chapter_IEq115.gif). It is called the Gram matrix or Grammian. (b) Show that G = Gx 1,..., x k is nonnegative in the sense that ![ $$\\left \(Gx\\vert x\\right \) \\geq0$$ ](A81414_1_En_3_Chapter_IEq116.gif) for all ![ $$x \\in{\\mathbb{F}}^{k}$$ ](A81414_1_En_3_Chapter_IEq117.gif). (c) Generalize part (a) to show that the composition ![ $${\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{k}\\end{array} \\right \]}^{{_\\ast}}\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{k}\\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equea.gif) is the matrix whose ij entry is ![ $$\\left \({x}_{j}\\vert {y}_{i}\\right \)$$ ](A81414_1_En_3_Chapter_IEq118.gif). 5. 
Find image and kernel for A ∈ Mat3 ×3 ℝ, where the ij entry is α ij = (−1)^{i+j} . 6. Find image and kernel for A ∈ Mat3 ×3 ℂ, where the kl entry is α kl = i^{k+l} . 7. Let A ∈ Mat n ×n ℝ be symmetric, i.e., A ∗ = A, and assume A has rank k ≤ n. Show that: (a) If the first k columns are linearly independent, then the principal k ×k minor of A is invertible. The principal k ×k minor of A is the k ×k matrix one obtains by deleting the last n − k columns and rows. Hint: Use a block decomposition ![ $$A = \\left \[\\begin{array}{cc} B &C\\\\ {C}^{t } &D\\end{array} \\right \]$$ ](A81414_1_En_3_Chapter_Equeb.gif) and write ![ $$\\left \[\\begin{array}{c} C\\\\ D\\end{array} \\right \] = \\left \[\\begin{array}{c} B\\\\ {C}^{t}\\end{array} \\right \]X,\\text{ }X \\in \\mathrm{{ Mat}}_{k\\times \\left \(n-k\\right \)}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_3_Chapter_Equec.gif) i.e., the last n − k columns are linear combinations of the first k. (b) If rows i 1,..., i k are linearly independent, then the k ×k minor obtained by deleting all columns and rows not indexed by i 1,..., i k is invertible. Hint: Note that I kl AI kl is symmetric so one can use part (a). (c) There are examples showing that (a) need not hold for n ×n matrices in general. 8. Let L : V -> V be a linear operator on a finite-dimensional inner product space. Show that: (a) If M ⊂ V is an L-invariant subspace, then M ⊥ is L ∗ -invariant. (b) Show that there are examples where M is not L ∗ -invariant. 9. Consider two linear operators K, L : V -> V and the commutator ![ $$\\left \[K,L\\right \] = K \\circ L - L \\circ K$$ ](A81414_1_En_3_Chapter_Equed.gif) (a) Show that ![ $$\\left \[K,L\\right \]$$ ](A81414_1_En_3_Chapter_IEq119.gif) is skew-adjoint if K, L are both self-adjoint or both skew-adjoint. (b) Show that ![ $$\\left \[K,L\\right \]$$ ](A81414_1_En_3_Chapter_IEq120.gif) is self-adjoint if one of K, L is self-adjoint and the other skew-adjoint. 10.
Let L : V -> W be a linear operator between finite-dimensional vector spaces. Show that (a) L is one-to-one if and only if L ∗ is onto. (b) L ∗ is one-to-one if and only if L is onto. 11. Let M, N ⊂ V be subspaces of a finite-dimensional inner product space and consider L : M ×N -> V defined by L(x, y) = x − y. (a) Show that L ∗ (z) = (proj M z, −proj N z). (b) Show that ![ $$\\begin{array}{rcl} \\ker \\left \({L}^{{_\\ast}}\\right \)& =& {M}^{\\perp }\\cap{N}^{\\perp }, \\\\ \\mathrm{im}\\left \(L\\right \)& =& M + N.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ79.gif) (c) Using the Fredholm alternative, show that ![ $${\\left \(M + N\\right \)}^{\\perp } = {M}^{\\perp }\\cap{N}^{\\perp }.$$ ](A81414_1_En_3_Chapter_Equee.gif) (d) Replace M and N by M ⊥ and N ⊥ and conclude ![ $${\\left \(M \\cap N\\right \)}^{\\perp } = {M}^{\\perp } + {N}^{\\perp }.$$ ](A81414_1_En_3_Chapter_Equef.gif) 12. Assume that L : V -> W is a linear map between inner product spaces. Show that: (a) If both vector spaces are finite-dimensional, then ![ $$\\dim \\left \(\\ker \\left \(L\\right \)\\right \) -\\dim {\\left \(\\mathrm{im}\\left \(L\\right \)\\right \)}^{\\perp } =\\dim V -\\dim W.$$ ](A81414_1_En_3_Chapter_Equeg.gif) (b) If V = W = ℓ 2 (ℤ), then for each integer n ∈ ℤ, it is possible to find a linear operator L n with finite-dimensional ker(L n ) and ![ $${\\left \(\\mathrm{im}\\left \({L}_{n}\\right \)\\right \)}^{\\perp }$$ ](A81414_1_En_3_Chapter_IEq121.gif) so that ![ $$\\mathrm{Ind}\\left \(L\\right \) =\\dim \\left \(\\ker \\left \(L\\right \)\\right \) -\\dim {\\left \(\\mathrm{im}\\left \(L\\right \)\\right \)}^{\\perp } = n.$$ ](A81414_1_En_3_Chapter_Equeh.gif) Hint: Consider linear maps that take ![ $$\\left \({a}_{k}\\right \)$$ ](A81414_1_En_3_Chapter_IEq122.gif) to ![ $$\\left \({a}_{k+l}\\right \)$$ ](A81414_1_En_3_Chapter_IEq123.gif) for some l ∈ ℤ.
An operator with finite-dimensional ker(L) and ![ $${\\left \(\\mathrm{im}\\left \(L\\right \)\\right \)}^{\\perp }$$ ](A81414_1_En_3_Chapter_IEq124.gif) is called a Fredholm operator. The integer Ind(L) = dim(ker(L)) − dim(im(L)) ⊥ is the index of the operator and is an important invariant in functional analysis. 13. Let L : V -> V be a linear operator on a finite-dimensional inner product space. Show that ![ $$\\overline{\\mathrm{tr}\\left \(L\\right \)} =\\mathrm{ tr}\\left \({L}^{{_\\ast}}\\right \).$$ ](A81414_1_En_3_Chapter_Equei.gif) 14. Let L : V -> W be a linear map between inner product spaces. Show that ![ $$L :\\ker \\left \({L}^{{_\\ast}}L - \\lambda {1}_{ V }\\right \) \\rightarrow \\ker \\left \(L{L}^{{_\\ast}}- \\lambda {1}_{ V }\\right \)$$ ](A81414_1_En_3_Chapter_Equej.gif) and ![ $${L}^{{_\\ast}} :\\ker \\left \(L{L}^{{_\\ast}}- \\lambda {1}_{ V }\\right \) \\rightarrow \\ker \\left \({L}^{{_\\ast}}L - \\lambda {1}_{ V }\\right \).$$ ](A81414_1_En_3_Chapter_Equek.gif) 15. Let L : V -> V be a linear operator on a finite-dimensional inner product space. Show that if Lx = λx, L ∗ y = μy, and ![ $$\\lambda \\neq \\bar{\\mu }$$ ](A81414_1_En_3_Chapter_IEq125.gif), then x and y are perpendicular. 16. Let V be a subspace of C 0 ([0, 1], ℝ) and consider the linear functionals ![ $${f}_{{t}_{0}}\\left \(x\\right \) = x\\left \({t}_{0}\\right \)$$ ](A81414_1_En_3_Chapter_IEq126.gif) and f y (x) = ∫ 0 1 x(t)y(t)dt. Show that: (a) If V is finite-dimensional, then ![ $${f}_{{t}_{0}}{\\vert }_{V } = {f}_{y}{\\vert }_{V }$$ ](A81414_1_En_3_Chapter_IEq127.gif) for some y ∈ V. (b) If V = P 2 = polynomials of degree ≤ 2, then there is an explicit y ∈ V as in part (a). (c) If V = C 0 ([0, 1], ℝ), then there is no y ∈ C 0 ([0, 1], ℝ) such that ![ $${f}_{{t}_{0}} = {f}_{y}$$ ](A81414_1_En_3_Chapter_IEq128.gif). The illusory function ![ $${\\delta }_{{t}_{0}}$$ ](A81414_1_En_3_Chapter_IEq129.gif) invented by Dirac to solve this problem is called Dirac's δ-function.
It is defined as ![ $$\\begin{array}{rcl} {\\delta }_{{t}_{0}}\\left \(t\\right \)& =& \\left \\{\\begin{array}{cc} 0 & \\text{ if }t\\neq {t}_{0} \\\\ \\infty &\\text{ if }t = {t}_{0}\\end{array} \\right. \\\\ {\\int\\nolimits \\nolimits }_{0}^{1}{\\delta }_{{ t}_{0}}\\left \(t\\right \)\\mathrm{d}t& =& 1\\end{array}$$ ](A81414_1_En_3_Chapter_Equ80.gif) so as to give the impression that ![ $${\\int\\nolimits \\nolimits }_{0}^{1}x\\left \(t\\right \){\\delta }_{{ t}_{0}}\\left \(t\\right \)\\mathrm{d}t = x\\left \({t}_{0}\\right \).$$ ](A81414_1_En_3_Chapter_Equel.gif) 17. Find qt ∈ P 2 such that ![ $$p\\left \(5\\right \) = \\left \(p\\vert q\\right \) ={ \\int\\nolimits \\nolimits }_{0}^{1}p\\left \(t\\right \)\\overline{q\\left \(t\\right \)}\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_Equem.gif) for all p ∈ P 2. 18. Find ft ∈ span1, sint, cost such that ![ $$\\begin{array}{rcl} \\left \(g\\vert f\\right \)& =& \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }g\\left \(t\\right \)\\overline{f\\left \(t\\right \)}\\mathrm{d}t \\\\ & =& \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }g\\left \(t\\right \)\\left \(1 + {t}^{2}\\right \)\\mathrm{d}t \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ81.gif) for all g ∈ span1, sint, cost. ## 3.6 Orthogonal Projections Revisited* In this section, we shall give a new formula for an orthogonal projection. Instead of using Gram-Schmidt to create an orthonormal basis for the subspace, it gives a direct formula using an arbitrary basis for the subspace. First, we need a new characterization of orthogonal projections using adjoints. Lemma 3.6.1. (Characterization of Orthogonal Projections) A projection E : V -> V is orthogonal if and only if E = E ∗. Proof. The Fredholm alternative (Theorem 3.5.4) tells us that imE = kerE ∗ ⊥ so if E = E ∗ , we have shown that imE = kerE ⊥ , which implies that E is orthogonal (see Theorem 3.4.5). 
Conversely, we can assume that im(E) = (ker E) ⊥ since E is an orthogonal projection (see again Theorem 3.4.5). Using the Fredholm alternative again then tells us that ![ $$\\begin{array}{rcl} \\mathrm{im}\\left \(E\\right \)& =& \\ker {\\left \(E\\right \)}^{\\perp } =\\mathrm{ im}\\left \({E}^{{_\\ast}}\\right \), \\\\ \\ker {\\left \({E}^{{_\\ast}}\\right \)}^{\\perp }& =& \\mathrm{im}\\left \(E\\right \) =\\ker { \\left \(E\\right \)}^{\\perp }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ82.gif) As ![ $${\\left \({E}^{{_\\ast}}\\right \)}^{2} ={ \\left \({E}^{2}\\right \)}^{{_\\ast}} = {E}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_IEq130.gif), it follows that E ∗ is a projection with the same image and kernel as E. Hence, E = E ∗ . □ Using this characterization of orthogonal projections, it is possible to find a formula for proj M using a general basis for M ⊂ V. Let M ⊂ V be finite-dimensional with a basis x 1,..., x m . This yields an isomorphism ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] : {\\mathbb{F}}^{m} \\rightarrow M$$ ](A81414_1_En_3_Chapter_Equen.gif) which can also be thought of as a one-to-one map ![ $$A : {\\mathbb{F}}^{m} \\rightarrow V$$ ](A81414_1_En_3_Chapter_IEq131.gif) whose image is M. This yields a linear map ![ $${A}^{{_\\ast}}A : {\\mathbb{F}}^{m} \\rightarrow{\\mathbb{F}}^{m}.$$ ](A81414_1_En_3_Chapter_Equeo.gif) Since ![ $$\\begin{array}{rcl} \\left \({A}^{{_\\ast}}Ay\\vert y\\right \)& =& \\left \(Ay\\vert Ay\\right \) \\\\ & =&{ \\left \\Vert Ay\\right \\Vert }^{2}, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ83.gif) the kernel satisfies ![ $$\\ker \\left \({A}^{{_\\ast}}A\\right \) =\\ker \\left \(A\\right \) = \\left \\{0\\right \\}.$$ ](A81414_1_En_3_Chapter_Equep.gif) In particular, A ∗ A is an isomorphism. This means that ![ $$E = A{\\left \({A}^{{_\\ast}}A\\right \)}^{-1}{A}^{{_\\ast}}$$ ](A81414_1_En_3_Chapter_Equeq.gif) defines a linear operator E : V -> V.
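The operator just defined is easy to test numerically. Below is a minimal sketch in NumPy (the basis vectors are an arbitrary illustrative choice, not taken from the text); it also compares E with the projection built from an orthonormal basis of M, obtained here from a QR factorization in place of Gram-Schmidt:

```python
import numpy as np

# Columns of A are an arbitrary (non-orthonormal) basis x_1, x_2 of a
# two-dimensional subspace M of R^4; the numbers are illustrative only.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 2.0]])

# E = A (A*A)^{-1} A* built directly from the basis.
E = A @ np.linalg.inv(A.T @ A) @ A.T

# Compare with the projection Q Q^T coming from an orthonormal basis of M
# (the Q-factor of a QR factorization plays the role of Gram-Schmidt).
Q, _ = np.linalg.qr(A)
assert np.allclose(E, Q @ Q.T)

# E is self-adjoint, idempotent, and fixes M = im(A).
assert np.allclose(E, E.T)
assert np.allclose(E @ E, E)
assert np.allclose(E @ A, A)
```

Note that no orthonormalization of x 1,..., x m is needed; the price is inverting the Gram matrix A ∗ A.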
It is easy to check that E = E ∗ , and since ![ $$\\begin{array}{rcl}{ E}^{2}& =& A{\\left \({A}^{{_\\ast}}A\\right \)}^{-1}{A}^{{_\\ast}}A{\\left \({A}^{{_\\ast}}A\\right \)}^{-1}{A}^{{_\\ast}} \\\\ & =& A{\\left \({A}^{{_\\ast}}A\\right \)}^{-1}{A}^{{_\\ast}} \\\\ & =& E, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ84.gif) it is a projection. Finally, we must check that imE = M. Since ![ $${\\left \({A}^{{_\\ast}}A\\right \)}^{-1}$$ ](A81414_1_En_3_Chapter_IEq132.gif) is an isomorphism and ![ $$\\mathrm{im}\\left \({A}^{{_\\ast}}\\right \) ={ \\left \(\\ker \\left \(A\\right \)\\right \)}^{\\perp } ={ \\left \(\\left \\{0\\right \\}\\right \)}^{\\perp } = {\\mathbb{F}}^{m},$$ ](A81414_1_En_3_Chapter_Equer.gif) we have ![ $$\\mathrm{im}\\left \(E\\right \) =\\mathrm{ im}\\left \(A\\right \) = M$$ ](A81414_1_En_3_Chapter_Eques.gif) as desired. To better understand this construction, we note that ![ $${A}^{{_\\ast}}\\left \(x\\right \) = \\left \[\\begin{array}{c} \\left \(x\\vert {x}_{1}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {x}_{m}\\right \) \\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equet.gif) This follows from ![ $$\\begin{array}{rcl} \\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\left \\vert \\left \[\\begin{array}{c} \\left \(x\\vert {x}_{1}\\right \)\\\\ \\vdots \\\\ \\left \(x\\vert {x}_{m}\\right \) \\end{array} \\right \]\\right.\\right \)& =& {\\alpha }_{1}\\overline{\\left \(x\\vert {x}_{1}\\right \)} + \\cdots+ {\\alpha }_{m}\\overline{\\left \(x\\vert {x}_{m}\\right \)} \\\\ & =& {\\alpha }_{1}\\left \({x}_{1}\\vert x\\right \) + \\cdots+ {\\alpha }_{m}\\left \({x}_{m}\\vert x\\right \) \\\\ & =& \\left \({\\alpha }_{1}{x}_{1} + \\cdots+ {\\alpha }_{m}{x}_{m}\\vert x\\right \) \\\\ & =& \\left \(\\left.A\\left \(\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{m} \\end{array} \\right \]\\right \)\\right \\vert x\\right \)\\\\ \\end{array}$$ 
](A81414_1_En_3_Chapter_Equ85.gif) The matrix form of A ∗ A can now be expressed as ![ $$\\begin{array}{rcl}{ A}^{{_\\ast}}A& =& {A}^{{_\\ast}}\\circ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{m} \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} {A}^{{_\\ast}}\\left \({x}_{1}\\right \)&\\cdots &{A}^{{_\\ast}}\\left \({x}_{m}\\right \) \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{ccc} \\left \({x}_{1}\\vert {x}_{1}\\right \) &\\cdots & \\left \({x}_{m}\\vert {x}_{1}\\right \)\\\\ \\vdots & \\ddots & \\vdots \\\\ \\left \({x}_{1}\\vert {x}_{m}\\right \)&\\cdots &\\left \({x}_{m}\\vert {x}_{m}\\right \) \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_3_Chapter_Equ86.gif) This is also called the Gram matrix of x 1,..., x m . This information specifies explicitly all of the components of the formula ![ $$E = A{\\left \({A}^{{_\\ast}}A\\right \)}^{-1}{A}^{{_\\ast}}.$$ ](A81414_1_En_3_Chapter_Equeu.gif) The only hard calculation is the inversion of A ∗ A. The calculation of A(A ∗ A)^{−1}A ∗ should also be compared to using the Gram-Schmidt procedure for finding the orthogonal projection onto M. ### 3.6.1 Exercises 1. Using the inner product ![ $${\\int\\nolimits \\nolimits }_{0}^{1}p\\left \(t\\right \)\\bar{q}\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_IEq133.gif), find the orthogonal projection from ℂ[t] onto span{1, t} = P 1. Given any p ∈ ℂ[t], you should express the orthogonal projection in terms of the coefficients of p. 2. Using the inner product ![ $${\\int\\nolimits \\nolimits }_{0}^{1}p\\left \(t\\right \)\\bar{q}\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_3_Chapter_IEq134.gif), find the orthogonal projection from ℂ[t] onto span{1, t, t^2} = P 2. 3.
Compute the orthogonal projection onto the following subspaces: (a) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ 1 \\\\ 1\\\\ 1\\end{array} \\right \]\\right \\} \\subset{\\mathbb{R}}^{4}$$ ](A81414_1_En_3_Chapter_Equev.gif) (b) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ -1 \\\\ 0\\\\ 1\\end{array} \\right \],\\left \[\\begin{array}{c} 1\\\\ 1 \\\\ 1\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} 2\\\\ 0 \\\\ 1\\\\ 1\\end{array} \\right \]\\right \\} \\subset{\\mathbb{R}}^{4}$$ ](A81414_1_En_3_Chapter_Equew.gif) (c) ![ $$\\mathrm{span}\\left \\{\\left \[\\begin{array}{c} 1\\\\ i \\\\ 0\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} - i\\\\ 1 \\\\ 0\\\\ 0\\end{array} \\right \],\\left \[\\begin{array}{c} 0\\\\ 1 \\\\ i\\\\ 0\\end{array} \\right \]\\right \\} \\subset{\\mathbb{C}}^{4}$$ ](A81414_1_En_3_Chapter_Equex.gif) 4. Given an orthonormal basis e 1,..., e k for the subspace M ⊂ V, show that the orthogonal projection onto M can be computed as ![ $$\\mathrm{{proj}}_{M} = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{k}\\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{k}\\end{array} \\right \]}^{{_\\ast}}.$$ ](A81414_1_En_3_Chapter_Equey.gif) Hint: Show that ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{k}\\end{array} \\right \]}^{{_\\ast}}\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{k}\\end{array} \\right \] = {1}_{{ \\mathbb{F}}^{k}}.$$ ](A81414_1_En_3_Chapter_Equez.gif) 5. Show that if M ⊂ V is an L-invariant subspace, then ![ $${\\left \(L{\\vert }_{M}\\right \)}^{{_\\ast}} =\\mathrm{{ proj}}_{ M} \\circ{L}^{{_\\ast}}{\\vert }_{ M}.$$ ](A81414_1_En_3_Chapter_Equfa.gif) ## 3.7 Matrix Exponentials* In this section, we shall show that the initial value problem: ![ $$\\dot{x} = Ax$$ ](A81414_1_En_3_Chapter_IEq135.gif), xt 0 = x 0 where A is a square matrix with real or complex scalars as entries can be solved using matrix exponentials. 
More algebraic approaches are also available by using the Frobenius canonical form (Theorem 2.7.1) and the Jordan canonical form (Theorem 2.8.3). Later, we shall see how Schur's Theorem (4.8.1) also gives a very effective way of solving such systems. Recall that in the one-dimensional situation, the solution is ![ $$x\\left \(t\\right \) = {x}_{0}\\exp \\left \(A\\left \(t - {t}_{0}\\right \)\\right \).$$ ](A81414_1_En_3_Chapter_Equfb.gif) If we could make sense of this for square matrices A as well, we would have a possible way of writing down the solutions. The concept of operator norms introduced in Sect. 3.3 naturally leads to a norm of matrices as well. One key observation about this norm is that if A = α ij , then ![ $$\\left \\vert {\\alpha }_{ij}\\right \\vert \\leq \\left \\Vert A\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq136.gif), i.e., the entries are bounded by the norm. Moreover, we also have that ![ $$\\begin{array}{rcl} \\left \\Vert AB\\right \\Vert & \\leq & \\left \\Vert A\\right \\Vert \\left \\Vert B\\right \\Vert, \\\\ \\left \\Vert A + B\\right \\Vert & \\leq & \\left \\Vert A\\right \\Vert + \\left \\Vert B\\right \\Vert \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ87.gif) as ![ $$\\begin{array}{rcl} \\left \\Vert AB\\left \(x\\right \)\\right \\Vert & \\leq & \\left \\Vert A\\right \\Vert \\left \\Vert B\\left \(x\\right \)\\right \\Vert \\\\ & \\leq & \\left \\Vert A\\right \\Vert \\left \\Vert B\\right \\Vert \\left \\Vert x\\right \\Vert \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ88.gif) and ![ $$\\left \\Vert \\left \(A + B\\right \)\\left \(x\\right \)\\right \\Vert \\leq \\left \\Vert A\\left \(x\\right \)\\right \\Vert + \\left \\Vert B\\left \(x\\right \)\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equfc.gif) Now, consider the series ![ $${\\sum\\nolimits }_{n=0}^{\\infty }\\frac{{A}^{n}} {n!}.$$ ](A81414_1_En_3_Chapter_Equfd.gif) Since ![ $$\\left \\Vert \\frac{{A}^{n}} {n!} \\right \\Vert \\leq\\frac{{\\left \\Vert A\\right 
\\Vert }^{n}} {n!}$$ ](A81414_1_En_3_Chapter_Equfe.gif) and ![ $${\\sum\\nolimits }_{n=0}^{\\infty }\\frac{{\\left \\Vert A\\right \\Vert }^{n}} {n!}$$ ](A81414_1_En_3_Chapter_Equff.gif) is convergent, it follows that any given entry in ![ $${\\sum\\nolimits }_{n=0}^{\\infty }\\frac{{A}^{n}} {n!}$$ ](A81414_1_En_3_Chapter_Equfg.gif) is bounded by a convergent series. Thus, the matrix series also converges. This means we can define ![ $$\\exp \\left \(A\\right \) ={ \\sum\\nolimits }_{n=0}^{\\infty }\\frac{{A}^{n}} {n!}.$$ ](A81414_1_En_3_Chapter_Equfh.gif) It is not hard to check that if L ∈ Hom(V, V), where V is a finite-dimensional inner product space, then we can similarly define ![ $$\\exp \\left \(L\\right \) ={ \\sum\\nolimits }_{n=0}^{\\infty }\\frac{{L}^{n}} {n!}.$$ ](A81414_1_En_3_Chapter_Equfi.gif) Now, consider the matrix-valued function ![ $$\\exp \\left \(At\\right \) ={ \\sum\\nolimits }_{n=0}^{\\infty }\\frac{{A}^{n}{t}^{n}} {n!}$$ ](A81414_1_En_3_Chapter_Equfj.gif) and with it the vector-valued function ![ $$x\\left \(t\\right \) =\\exp \\left \(A\\left \(t - {t}_{0}\\right \)\\right \){x}_{0}.$$ ](A81414_1_En_3_Chapter_Equfk.gif) It still remains to be seen that this defines a differentiable function that solves ![ $$\\dot{x} = Ax$$ ](A81414_1_En_3_Chapter_IEq137.gif). But it follows directly from the definition that it has the correct initial value since ![ $$\\exp \\left \(0\\right \) = {1}_{{\\mathbb{F}}^{n}}$$ ](A81414_1_En_3_Chapter_IEq138.gif). To check differentiability, we consider the matrix function t -> exp(At) and study exp(A(t + h)). In fact, we claim that ![ $$\\exp \\left \(A\\left \(t + h\\right \)\\right \) =\\exp \\left \(At\\right \)\\exp \\left \(Ah\\right \).$$ ](A81414_1_En_3_Chapter_Equfl.gif) To establish this, we prove a more general version together with another useful fact. Proposition 3.7.1. Let L,K : V -> V be linear operators on a finite-dimensional inner product space. (1) If KL = LK, then exp(K + L) = exp(K) ∘ exp(L).
(2) If K is invertible, then exp(K ∘ L ∘ K −1 ) = K ∘ exp(L) ∘ K −1 . Proof. 1. This formula hinges on proving the binomial formula for commuting operators: ![ $$\\begin{array}{rcl}{ \\left \(L + K\\right \)}^{n}& =& {\\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k}, \\\\ \\left\({n}\\atop{k}\\right\)& =& \\frac{n!} {\\left \(n - k\\right \)!k!}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ89.gif) This formula is obvious for n = 1. Suppose that the formula holds for n. Using the conventions ![ $$\\begin{array}{rcl} \\left\({n}\\atop{n + 1}\\right\)& =& 0, \\\\ \\left\({n}\\atop{-1}\\right\)& =& 0, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ90.gif) together with the formula from Pascal's triangle ![ $$\\left\({n}\\atop{k - 1}\\right\) + \\left\({n}\\atop{k}\\right\) = \\left\({n+1}\\atop{k}\\right\),$$ ](A81414_1_En_3_Chapter_Equfm.gif) it follows that ![ $$\\begin{array}{rcl}{ \\left \(L + K\\right \)}^{n+1}& =&{ \\left \(L + K\\right \)}^{n}\\left \(L + K\\right \) \\\\ & =& \\left \({\\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k}\\right \)\\left \(L + K\\right \) \\\\ & =& {\\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k}L +{ \\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k}K \\\\ & =& {\\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k+1}{K}^{n-k} +{ \\sum\\nolimits }_{k=0}^{n}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k+1} \\\\ & =& {\\sum\\nolimits }_{k=0}^{n+1}\\left\({n}\\atop{k - 1}\\right\){L}^{k}{K}^{n+1-k} +{ \\sum\\nolimits }_{k=0}^{n+1}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n+1-k} \\\\ & =& {\\sum\\nolimits }_{k=0}^{n+1}\\left \(\\left\({n}\\atop{k - 1}\\right\) + \\left\({n}\\atop{k}\\right\)\\right \){L}^{k}{K}^{n+1-k} \\\\ & =& {\\sum\\nolimits }_{k=0}^{n+1}\\left\({n + 1}\\atop{k}\\right\){L}^{k}{K}^{n+1-k}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ91.gif) We can then compute ![ $$\\begin{array}{rcl} {\\sum\\nolimits }_{n=0}^{N}\\frac{{\\left
\(K + L\\right \)}^{n}} {n!} & =& {\\sum\\nolimits }_{n=0}^{N}{ \\sum\\nolimits }_{k=0}^{n} \\frac{1} {n!}\\left\({n}\\atop{k}\\right\){L}^{k}{K}^{n-k} \\\\ & =& {\\sum\\nolimits }_{n=0}^{N}{ \\sum\\nolimits }_{k=0}^{n} \\frac{1} {\\left \(n - k\\right \)!k!}{L}^{k}{K}^{n-k} \\\\ & =& {\\sum\\nolimits }_{n=0}^{N}{ \\sum\\nolimits }_{k=0}^{n}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \( \\frac{1} {\\left \(n - k\\right \)!}{K}^{n-k}\\right \) \\\\ & =& {\\sum\\nolimits }_{k,l=0,k+l\\leq N}^{N}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \( \\frac{1} {l!}{K}^{l}\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ92.gif) The last term is unfortunately not quite the same as the product ![ $${\\sum\\nolimits }_{k,l=0}^{N}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \( \\frac{1} {l!}{K}^{l}\\right \) = \\left \({\\sum\\nolimits }_{k=0}^{N} \\frac{1} {k!}{L}^{k}\\right \)\\left \({\\sum\\nolimits }_{l=0}^{N} \\frac{1} {l!}{K}^{l}\\right \).$$ ](A81414_1_En_3_Chapter_Equfn.gif) However, the difference between these two sums can be estimated the following way: ![ $$\\begin{array}{rcl} & \\left \\Vert {\\sum\\nolimits }_{k,l=0}^{N}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \(\\frac{1} {l!}{K}^{l}\\right \) -{\\sum \\nolimits }_{ k,l=0,k+l\\leq N}^{N}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \(\\frac{1} {l!}{K}^{l}\\right \)\\right \\Vert & \\\\ & \\quad = \\left \\Vert {\\sum\\nolimits }_{k,l=0,k+l>N}^{N}\\left \( \\frac{1} {k!}{L}^{k}\\right \)\\left \(\\frac{1} {l!}{K}^{l}\\right \)\\right \\Vert & \\\\ & \\quad \\leq {\\sum\\nolimits }_{k,l=0,k+l>N}^{N}\\left \( \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \)\\left \(\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \) & \\\\ & \\quad \\leq {\\sum\\nolimits }_{k=0,l=N/2}^{N}\\left \( \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \)\\left \(\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \) +{ \\sum \\nolimits }_{ l=0,k=N/2}^{N}\\left \( \\frac{1} 
{k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \)\\left \(\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \) & \\\\ & \\quad = \\left \({\\sum\\nolimits }_{k=0}^{N} \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \)\\left \({\\sum \\nolimits }_{ l=N/2}^{N}\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \) + \\left \({\\sum \\nolimits }_{ k=N/2}^{N} \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \)\\left \({\\sum \\nolimits }_{ l=0}^{N}\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \)& \\\\ & \\quad \\leq \\exp \\left \(\\left \\Vert L\\right \\Vert \\right \)\\left \({\\sum\\nolimits }_{l=N/2}^{N}\\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}\\right \) +\\exp \\left \(\\left \\Vert K\\right \\Vert \\right \)\\left \({\\sum \\nolimits }_{ k=N/2}^{N} \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}\\right \). & \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ93.gif) Since ![ $$ \\begin{array}{rcl}{ \\lim }_{N\\rightarrow \\infty }{\\sum\\nolimits }_{l=N/2}^{N} \\frac{1} {l!}{\\left \\Vert K\\right \\Vert }^{l}& =& 0, \\\\ {\\lim }_{N\\rightarrow \\infty }{\\sum\\nolimits }_{k=N/2}^{N} \\frac{1} {k!}{\\left \\Vert L\\right \\Vert }^{k}& =& 0, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ94.gif) it follows that ![ $${\\lim }_{N\\rightarrow \\infty }\\left \\Vert {\\sum\\nolimits }_{n=0}^{N}\\frac{{\\left \(K + L\\right \)}^{n}} {n!} -\\left \({\\sum\\nolimits }_{k=0}^{N} \\frac{1} {k!}{L}^{k}\\right \)\\left \({\\sum\\nolimits }_{l=0}^{N} \\frac{1} {l!}{K}^{l}\\right \)\\right \\Vert = 0.$$ ](A81414_1_En_3_Chapter_Equfo.gif) Thus, ![ $${\\sum\\nolimits }_{n=0}^{\\infty }\\frac{{\\left \(K + L\\right \)}^{n}} {n!} ={ \\sum\\nolimits }_{k=0}^{\\infty }\\left \( \\frac{1} {k!}{L}^{k}\\right \){\\sum\\nolimits }_{l=0}^{\\infty }\\left \( \\frac{1} {l!}{K}^{l}\\right \)$$ ](A81414_1_En_3_Chapter_Equfp.gif) as desired. 2. 
This is considerably simpler and uses that ![ $${\\left \(K \\circ L \\circ{K}^{-1}\\right \)}^{n} = K \\circ{L}^{n} \\circ{K}^{-1}.$$ ](A81414_1_En_3_Chapter_Equfq.gif) This is again proven by induction. First, observe it is trivial for n = 1 and then that ![ $$\\begin{array}{rcl}{ \\left \(K \\circ L \\circ{K}^{-1}\\right \)}^{n+1}& =&{ \\left \(K \\circ L \\circ{K}^{-1}\\right \)}^{n} \\circ K \\circ L \\circ{K}^{-1} \\\\ & =& K \\circ{L}^{n} \\circ{K}^{-1} \\circ K \\circ L \\circ{K}^{-1} \\\\ & =& K \\circ{L}^{n} \\circ L \\circ{K}^{-1} \\\\ & =& K \\circ{L}^{n+1} \\circ{K}^{-1}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ95.gif) Thus, ![ $$\\begin{array}{rcl} {\\sum\\nolimits }_{n=0}^{N}\\frac{{\\left \(K \\circ L \\circ{K}^{-1}\\right \)}^{n}} {n!} & =& {\\sum\\nolimits }_{n=0}^{N}\\frac{K \\circ{L}^{n} \\circ{K}^{-1}} {n!} \\\\ & =& K \\circ \\left \({\\sum\\nolimits }_{n=0}^{N}\\frac{{L}^{n}} {n!} \\right \) \\circ{K}^{-1}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ96.gif) By letting N -> ∞, we get the desired formula. 
□ To calculate the derivative of expAt, we observe that ![ $$\\begin{array}{rcl} \\frac{\\exp \\left \(A\\left \(t + h\\right \)\\right \) -\\exp \\left \(At\\right \)} {h} & =& \\frac{\\exp \\left \(Ah\\right \)\\exp \\left \(At\\right \) -\\exp \\left \(At\\right \)} {h} \\\\ & =& \\left \(\\frac{\\exp \\left \(Ah\\right \) - {1}_{{\\mathbb{F}}^{n}}} {h} \\right \)\\exp \\left \(At\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ97.gif) Using the definition of expAh, it follows that ![ $$\\begin{array}{rcl} \\frac{\\exp \\left \(Ah\\right \) - {1}_{{\\mathbb{F}}^{n}}} {h} & =& {\\sum\\nolimits }_{n=1}^{\\infty }\\frac{1} {h} \\frac{{A}^{n}{h}^{n}} {n!} \\\\ & =& {\\sum\\nolimits }_{n=1}^{\\infty }\\frac{{A}^{n}{h}^{n-1}} {n!} \\\\ & =& A +{ \\sum\\nolimits }_{n=2}^{\\infty }\\frac{{A}^{n}{h}^{n-1}} {n!}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ98.gif) Since ![ $$\\begin{array}{rcl} \\left \\Vert {\\sum\\nolimits }_{n=2}^{\\infty }\\frac{{A}^{n}{h}^{n-1}} {n!} \\right \\Vert & \\leq & {\\sum\\nolimits }_{n=2}^{\\infty }\\frac{{\\left \\Vert A\\right \\Vert }^{n}{\\left \\vert h\\right \\vert }^{n-1}} {n!} \\\\ & =& \\left \\Vert A\\right \\Vert {\\sum\\nolimits }_{n=2}^{\\infty }\\frac{{\\left \\Vert A\\right \\Vert }^{n-1}{\\left \\vert h\\right \\vert }^{n-1}} {n!} \\\\ & =& \\left \\Vert A\\right \\Vert {\\sum\\nolimits }_{n=2}^{\\infty }\\frac{{\\left \\Vert Ah\\right \\Vert }^{n-1}} {n!} \\\\ & \\leq & \\left \\Vert A\\right \\Vert {\\sum\\nolimits }_{n=1}^{\\infty }{\\left \\Vert Ah\\right \\Vert }^{n} \\\\ & =& \\left \\Vert A\\right \\Vert \\left \\Vert Ah\\right \\Vert \\frac{1} {1 -\\left \\Vert Ah\\right \\Vert } \\\\ & \\rightarrow & 0\\text{ as }\\left \\vert h\\right \\vert \\rightarrow0, \\\\ \\end{array}$$ ](A81414_1_En_3_Chapter_Equ99.gif) we get that ![ $$ \\begin{array}{rcl} {\\lim }_{\\left \\vert h\\right \\vert \\rightarrow 0}\\frac{\\exp \\left \(A\\left \(t + h\\right \)\\right \) -\\exp \\left \(At\\right \)} {h} & =& \\left \({\\lim 
}_{\\left \\vert h\\right \\vert \\rightarrow 0}\\frac{\\exp \\left \(Ah\\right \) - {1}_{{\\mathbb{F}}^{n}}} {h} \\right \)\\exp \\left \(At\\right \) \\\\ & =& A\\exp \\left \(At\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ100.gif) Therefore, if we define ![ $$x\\left \(t\\right \) =\\exp \\left \(A\\left \(t - {t}_{0}\\right \)\\right \){x}_{0},$$ ](A81414_1_En_3_Chapter_Equfr.gif) then ![ $$\\begin{array}{rcl} \\dot{x}& =& A\\exp \\left \(A\\left \(t - {t}_{0}\\right \)\\right \){x}_{0} \\\\ & =& Ax.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ101.gif) The other problem we should solve at this point is uniqueness of solutions. To be more precise, if we have that both x and y solve the initial value problem ![ $$\\dot{x} = Ax$$ ](A81414_1_En_3_Chapter_IEq139.gif), xt 0 = x 0, then we wish to prove that x = y. Inner products can be used quite effectively to prove this as well. We consider the nonnegative function ![ $$\\begin{array}{rcl} \\phi \\left \(t\\right \)& =&{ \\left \\Vert x\\left \(t\\right \) - y\\left \(t\\right \)\\right \\Vert }^{2} \\\\ & =&{ \\left \({x}_{1} - {y}_{1}\\right \)}^{2} + \\cdots+{ \\left \({x}_{ n} - {y}_{n}\\right \)}^{2}.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ102.gif) In the complex situation, simply identify ℂ n = ℝ 2n and use the 2n real coordinates to define this norm. Recall that this norm comes from the usual inner product on Euclidean space. 
The derivative satisfies ![ $$\\begin{array}{rcl} \\frac{\\mathrm{d}\\phi } {\\mathrm{d}t} \\left \(t\\right \)& =& 2\\left \(\\dot{{x}}_{1} -\\dot{ {y}}_{1}\\right \)\\left \({x}_{1} - {y}_{1}\\right \) + \\cdots+ 2\\left \(\\dot{{x}}_{n} -\\dot{ {y}}_{n}\\right \)\\left \({x}_{n} - {y}_{n}\\right \) \\\\ & =& 2\\left \(\\left \(\\dot{x} -\\dot{ y}\\right \)\\vert \\left \(x - y\\right \)\\right \) \\\\ & =& 2\\left \(A\\left \(x - y\\right \)\\vert \\left \(x - y\\right \)\\right \) \\\\ & \\leq & 2\\left \\Vert A\\left \(x - y\\right \)\\right \\Vert \\left \\Vert x - y\\right \\Vert \\\\ & \\leq & 2\\left \\Vert A\\right \\Vert {\\left \\Vert x - y\\right \\Vert }^{2} \\\\ & =& 2\\left \\Vert A\\right \\Vert \\phi \\left \(t\\right \).\\end{array}$$ ](A81414_1_En_3_Chapter_Equ103.gif) Thus, we have ![ $$\\frac{\\mathrm{d}\\phi } {\\mathrm{d}t} \\left \(t\\right \) - 2\\left \\Vert A\\right \\Vert \\phi \\left \(t\\right \) \\leq0.$$ ](A81414_1_En_3_Chapter_Equfs.gif) If we multiply this by the positive integrating factor exp(−2‖A‖(t − t 0 )) and use Leibniz' rule in reverse, we obtain ![ $$\\frac{\\mathrm{d}} {\\mathrm{d}t}\\left \(\\phi \\left \(t\\right \)\\exp \\left \(-2\\left \\Vert A\\right \\Vert \\left \(t - {t}_{0}\\right \)\\right \)\\right \) \\leq0.$$ ](A81414_1_En_3_Chapter_Equft.gif) Together with the initial condition ϕ(t 0 ) = 0, this yields ![ $$\\phi \\left \(t\\right \)\\exp \\left \(-2\\left \\Vert A\\right \\Vert \\left \(t - {t}_{0}\\right \)\\right \) \\leq0,\\text{ for }t \\geq{t}_{0}.$$ ](A81414_1_En_3_Chapter_Equfu.gif) Since the integrating factor is positive and ϕ is nonnegative, it must follow that ϕ(t) = 0 for t ≥ t 0. A similar argument using −exp(−2‖A‖(t − t 0 )) can be used to show that ϕ(t) = 0 for t ≤ t 0. Altogether, we have established that the initial value problem ![ $$\\dot{x} = Ax$$ ](A81414_1_En_3_Chapter_IEq140.gif), x(t 0 ) = x 0 always has a unique solution for matrices A with real (or complex) scalars as entries.
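The series definition and the solution formula can be checked numerically. Here is a minimal sketch in NumPy using a naive truncated sum (adequate only for moderate ‖At‖; in practice one would use a library routine such as scipy.linalg.expm). The rotation generator is chosen because exp(At) is known in closed form for it:

```python
import numpy as np

def expm_series(A, terms=40):
    # Truncated series sum_{n=0}^{terms} A^n / n!; fine for moderate ||A||.
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms + 1):
        term = term @ A / n          # term is now A^n / n!
        result = result + term
    return result

# For the rotation generator A, exp(At) is rotation by the angle t.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(expm_series(A * t), R)

# x(t) = exp(A(t - t0)) x0 solves x' = Ax: compare a central difference
# quotient of x with A x(t).  (Here t0 = 0 for simplicity.)
x0 = np.array([1.0, 2.0])
x = lambda s: expm_series(A * s) @ x0
h = 1e-6
assert np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-5)
```

The truncation is justified by the norm estimate above: the tail of the series is dominated by the tail of the scalar series for exp(‖At‖).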
To explicitly solve these linear differential equations, it is often best to understand higher order equations first and then use the cyclic subspace decomposition from Sect. 2.6 to reduce systems to higher order equations. In Sect. 4.8.1, we shall give another method for solving systems of equations that does not use higher order equations. ### 3.7.1 Exercises 1. Let fz = ∑ n = 0 ∞ a n z n define a power series and ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_3_Chapter_IEq141.gif). Show that one can define fA as long as ![ $$\\left \\Vert A\\right \\Vert <$$ ](A81414_1_En_3_Chapter_IEq142.gif) radius of convergence. 2. Let L : V -> V be an operator on a finite-dimensional inner product space. Show the following statements: (a) If ![ $$\\left \\Vert L\\right \\Vert < 1$$ ](A81414_1_En_3_Chapter_IEq143.gif), then 1 V + L has an inverse. Hint: ![ $${\\left \({1}_{V } + L\\right \)}^{-1} ={ \\sum\\nolimits }_{n=1}^{\\infty }{\\left \(-1\\right \)}^{n}{L}^{n}.$$ ](A81414_1_En_3_Chapter_Equfv.gif) (b) With L as above, show ![ $$\\begin{array}{rcl} \\left \\Vert {L}^{-1}\\right \\Vert & \\leq & \\frac{1} {1 -\\left \\Vert L\\right \\Vert }, \\\\ \\left \\Vert {\\left \({1}_{V } + L\\right \)}^{-1} - {1}_{ V }\\right \\Vert & \\leq & \\frac{\\left \\Vert L\\right \\Vert } {1 -\\left \\Vert L\\right \\Vert }.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ104.gif) (c) If ![ $$\\left \\Vert {L}^{-1}\\right \\Vert \\leq{\\epsilon }^{-1}$$ ](A81414_1_En_3_Chapter_IEq144.gif) and ![ $$\\left \\Vert L - K\\right \\Vert < \\epsilon $$ ](A81414_1_En_3_Chapter_IEq145.gif), then K is invertible and ![ $$\\begin{array}{rcl} \\left \\Vert {K}^{-1}\\right \\Vert & \\leq & \\frac{\\left \\Vert {L}^{-1}\\right \\Vert } {1 -\\left \\Vert {L}^{-1}\\left \(K - L\\right \)\\right \\Vert }, \\\\ \\left \\Vert {L}^{-1} - {K}^{-1}\\right \\Vert & \\leq & \\frac{{\\left \\Vert {L}^{-1}\\right \\Vert }^{2}} {{\\left \(1 -\\left \\Vert {L}^{-1}\\right \\Vert 
\\left \\Vert L - K\\right \\Vert \\right \)}^{2}}\\left \\Vert L - K\\right \\Vert.\\end{array}$$ ](A81414_1_En_3_Chapter_Equ105.gif) 3. Let L : V -> V be an operator on a finite-dimensional inner product space. (a) Show that if λ is an eigenvalue for L, then ![ $$\\left \\vert \\lambda \\right \\vert \\leq \\left \\Vert L\\right \\Vert.$$ ](A81414_1_En_3_Chapter_Equfw.gif) (b) Give examples of 2 ×2 matrices where strict inequality always holds. 4. Show that ![ $$x\\left \(t\\right \) = \\left \(\\exp \\left \(A\\left \(t - {t}_{0}\\right \)\\right \){\\int\\nolimits \\nolimits }_{{t}_{0}}^{t}\\exp \\left \(-A\\left \(s - {t}_{ 0}\\right \)\\right \)f\\left \(s\\right \)\\mathrm{d}s\\right \){x}_{0}$$ ](A81414_1_En_3_Chapter_Equfx.gif) solves the initial value problem ![ $$\\dot{x} = Ax + f\\left \(t\\right \)$$ ](A81414_1_En_3_Chapter_IEq146.gif), xt 0 = x 0. 5. Let A = B + C ∈ Mat n ×n ℝ where B is invertible and ![ $$\\left \\Vert C\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq147.gif) is very small compared to ![ $$\\left \\Vert B\\right \\Vert$$ ](A81414_1_En_3_Chapter_IEq148.gif). (a) Show that B − 1 − B − 1 CB − 1 is a good approximation to A − 1. (b) Use this to approximate the inverse to ![ $$\\left \[\\begin{array}{cccc} 1 & 0 &1000& 1\\\\ 0 & - 1 & 1 &1000 \\\\ 2 &1000& - 1 & 0\\\\ 1000 & 3 & 2 & 0\\end{array} \\right \].$$ ](A81414_1_En_3_Chapter_Equfy.gif) References Axler. Axler, S.: Linear Algebra Done Right. Springer-Verlag, New York (1997) Bretscher. Bretscher, O.: Linear Algebra with Applications, 2nd edn. Prentice-Hall, Upper Saddle River (2001) Curtis. Curtis, C.W.: Linear Algebra: An Introductory Approach. Springer-Verlag, New York (1984) Greub. Greub, W.: Linear Algebra, 4th edn. Springer-Verlag, New York (1981) Halmos. Halmos, P.R.: Finite-Dimensional Vector Spaces. Springer-Verlag, New York (1987) Hoffman-Kunze. Hoffman, K., Kunze, R.: Linear Algebra. Prentice-Hall, Upper Saddle River (1961) Lang. Lang, S.: Linear Algebra, 3rd edn. 
Springer-Verlag, New York (1987) Roman. Roman, S.: Advanced Linear Algebra, 2nd edn. Springer-Verlag, New York (2005) Serre. Serre, D.: Matrices, Theory and Applications. Springer-Verlag, New York (2002)

Peter Petersen, Linear Algebra, Undergraduate Texts in Mathematics, DOI 10.1007/978-1-4614-3612-6_4, © Springer Science+Business Media New York 2012

# 4. Linear Operators on Inner Product Spaces

Peter Petersen, Department of Mathematics, University of California, Los Angeles, CA, USA

Abstract In this chapter, we are going to study linear operators on finite-dimensional inner product spaces. In the last chapter, we introduced adjoints of linear maps between possibly different inner product spaces. Here we shall see how the adjoint can be used to understand linear operators on a fixed inner product space. The important operators we study here are the self-adjoint, skew-adjoint, normal, orthogonal, and unitary operators. We shall spend several sections on the existence of eigenvalues, diagonalizability, and canonical forms for these special but important linear operators. Having done that, we go back to the study of general linear maps and operators and establish the singular value and polar decompositions. We also show Schur's theorem to the effect that complex linear operators have upper triangular matrix representations. This result does not depend on the spectral theorem. It is also possible to give a quick proof of the spectral theorem using only the material covered in Sect. 4.1. The chapter finishes with a section on quadratic forms and how they tie in with the theory of self-adjoint operators.
## 4.1 Self-Adjoint Maps

Definition 4.1.1. A linear operator L : V -> V on a finite-dimensional inner product space is called self-adjoint if L ∗ = L. Note that a real m ×m matrix A is self-adjoint precisely when it is symmetric, i.e., A = A t . The opposite of being self-adjoint is skew-adjoint: L ∗ = − L. When V is a real inner product space, we also say that the operator is symmetric or skew-symmetric. In case the inner product is complex, these operators are also called Hermitian or skew-Hermitian. Example 4.1.2. The following 2 ×2 matrices satisfy: ![ $$\\left \[\\begin{array}{cc} 0 &\\quad - \\beta \\\\ \\beta& \\quad 0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equa.gif) is skew-adjoint if β is real. ![ $$\\left \[\\begin{array}{cc} \\alpha&\\quad -\\mathit{i}\\beta \\\\ \\mathit{i} \\beta& \\quad \\alpha\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equb.gif) is self-adjoint if α and β are real. ![ $$\\left \[\\begin{array}{cc} \\mathit{i}\\alpha &\\quad - \\beta \\\\ \\beta& \\quad \\mathit{i} \\alpha\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equc.gif) is skew-adjoint if α and β are real. 
In general, a complex 2 ×2 self-adjoint matrix looks like ![ $$\\left \[\\begin{array}{cc} \\alpha&\\quad \\beta+ \\mathit{i}\\gamma \\\\ \\beta- \\mathit{i} \\gamma& \\quad \\delta\\end{array} \\right \],\\alpha,\\beta,\\gamma,\\delta\\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_Equd.gif) In general, a complex 2 ×2 skew-adjoint matrix looks like ![ $$\\left \[\\begin{array}{cc} \\mathit{i}\\alpha&\\quad \\mathit{i}\\beta- \\gamma \\\\ \\mathit{i} \\beta+ \\gamma& \\quad \\mathit{i} \\delta\\end{array} \\right \],\\alpha,\\beta,\\gamma,\\delta\\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_Eque.gif) Example 4.1.3. We saw in Sect. 3.6 that self-adjoint projections are precisely the orthogonal projections. Example 4.1.4. If L : V -> W is a linear map we can create two self-adjoint maps L ∗ L : V -> V and LL ∗ : W -> W. Example 4.1.5. Consider the space of periodic functions ![ $${C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq1.gif) with the inner product ![ $$\\left \(x\\vert y\\right \) = \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }x\\left \(t\\right \)\\overline{y\\left \(t\\right \)}\\mathrm{d}t.$$ ](A81414_1_En_4_Chapter_Equf.gif) The linear operator ![ $$D\\left \(x\\right \) = \\frac{\\mathrm{d}x} {\\mathrm{d}t}$$ ](A81414_1_En_4_Chapter_Equg.gif) can be seen to be skew-adjoint even though we have not defined the adjoint of maps on infinite-dimensional spaces. In general, we say that a map is self-adjoint or skew-adjoint if ![ $$\\begin{array}{rcl} \\left \(L\\left \(x\\right \)\\vert y\\right \)& =& \\left \(x\\vert L\\left \(y\\right \)\\right \)\\text{ or} \\\\ \\left \(L\\left \(x\\right \)\\vert y\\right \)& =& -\\left \(x\\vert L\\left \(y\\right \)\\right \) \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ1.gif) for all x, y. 
Using that definition, we note that integration by parts and the fact that the functions are periodic imply our claim: ![ $$\\begin{array}{rcl} \\left \(D\\left \(x\\right \)\\vert y\\right \)& =& \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }\\left \(\\frac{\\mathrm{d}x} {\\mathrm{d}t} \\left \(t\\right \)\\right \)\\overline{y\\left \(t\\right \)}\\mathrm{d}t \\\\ & =& \\frac{1} {2\\pi }x\\left \(t\\right \)\\overline{y\\left \(t\\right \)}{\\vert }_{0}^{2\\pi } - \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }x\\left \(t\\right \)\\overline{\\frac{\\mathrm{d}y} {\\mathrm{d}t} \\left \(t\\right \)}\\mathrm{d}t \\\\ & =& - \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{0}^{2\\pi }x\\left \(t\\right \)\\overline{\\frac{\\mathrm{d}y} {\\mathrm{d}t} \\left \(t\\right \)}\\mathrm{d}t \\\\ & =& -\\left \(x\\vert D\\left \(y\\right \)\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ2.gif) In quantum mechanics, one often makes D self-adjoint by instead considering iD. In analogy with the formulae ![ $$\\begin{array}{rcl} \\exp \\left \(x\\right \)& =& \\frac{\\exp \\left \(x\\right \) +\\exp \\left \(-x\\right \)} {2} + \\frac{\\exp \\left \(x\\right \) -\\exp \\left \(-x\\right \)} {2} \\\\ & =& \\cosh \\left \(x\\right \) +\\sinh \\left \(x\\right \), \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ3.gif) we have ![ $$\\begin{array}{rcl} L& =& \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \) + \\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \), \\\\ {L}^{{_\\ast}}& =& \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \) -\\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \), \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ4.gif) where ![ $$\\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq2.gif) is self-adjoint and ![ $$\\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq3.gif) is skew-adjoint. 
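For real matrices, this splitting is exactly the familiar decomposition into symmetric and skew-symmetric parts, A = (A + Aᵗ)/2 + (A − Aᵗ)/2. A minimal numerical sketch (not from the text; the matrix is an arbitrary example):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 2.0], [5.0, -3.0]]
At = transpose(A)
n = len(A)

# Self-adjoint (symmetric) part and skew-adjoint (skew-symmetric) part.
S = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
K = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]
```

Here S equals its own transpose, K equals the negative of its transpose, and S + K recovers A, mirroring the operator identity L = ½(L + L∗) + ½(L − L∗).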
In the complex case, we also have ![ $$\\begin{array}{rcl} \\exp \\left \(ix\\right \)& =& \\frac{\\exp \\left \(ix\\right \) +\\exp \\left \(-ix\\right \)} {2} + \\frac{\\exp \\left \(ix\\right \) -\\exp \\left \(-ix\\right \)} {2} \\\\ & =& \\frac{\\exp \\left \(ix\\right \) +\\exp \\left \(-ix\\right \)} {2} + \\mathit{i}\\frac{\\exp \\left \(ix\\right \) -\\exp \\left \(-ix\\right \)} {2i} \\\\ & =& \\cos \\left \(x\\right \) + \\mathit{i}\\sin \\left \(x\\right \), \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ5.gif) which is a nice analogy for ![ $$\\begin{array}{rcl} L& =& \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \) + \\mathit{i} \\frac{1} {2i}\\left \(L - {L}^{{_\\ast}}\\right \), \\\\ {L}^{{_\\ast}}& =& \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \) -\\mathit{i} \\frac{1} {2i}\\left \(L - {L}^{{_\\ast}}\\right \), \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ6.gif) where now also ![ $$\\frac{1} {2i}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq4.gif) is self-adjoint. The idea behind this formula is that multiplication by i takes skew-adjoint maps to self-adjoint maps and vice versa. Self- and skew-adjoint maps are clearly quite special by virtue of their definitions. The above decomposition, which has quite a lot in common with dividing functions into odd and even parts or dividing complex numbers into real and imaginary parts, seems to give some sort of indication that these maps could be central to the understanding of general linear maps. This is not quite true, but we shall be able to get a grasp on quite a lot of different maps. Aside from these suggestive properties, self- and skew-adjoint maps are completely reducible. Definition 4.1.6. A linear map L : V -> V is said to be completely reducible if every invariant subspace has a complementary invariant subspace. 
Recall that maps like ![ $$L = \\left \[\\begin{array}{cc} 0&1\\\\ 0 &0 \\end{array} \\right \] : {\\mathbb{R}}^{2} \\rightarrow{\\mathbb{R}}^{2}$$ ](A81414_1_En_4_Chapter_Equh.gif) can have invariant subspaces without having complementary subspaces that are invariant. Proposition 4.1.7. (Reducibility of Self- or Skew-Adjoint Operators) Let L : V -> V be a linear operator on a finite-dimensional inner product space. If L is self- or skew-adjoint, then for each invariant subspace M ⊂ V the orthogonal complement is also invariant, i.e., if LM ⊂ M, then LM ⊥ ⊂ M ⊥. Proof. Assume that LM ⊂ M. Let x ∈ M and z ∈ M ⊥ . Since Lx ∈ M, we have ![ $$\\begin{array}{rcl} 0& =& \\left \(z\\vert L\\left \(x\\right \)\\right \) \\\\ & =& \\left \({L}^{{_\\ast}}\\left \(z\\right \)\\vert x\\right \) \\\\ & =& \\pm \\left \(L\\left \(z\\right \)\\vert x\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ7.gif) As this holds for all x ∈ M, it follows that Lz ∈ M ⊥ . □ Remark 4.1.8. This property almost tells us that these operators are diagonalizable. Certainly in the case where we have complex scalars, we can use induction on dimension to show that such maps are diagonalizable. In the case of real scalars, the problem is that it is not clear that self- and/or skew-adjoint maps have any invariant subspaces whatsoever. The map which is rotation by 90 ∘ in the plane is clearly skew-symmetric, but it has no non-trivial invariant subspaces. Thus, we cannot make the map any simpler. We shall see below that this is basically the worst case scenario for such maps. ### 4.1.1 Exercises 1. Let L : P n -> P n be a linear map on the space of real polynomials of degree ≤ n such that L with respect to the standard basis 1, t,..., t n is self-adjoint. Is L self-adjoint if we use the inner product ![ $$\\left \(p\\vert q\\right \) ={ \\int\\nolimits \\nolimits }_{a}^{b}p\\left \(t\\right \)q\\left \(t\\right \)\\mathrm{d}t\\text{ }?$$ ](A81414_1_En_4_Chapter_Equi.gif) 2. 
If V is finite-dimensional, show that the three subsets of HomV, V defined by ![ $$\\begin{array}{rcl}{ M}_{1}& =& \\mathrm{span}\\left \\{{1}_{V }\\right \\}, \\\\ {M}_{2}& =& \\left \\{L : L\\text{ is skew-adjoint}\\right \\}, \\\\ {M}_{3}& =& \\left \\{L :\\mathrm{ tr}L = 0\\mathrm{ and }L\\mathrm{ is self-adjoint}\\right \\} \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ8.gif) are subspaces over ![ $$\\mathbb{R}$$ ](A81414_1_En_4_Chapter_IEq5.gif), are mutually orthogonal with respect to the real inner product ReL, K = RetrL ∗ K, and yield a direct sum decomposition of HomV, V. 3. Let E be an orthogonal projection and L a linear operator. Recall from Exercise 11 in Sect. 1.11 and Exercise 5 in Sect. 3.4 that L leaves M = imE invariant if and only if ELE = LE and that M ⊕ M ⊥ reduces L if and only if EL = LE. Show that if L is skew- or self-adjoint and ELE = LE, then EL = LE. 4. Let V be a finite-dimensional complex inner product space. Show that both the space of self-adjoint and skew-adjoint maps form a real vector space. Show that multiplication by i yields an ![ $$\\mathbb{R}$$ ](A81414_1_En_4_Chapter_IEq6.gif)-linear isomorphism between these spaces. 5. Show that ![ $${D}^{2k} : {C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \) \\rightarrow{C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq7.gif) is self-adjoint and that ![ $${D}^{2k+1} : {C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \) \\rightarrow{C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq8.gif) is skew-adjoint. 6. Let x 1,..., x k be vectors in an inner product space V. Show that the k ×k matrix Gx 1,..., x k whose ij entry is x j | x i is self-adjoint and that all its eigenvalues are nonnegative. 7. 
Let L : V -> V be a self-adjoint operator on a finite-dimensional inner product space and ![ $$p \\in\\mathbb{R}\\left \[t\\right \]$$ ](A81414_1_En_4_Chapter_IEq9.gif) a real polynomial. Show that pL is also self-adjoint. 8. Assume that L : V -> V is self-adjoint and ![ $$\\lambda\\in\\mathbb{R}$$ ](A81414_1_En_4_Chapter_IEq10.gif). Show: (a) kerL = kerL k for any k ≥ 1. Hint: Start with k = 2. (b) imL = imL k for any k ≥ 1. (c) kerL − λ1 V = kerL − λ1 V k for any k ≥ 1. (d) Show that the eigenvalues of L are real. (e) Show that μ L t has no multiple roots. 9. Let L : V -> V be a self-adjoint operator on a finite-dimensional vector space. (a) Show that the eigenvalues of L are real. (b) In case V is complex, show that L has an eigenvalue. (c) In case V is real, show that L has an eigenvalue. Hint: Choose an orthonormal basis and observe that ![ $$\\left \[L\\right \] \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{R}\\right \) \\subset $$ ](A81414_1_En_4_Chapter_IEq11.gif) ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq12.gif) is also self-adjoint as a complex matrix. Thus, all roots of χ L t must be real by (a). 10. Assume that L 1, L 2 : V -> V are both self-adjoint or skew-adjoint. (a) Show that L 1 L 2 is skew-adjoint if and only if L 1 L 2 + L 2 L 1 = 0. (b) Show that L 1 L 2 is self-adjoint if and only if L 1 L 2 = L 2 L 1. (c) Give an example where L 1 L 2 is neither self-adjoint nor skew-adjoint.

## 4.2 Polarization and Isometries

The idea of polarization is that many bilinear expressions such as x | y can be expressed as a sum of quadratic terms z 2 = z | z for suitable z. Let us start with a real inner product on V. 
Then, ![ $$\\left \(x + y\\vert x + y\\right \) = \\left \(x\\vert x\\right \) + 2\\left \(x\\vert y\\right \) + \\left \(y\\vert y\\right \),$$ ](A81414_1_En_4_Chapter_Equj.gif) so ![ $$\\begin{array}{rcl} \\left \(x\\vert y\\right \)& =& \\frac{1} {2}\\left \(\\left \(x + y\\vert x + y\\right \) -\\left \(x\\vert x\\right \) -\\left \(y\\vert y\\right \)\\right \) \\\\ & =& \\frac{1} {2}\\left \({\\left \\Vert x + y\\right \\Vert }^{2} -{\\left \\Vert x\\right \\Vert }^{2} -{\\left \\Vert y\\right \\Vert }^{2}\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ9.gif) Since complex inner products are only conjugate symmetric, we only get ![ $$\\left \(x + y\\vert x + y\\right \) = \\left \(x\\vert x\\right \) + 2\\mathrm{Re}\\left \(x\\vert y\\right \) + \\left \(y\\vert y\\right \),$$ ](A81414_1_En_4_Chapter_Equk.gif) which implies ![ $$\\mathrm{Re}\\left \(x\\vert y\\right \) = \\frac{1} {2}\\left \({\\left \\Vert x + y\\right \\Vert }^{2} -{\\left \\Vert x\\right \\Vert }^{2} -{\\left \\Vert y\\right \\Vert }^{2}\\right \).$$ ](A81414_1_En_4_Chapter_Equl.gif) Nevertheless, the real part of the complex inner product determines the entire inner product as ![ $$\\begin{array}{rcl} \\mathrm{Re}\\left \(x\\vert iy\\right \)& =& \\mathrm{Re}\\left \(-\\mathit{i}\\left \(x\\vert y\\right \)\\right \) \\\\ & =& \\mathrm{Im}\\left \(x\\vert y\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ10.gif) In particular, we have ![ $$\\mathrm{Im}\\left \(x\\vert y\\right \) = \\frac{1} {2}\\left \({\\left \\Vert x + iy\\right \\Vert }^{2} -{\\left \\Vert x\\right \\Vert }^{2} -{\\left \\Vert iy\\right \\Vert }^{2}\\right \).$$ ](A81414_1_En_4_Chapter_Equm.gif) We can use these ideas to check when linear operators L : V -> V are zero. First, we note that L = 0 if and only if Lx | y = 0 for all x, y ∈ V. To check the "if" part, just let y = Lx to see that Lx 2 = 0 for all x ∈ V. When L is self-adjoint, this can be improved. Proposition 4.2.1. 
Let L : V -> V be self-adjoint on an inner product space. Then, L = 0 if and only if Lx|x = 0 for all x ∈ V. Proof. There is nothing to prove when L = 0. Conversely, assume that Lx | x = 0 for all x ∈ V. The polarization trick from above implies ![ $$\\begin{array}{rcl} 0& =& \\left \(L\\left \(x + y\\right \)\\vert x + y\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert x\\right \) + \\left \(L\\left \(x\\right \)\\vert y\\right \) + \\left \(L\\left \(y\\right \)\\vert x\\right \) + \\left \(L\\left \(y\\right \)\\vert y\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert y\\right \) + \\left \(y\\vert {L}^{{_\\ast}}\\left \(x\\right \)\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert y\\right \) + \\left \(y\\vert L\\left \(x\\right \)\\right \) \\\\ & =& 2\\mathrm{Re}\\left \(L\\left \(x\\right \)\\vert y\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ11.gif) Next, insert y = Lx to see that ![ $$\\begin{array}{rcl} 0& =& \\mathrm{Re}\\left \(L\\left \(x\\right \)\\vert L\\left \(x\\right \)\\right \) \\\\ & =&{ \\left \\Vert L\\left \(x\\right \)\\right \\Vert }^{2} \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ12.gif) as desired. □ If L is not self-adjoint, there is no reason to think that such a result should hold. For instance, when V is a real inner product space and L is skew-symmetric, then we have ![ $$\\begin{array}{rcl} \\left \(L\\left \(x\\right \)\\vert x\\right \)& =& -\\left \(x\\vert L\\left \(x\\right \)\\right \) \\\\ & =& -\\left \(L\\left \(x\\right \)\\vert x\\right \) \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ13.gif) so Lx | x = 0 for all x. Therefore, it is somewhat surprising that we can use the complex polarization trick to prove the next result. Proposition 4.2.2. Let L : V -> V be a linear operator on a complex inner product space. Then, L = 0 if and only if Lx|x = 0 for all x ∈ V. Proof. There is nothing to prove when L = 0. Conversely, assume that Lx | x = 0 for all x ∈ V. 
We use the complex polarization trick from above for fixed x, y ∈ V : ![ $$\\begin{array}{rcl} 0& =& \\left \(L\\left \(x + y\\right \)\\vert x + y\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert x\\right \) + \\left \(L\\left \(x\\right \)\\vert y\\right \) + \\left \(L\\left \(y\\right \)\\vert x\\right \) + \\left \(L\\left \(y\\right \)\\vert y\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert y\\right \) + \\left \(L\\left \(y\\right \)\\vert x\\right \) \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ14.gif) ![ $$\\begin{array}{rcl} 0& =& \\left \(L\\left \(x + iy\\right \)\\vert x + iy\\right \) \\\\ & =& \\left \(L\\left \(x\\right \)\\vert x\\right \) + \\left \(L\\left \(x\\right \)\\vert iy\\right \) + \\left \(L\\left \(iy\\right \)\\vert x\\right \) + \\left \(L\\left \(iy\\right \)\\vert iy\\right \) \\\\ & =& -\\mathit{i}\\left \(L\\left \(x\\right \)\\vert y\\right \) + \\mathit{i}\\left \(L\\left \(y\\right \)\\vert x\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ15.gif) This yields a system ![ $$\\left \[\\begin{array}{cc} 1 &1\\\\ - i & i \\end{array} \\right \]\\left \[\\begin{array}{c} \\left \(L\\left \(x\\right \)\\vert y\\right \)\\\\ \\left \(L\\left \(y\\right \) \\vert x \\right \) \\end{array} \\right \] = \\left \[\\begin{array}{c} 0\\\\ 0 \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equn.gif) Since the columns of ![ $$\[\\begin{matrix}\\scriptstyle 1 &\\scriptstyle 1 \\\\ \\scriptstyle -i&\\scriptstyle i\\end{matrix}\]$$ ](A81414_1_En_4_Chapter_IEq13.gif) are linearly independent, the only solution is the trivial one. In particular, Lx | y = 0. □ Polarization can also be used to give a nice characterization of isometries (see also Sect. 3.3). 
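The real polarization identity from the start of this section is easy to confirm numerically. A small sketch (not from the text; the vectors are arbitrary examples):

```python
def inner(x, y):
    # Standard real inner product on R^n
    return sum(a * b for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x)

x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]

# (x|y) = (||x + y||^2 - ||x||^2 - ||y||^2) / 2
polarized = (norm_sq([a + b for a, b in zip(x, y)])
             - norm_sq(x) - norm_sq(y)) / 2
```

The value recovered from norms alone agrees exactly with the inner product, which is the point of polarization: quadratic data determines bilinear data.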
These properties tie in nicely with our observation that ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} ={ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{-1}$$ ](A81414_1_En_4_Chapter_Equo.gif) when e 1,..., e n is an orthonormal basis. Proposition 4.2.3. (Characterization of Isometries) Let L : V -> W be a linear map between finite-dimensional inner product spaces, then the following are equivalent: (1) Lx = x for all x ∈ V. (2) Lx|Ly = x|y for all x,y ∈ V. (3) L ∗ L = 1 V (4) L takes orthonormal sets of vectors to orthonormal sets of vectors. Proof. 1 ⇒ 2 : Depending on whether we are in the complex or real case, simply write Lx | Ly and x | y in terms of norms and use (1) to see that both terms are the same. 2 ⇒ 3 : Just use that L ∗ Lx | y = Lx | Ly = x | y for all x, y ∈ V. 3 ⇒ 4 : We are assuming x | y = L ∗ Lx | y = Lx | Ly, which immediately implies 4. 4 ⇒ 1 : Evidently, L takes unit vectors to unit vectors. So 1 holds if x = 1. Now, use the scaling property of norms to finish the argument. □ Recall the definition of the operator norm for linear maps L : V -> W ![ $$\\left \\Vert L\\right \\Vert {=\\max }_{\\left \\Vert x\\right \\Vert =1}\\left \\Vert L\\left \(x\\right \)\\right \\Vert.$$ ](A81414_1_En_4_Chapter_Equp.gif) It was shown in Theorem 3.3.8 that this norm is finite when V is finite-dimensional. It is important to realize that this operator norm is not the same as the norm we get from the inner product L | K = trLK ∗ defined on HomV, W. To see this, it suffices to consider 1 V . Clearly, 1 V = 1, but 1 V | 1 V = tr1 V 1 V = dimV. Remark 4.2.4. Note that if L : V -> W satisfies the conditions in Proposition 4.2.3, then L = 1. We also obtain Corollary 4.2.5. (Characterization of Isometries) Let L : V -> W be an isomorphism between finite-dimensional inner product spaces, then L is an isometry if and only if L ∗ = L −1. Proof. 
If L is an isometry, then it satisfies all of the above four conditions. In particular, L ∗ L = 1 V . Since L is invertible, it must follow that L − 1 = L ∗ . Conversely, if L − 1 = L ∗ , then L ∗ L = 1 V , and it follows from Proposition 4.2.3 that Lx | Ly = x | y, so L is an isometry. □ Just as for self-adjoint and skew-adjoint operators, we have that isometries are completely reducible. Corollary 4.2.6. (Reducibility of Isometries) Let L : V -> V be a linear operator on a finite-dimensional inner product space that is also an isometry. If M ⊂ V is L-invariant, then so is M ⊥. Proof. If x ∈ M and y ∈ M ⊥ , then we note that ![ $$0 = \\left \(L\\left \(x\\right \)\\vert y\\right \) = \\left \(x\\vert {L}^{{_\\ast}}\\left \(y\\right \)\\right \).$$ ](A81414_1_En_4_Chapter_Equq.gif) Therefore, L ∗ y = L − 1 y ∈ M ⊥ for all y ∈ M ⊥ . Now observe that ![ $${L}^{-1}{\\vert }_{{M}^{\\perp }} : {M}^{\\perp }\\rightarrow{M}^{\\perp }$$ ](A81414_1_En_4_Chapter_IEq14.gif) must be an isomorphism as its kernel is trivial. This implies that each z ∈ M ⊥ is of the form z = L − 1 y for y ∈ M ⊥ . Thus, Lz = y ∈ M ⊥ , and hence, M ⊥ is L-invariant. □ Definition 4.2.7. In the special case where ![ $$V = W = {\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq15.gif), we call the linear isometries orthogonal matrices. The collection of orthogonal matrices is denoted O n . Note that these matrices are a subgroup of ![ $$G{l}_{n}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_4_Chapter_IEq16.gif), i.e., if A, B ∈ O n , then AB ∈ O n . In particular, we see that O n is itself a group. Similarly, when ![ $$V = W = {\\mathbb{C}}^{n}$$ ](A81414_1_En_4_Chapter_IEq17.gif), we have the subgroup of unitary matrices ![ $${U}_{n} \\subset G{l}_{n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq18.gif) consisting of complex matrices that are also isometries. ### 4.2.1 Exercises 1. 
On ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_4_Chapter_IEq19.gif), use the inner product A | B = trAB t . Consider the linear operator LX = X t . Show that L is orthogonal. Is it skew- or self-adjoint? 2. On ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq20.gif), use the inner product A | B = trAB ∗ . For ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq21.gif), consider the two linear operators on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq22.gif) defined by L A X = AX, R A X = XA (see also Exercise 3 in Sect. 3.5 ) Show that (a) L A and R A are unitary if A is unitary. (b) L A and R A are self- or skew-adjoint if A is self- or skew-adjoint. 3. Show that the operator D defines an isometry on both ![ $$\\mathrm{{span}}_{\\mathbb{C}}\\left \\{\\exp \\left \(\\mathit{it}\\right \),\\exp \\left \(-\\mathit{it}\\right \)\\right \\}$$ ](A81414_1_En_4_Chapter_Equr.gif) and ![ $$\\mathrm{{span}}_{\\mathbb{R}}\\left \\{\\cos \\left \(t\\right \),\\sin \\left \(t\\right \)\\right \\}$$ ](A81414_1_En_4_Chapter_Equs.gif) provided we use the inner product ![ $$\\left \(f\\vert g\\right \) = \\frac{1} {2\\pi }{\\int\\nolimits \\nolimits }_{-\\pi }^{\\pi }f\\left \(t\\right \)g\\left \(t\\right \)\\mathrm{d}t$$ ](A81414_1_En_4_Chapter_Equt.gif) inherited from ![ $${C}_{2\\pi }^{\\infty }\\left \(\\mathbb{R}, \\mathbb{C}\\right \).$$ ](A81414_1_En_4_Chapter_IEq23.gif) 4. Let L : V -> V be a linear operator on a complex inner product space. Show that L is self-adjoint if and only if Lx | x is real for all x ∈ V. 5. Let L : V -> V be a linear operator on a real inner product space. Show that L is skew-adjoint if and only if Lx | x = 0 for all x ∈ V. 6. Let e 1,..., e n be an orthonormal basis for V and assume that L : V -> W has the property that Le 1,..., Le n is an orthonormal basis for W. 
Show that L is an isometry. 7. Let L : V -> V be a linear operator on a finite-dimensional inner product space. Show that if L ∘ K = K ∘ L for all isometries K : V -> V, then L = λ1 V . 8. Let L : V -> V be a linear operator on an inner product space such that Lx | Ly = 0 if x | y = 0. (a) Show that if x = y and x | y = 0, then Lx = Ly. Hint: Show that x + y and x − y are perpendicular and use this. (b) Show that L = λU, where U is an isometry. 9. Let V be a finite-dimensional real inner product space and F : V -> V be a bijective map that preserves distances, i.e., for all x, y ∈ V ![ $$\\left \\Vert F\\left \(x\\right \) - F\\left \(y\\right \)\\right \\Vert = \\left \\Vert x - y\\right \\Vert.$$ ](A81414_1_En_4_Chapter_Equu.gif) (a) Show that Gx = Fx − F0 also preserves distances and that G0 = 0. (b) Show that Gx = x for all x ∈ V. (c) Use polarization to show that Gx | Gy = x | y for all x, y ∈ V. (See also the next exercise for what can happen in the complex case.) (d) If e 1,..., e n is an orthonormal basis, then show that Ge 1,..., Ge n is also an orthonormal basis. (e) Show that ![ $$G\\left \(x\\right \) = \\left \(x\\vert {e}_{1}\\right \)G\\left \({e}_{1}\\right \) + \\cdots+ \\left \(x\\vert {e}_{n}\\right \)G\\left \({e}_{n}\\right \),$$ ](A81414_1_En_4_Chapter_Equv.gif) and conclude that G is linear. (f) Conclude that Fx = Lx + F0 for a linear isometry L. 10. On ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq24.gif), use the inner product A | B = trAB ∗ . Consider the map LX = X ∗ . (a) Show that L is real linear but not complex linear. (b) Show that ![ $$\\left \\Vert L\\left \(X\\right \) - L\\left \(Y \\right \)\\right \\Vert = \\left \\Vert X - Y \\right \\Vert$$ ](A81414_1_En_4_Chapter_Equw.gif) for all X, Y but that ![ $$\\left \(L\\left \(X\\right \)\\vert L\\left \(Y \\right \)\\right \)\\neq \\left \(X\\vert Y \\right \)$$ ](A81414_1_En_4_Chapter_Equx.gif) for some choices of X, Y. 
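The equivalences of Proposition 4.2.3 can be sampled numerically for a rotation matrix, the basic example of an element of O 2. A quick sketch (not from the text; the angle and test vectors are arbitrary choices):

```python
import math

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

theta = 0.6
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# Condition (3): Q^t Q = 1, i.e., the columns of Q are orthonormal.
cols = list(zip(*Q))
gram = [[inner(list(c1), list(c2)) for c2 in cols] for c1 in cols]

# Condition (2): (Qx|Qy) = (x|y) for sample vectors.
x, y = [2.0, -1.0], [0.5, 3.0]
lhs = inner(mat_vec(Q, x), mat_vec(Q, y))
rhs = inner(x, y)
```

Up to rounding, the Gram matrix of the columns is the identity and the inner product is preserved, illustrating conditions (2) and (3) simultaneously.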
## 4.3 The Spectral Theorem We are now ready to present and prove the most important theorem about when it is possible to find a basis that diagonalizes a special class of operators. This is the spectral theorem and it states that a self-adjoint linear operator on a finite-dimensional inner product space is diagonalizable with respect to an orthonormal basis. There are several reasons for why this particular result is important. Firstly, it forms the foundation for all of our other results for linear maps between inner product spaces, including isometries, skew-adjoint maps, and general linear maps between inner product spaces. Secondly, it is the one result of its type that has a truly satisfying generalization to infinite-dimensional spaces. In the infinite-dimensional setting, it becomes a cornerstone for several developments in analysis, functional analysis, partial differential equations, representation theory, and much more. First, we revisit some material from Sect. 2.5. Our general goal for linear operators L : V -> V is to find a basis such that the matrix representation for L is as simple as possible. Since the simplest matrices are the diagonal matrices, one might well ask if it is always possible to find a basis x 1,..., x m that diagonalizes L, i.e., Lx 1 = λ1 x 1,..., Lx m = λ m x m ? The central idea behind finding such a basis is quite simple and reappears in several proofs in this chapter. Given some special information about the linear operator L on V, we show that L ∗ has an eigenvector x≠0 and that the orthogonal complement to x in V is L-invariant. The existence of this invariant subspace of V then indicates that the procedure for establishing a particular result about exhibiting a nice matrix representation for L is a simple induction on the dimension of the vector space. Example 4.3.1. A rotation by 90 ∘ in ![ $${\\mathbb{R}}^{2}$$ ](A81414_1_En_4_Chapter_IEq25.gif) does not have a basis of eigenvectors. 
However, if we interpret it as a complex map on $\mathbb{C}$, it is just multiplication by i and therefore already diagonalized. We could also view the 2 ×2 matrix as a map on $\mathbb{C}^{2}$. As such, we can also diagonalize it by using x 1 = (i, 1) and x 2 = (− i, 1) so that x 1 is mapped to ix 1 and x 2 to − ix 2. Example 4.3.2. A much worse example is the linear map represented by $$A = \left[\begin{array}{cc} 0&1\\ 0&0 \end{array}\right].$$ Here x 1 = (1, 0) does have the property that Ax 1 = 0, but it is not possible to find x 2 linearly independent from x 1 so that Ax 2 = λx 2. In case λ = 0, we would just have A = 0, which is not true. So λ≠0, but then x 2 ∈ imA = span{x 1}. Note that using complex scalars cannot alleviate this situation due to the very general nature of the argument. At this point, it should be more or less clear that the first goal is to show that self-adjoint operators have eigenvalues. Recall that in Sects. 2.3 and 2.7, we constructed a characteristic polynomial for L with the property that any eigenvalue must be a root of this polynomial. This is fine if we work with complex scalars, as we can then appeal to the fundamental theorem of algebra (Theorem 2.18) in order to find roots. But this is less satisfactory if we use real scalars. Although it is in fact not hard to deal with by passing to suitable matrix representations (see Exercise 9 in Sect. 4.1), Lagrange gave a very elegant proof (and most likely the first proof) that self-adjoint operators have real eigenvalues using Lagrange multipliers. We shall give a similar proof here that does not require quite as much knowledge of multivariable calculus. Theorem 4.3.3. (Existence of Eigenvalues for Self-Adjoint Operators) Let L : V -> V be self-adjoint on a finite-dimensional inner product space. Then, L has a real eigenvalue. Proof.
We use the compact set S = {x ∈ V : (x | x) = 1} and the real-valued function x -> (Lx | x) on S. Select x 1 ∈ S so that $$\left(Lx\vert x\right) \leq \left(L{x}_{1}\vert {x}_{1}\right)$$ for all x ∈ S. If we define λ1 = (Lx 1 | x 1), then this implies that $$\left(Lx\vert x\right) \leq {\lambda }_{1},\text{ for all }x \in S.$$ Consequently, $$\left(Lx\vert x\right) \leq {\lambda }_{1}\left(x\vert x\right) = {\lambda }_{1}{\left\Vert x\right\Vert }^{2},\text{ for all }x \in V.$$ This shows that the real-valued function $$f\left(x\right) = \frac{\left(Lx\vert x\right)}{{\left\Vert x\right\Vert }^{2}}$$ has a maximum at x = x 1 and that the value there is λ1. This implies that for any y ∈ V, the function φ(t) = f(x 1 + ty) has a maximum at t = 0 and hence the derivative at t = 0 is zero. To be able to use this, we need to compute the derivative of $$\phi \left(t\right) = \frac{\left(L\left({x}_{1} + ty\right)\vert {x}_{1} + ty\right)}{{\left\Vert {x}_{1} + ty\right\Vert }^{2}}$$ at t = 0.
We start by computing the derivative of the numerator at t = 0 using the definition of a derivative ![ $$ \\begin{array}{rcl} & {\\lim }_{h\\rightarrow 0}\\frac{\\left \(L\\left \({x}_{1}+hy\\right \)\\vert {x}_{1}+hy\\right \)-\\left \(L\\left \({x}_{1}\\right \)\\vert {x}_{1}\\right \)} {h} & \\\\ & \\quad {=\\lim }_{h\\rightarrow 0}\\frac{\\left \(L\\left \(hy\\right \)\\vert {x}_{1}\\right \)+\\left \(L\\left \({x}_{1}\\right \)\\vert hy\\right \)+\\left \(L\\left \(hy\\right \)\\vert hy\\right \)} {h} & \\\\ & \\quad {=\\lim }_{h\\rightarrow 0}\\frac{\\left \(hy\\vert L\\left \({x}_{1}\\right \)\\right \)+\\left \(L\\left \({x}_{1}\\right \)\\vert hy\\right \)+\\left \(L\\left \(hy\\right \)\\vert hy\\right \)} {h} & \\\\ & \\quad = \\left \(y\\vert L\\left \({x}_{1}\\right \)\\right \) + \\left \(L\\left \({x}_{1}\\right \)\\vert y\\right \) {+\\lim }_{h\\rightarrow 0}\\left \(L\\left \(y\\right \)\\vert hy\\right \)& \\\\ & \\quad = 2\\mathrm{Re}\\left \(L\\left \({x}_{1}\\right \)\\vert y\\right \). & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ16.gif) The derivative of the denominator is computed the same way simply observing that we can let L = 1 V . 
The derivative of the quotient at t = 0 can now be calculated using that ‖x 1‖ = 1, λ1 = (Lx 1 | x 1), and the fact that $${\lambda }_{1} \in \mathbb{R}$$: $$\begin{aligned}{\phi }^{\prime}\left(0\right) &= \frac{2\mathrm{Re}\left(L\left({x}_{1}\right)\vert y\right){\left\Vert {x}_{1}\right\Vert }^{2} - 2\mathrm{Re}\left({x}_{1}\vert y\right)\left(L\left({x}_{1}\right)\vert {x}_{1}\right)}{{\left\Vert {x}_{1}\right\Vert }^{4}} \\ &= 2\mathrm{Re}\left(L\left({x}_{1}\right)\vert y\right) - 2\mathrm{Re}\left({x}_{1}\vert y\right){\lambda }_{1} \\ &= 2\mathrm{Re}\left(L\left({x}_{1}\right) - {\lambda }_{1}{x}_{1}\vert y\right).\end{aligned}$$ Since ϕ′(0) = 0 for any choice of y, we note that by using y = Lx 1 − λ1 x 1, we obtain $$0 = {\phi }^{\prime}\left(0\right) = 2{\left\Vert L\left({x}_{1}\right) - {\lambda }_{1}{x}_{1}\right\Vert }^{2}.$$ This shows that λ1 and x 1 form an eigenvalue/vector pair. □ We can now prove: Theorem 4.3.4. (The Spectral Theorem) Let L : V -> V be a self-adjoint operator on a finite-dimensional inner product space. Then, there exists an orthonormal basis e 1 ,..., e n of eigenvectors, i.e., Le 1 = λ 1 e 1 ,..., Le n = λ n e n . Moreover, all eigenvalues λ 1 ,..., λ n are real. Proof. We just proved that we can find an eigenvalue/vector pair Le 1 = λ1 e 1. Recall that λ1 was real and we can, if necessary, multiply e 1 by a suitable scalar to make it a unit vector. Next, we use again that L is self-adjoint to see that L leaves the orthogonal complement to e 1 invariant (see also Proposition 4.1.7), i.e., L(M) ⊂ M, where M = {x ∈ V : (x | e 1) = 0}.
To show this, let x ∈ M and calculate ![ $$\\begin{array}{rcl} \\left \(L\\left \(x\\right \)\\vert {e}_{1}\\right \)& =& \\left \(x\\vert {L}^{{_\\ast}}\\left \({e}_{ 1}\\right \)\\right \) \\\\ & =& \\left \(x\\vert L\\left \({e}_{1}\\right \)\\right \) \\\\ & =& \\left \(x\\vert {\\lambda }_{1}{e}_{1}\\right \) \\\\ & =& \\bar{{\\lambda }}_{1}\\left \(x\\vert {e}_{1}\\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ18.gif) Now, we have a new operator L : M -> M on a space of dimension dimM = dimV − 1. We note that this operator is also self-adjoint. Thus, we can use induction on dimV to prove the theorem. Alternatively, we can extract an eigenvalue/vector pair Le 2 = λ2 e 2, where e 2 ∈ M is a unit vector and then pass down to the orthogonal complement of e 2 inside M. This procedure will end in dimV steps and will also generate an orthonormal basis of eigenvectors as the vectors are chosen successively to be orthogonal to each other. □ In terms of matrix representations (see Sects. 1.7 and 3.5), we have proven the following: Corollary 4.3.5. Let L : V -> V be a self-adjoint operator on a finite-dimensional inner product space. Then, there exists an orthonormal basis e 1 ,...,e n of eigenvectors and a real n × n diagonal matrix D such that ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]D{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} \\\\ & =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\lambda }_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\lambda }_{n} \\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ19.gif) The same eigenvalue can apparently occur several times, just think of the operator 1 V . 
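For concrete matrices, the decomposition in Corollary 4.3.5 can be computed numerically. The following is a minimal sketch, assuming NumPy is available; `numpy.linalg.eigh` is the routine designed for self-adjoint (Hermitian) matrices. It checks that the eigenvalues are real, that the eigenvectors form an orthonormal basis, and that A = E D E* reconstructs the matrix.

```python
import numpy as np

# A self-adjoint (Hermitian) matrix: A equals its conjugate transpose.
A = np.array([[0, -1j],
              [1j,  0]])

# eigh is specialized to self-adjoint matrices: it returns real
# eigenvalues in ascending order and orthonormal eigenvectors.
eigenvalues, E = np.linalg.eigh(A)

# The columns of E form an orthonormal basis: E* E = I.
assert np.allclose(E.conj().T @ E, np.eye(2))

# The eigenvalues are real; here they are -1 and 1, since A is
# also unitary.
assert np.allclose(eigenvalues.imag, 0)
assert np.allclose(eigenvalues, [-1.0, 1.0])

# Spectral decomposition: A = E D E*.
D = np.diag(eigenvalues)
assert np.allclose(E @ D @ E.conj().T, A)
```

The same check applies verbatim to any real symmetric matrix; `eigh` then returns real orthonormal eigenvectors and the conjugate transpose reduces to the transpose.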
Recall that the geometric multiplicity of an eigenvalue λ is dim ker(L − λ1 V ). This is clearly the same as the number of times it occurs in the above diagonal form of the operator. Thus, the basis vectors that correspond to λ in the diagonalization yield a basis for ker(L − λ1 V ). With this in mind, we can rephrase the spectral theorem. In the form stated below, it is also known as the spectral resolution of L with respect to 1 V as both of these operators are resolved according to the eigenspaces for L. Theorem 4.3.6. (The Spectral Resolution of Self-Adjoint Operators) Let L : V -> V be a self-adjoint operator on a finite-dimensional inner product space and λ 1 ,...,λ k the distinct eigenvalues for L. Then $${1}_{V} = \mathrm{proj}_{\ker \left(L-{\lambda }_{1}{1}_{V}\right)} + \cdots + \mathrm{proj}_{\ker \left(L-{\lambda }_{k}{1}_{V}\right)}$$ and $$L = {\lambda }_{1}\mathrm{proj}_{\ker \left(L-{\lambda }_{1}{1}_{V}\right)} + \cdots + {\lambda }_{k}\mathrm{proj}_{\ker \left(L-{\lambda }_{k}{1}_{V}\right)}.$$ Proof. The proof relies on showing that eigenspaces are mutually orthogonal to each other. This actually follows from our constructions in the proof of the spectral theorem. Nevertheless, it is desirable to have a direct proof of this. Let Lx = λx and Ly = μy; then $$\begin{aligned}\lambda \left(x\vert y\right) &= \left(L\left(x\right)\vert y\right) \\ &= \left(x\vert L\left(y\right)\right) \\ &= \left(x\vert \mu y\right) \\ &= \mu \left(x\vert y\right)\text{ since }\mu \text{ is real.}\end{aligned}$$ If λ≠μ, then we get $$\left(\lambda - \mu \right)\left(x\vert y\right) = 0,$$ which implies (x | y) = 0.
With this in mind, we can now see that if x i ∈ kerL − λ i 1 V , then ![ $$\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{j}{1}_{V }\\right \)}\\left \({x}_{i}\\right \) = \\left \\{\\begin{array}{cc} {x}_{j}&\\mathrm{ if }i = j \\\\ 0 & \\mathrm{ if }\\mathit{i}\\neq j \\end{array} \\right.$$ ](A81414_1_En_4_Chapter_Equai.gif) as x i is perpendicular to kerL − λ j 1 V in case i≠j. Since we can write x = x 1 + ⋯ + x k , where x i ∈ kerL − λ i 1 V , we have ![ $$\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{i}{1}_{V }\\right \)}\\left \(x\\right \) = {x}_{i}.$$ ](A81414_1_En_4_Chapter_Equaj.gif) This shows that ![ $$x =\\mathrm{{ proj}}_{\\ker \\left \(L-{\\lambda }_{1}{1}_{V }\\right \)}\\left \(x\\right \) + \\cdots+\\mathrm{{ proj}}_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}\\left \(x\\right \)$$ ](A81414_1_En_4_Chapter_Equak.gif) as well as ![ $$L\\left \(x\\right \) = \\left \({\\lambda }_{1}\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{1}{1}_{V }\\right \)} + \\cdots+ {\\lambda }_{k}\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}\\right \)\\left \(x\\right \).$$ ](A81414_1_En_4_Chapter_Equal.gif) □ The fact that we can diagonalize self-adjoint operators has an immediate consequence for complex skew-adjoint operators as they become self-adjoint after multiplying them by ![ $$i = \\sqrt{-1}.$$ ](A81414_1_En_4_Chapter_IEq29.gif) Thus, we have Corollary 4.3.7 (The Spectral Theorem for Complex Skew-Adjoint Operators). Let L : V -> V be a skew-adjoint linear operator on a complex finite-dimensional inner product space. Then, we can find an orthonormal basis such that Le 1 = iμ1e1,...,Len =iμ n e n , where ![ $${\\mu }_{1},\\ldots,{\\mu }_{n} \\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_IEq30.gif) It is worth pondering this statement. Apparently, we have not said anything about skew-adjoint real linear operators. 
The statement, however, does cover both real and complex matrices as long as we view them as maps on ![ $${\\mathbb{C}}^{n}$$ ](A81414_1_En_4_Chapter_IEq31.gif). It just so happens that the corresponding diagonal matrix has purely imaginary entries, unless they are 0, and hence is forced to be complex. Before doing some examples, it is worthwhile trying to find a way of remembering the formula ![ $$L = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]D{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}.$$ ](A81414_1_En_4_Chapter_Equam.gif) If we solve it for D instead, it reads ![ $$D ={ \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}L\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equan.gif) This is quite natural as ![ $$L\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {\\lambda }_{1}{e}_{1} & \\cdots &{\\lambda }_{n}{e}_{n} \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equao.gif) and then observing that ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}\\left \[\\begin{array}{ccc} {\\lambda }_{ 1}{e}_{1} & \\cdots &{\\lambda }_{n}{e}_{n} \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equap.gif) is the matrix whose ij entry is λ j e j | e i as the rows ![ $${\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}$$ ](A81414_1_En_4_Chapter_IEq32.gif) correspond to the columns in ![ $$\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_IEq33.gif) This gives a quick check for whether we have the change of basis matrices in the right places. Example 4.3.8. 
Let $$A = \left[\begin{array}{cc} 0& -i\\ i& 0 \end{array}\right].$$ Then, A is both self-adjoint and unitary. This shows that ± 1 are the only possible eigenvalues. We can easily find nontrivial solutions to both equations $$\left(A \mp {1}_{{\mathbb{C}}^{2}}\right)\left(x\right) = 0$$ by observing that $$\left(A - {1}_{{\mathbb{C}}^{2}}\right)\left[\begin{array}{c} -i\\ 1 \end{array}\right] = \left[\begin{array}{cc} -1& -i\\ i& -1 \end{array}\right]\left[\begin{array}{c} -i\\ 1 \end{array}\right] = 0,\qquad \left(A + {1}_{{\mathbb{C}}^{2}}\right)\left[\begin{array}{c} i\\ 1 \end{array}\right] = \left[\begin{array}{cc} 1& -i\\ i& 1 \end{array}\right]\left[\begin{array}{c} i\\ 1 \end{array}\right] = 0.$$ The vectors $${z}_{1} = \left[\begin{array}{c} -i\\ 1 \end{array}\right],\quad {z}_{2} = \left[\begin{array}{c} i\\ 1 \end{array}\right]$$ form an orthogonal set that we can normalize to an orthonormal basis of eigenvectors $${x}_{1} = \left[\begin{array}{c} \frac{-i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{array}\right],\quad {x}_{2} = \left[\begin{array}{c} \frac{i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{array}\right].$$ This means that $$A = \left[\begin{array}{cc} {x}_{1}&{x}_{2} \end{array}\right]\left[\begin{array}{cc} 1& 0\\ 0& -1 \end{array}\right]{\left[\begin{array}{cc} {x}_{1}&{x}_{2} \end{array}\right]}^{-1}$$ or more concretely that $$\left[\begin{array}{cc} 0& -i\\ i& 0 \end{array}\right] = \left[\begin{array}{cc} \frac{-i}{\sqrt{2}}& \frac{i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right]\left[\begin{array}{cc} 1& 0\\ 0& -1 \end{array}\right]\left[\begin{array}{cc} \frac{i}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{-i}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right].$$ Example 4.3.9. Let $$B = \left[\begin{array}{cc} 0& -1\\ 1& 0 \end{array}\right].$$ The corresponding self-adjoint matrix is $$\left[\begin{array}{cc} 0& -i\\ i& 0 \end{array}\right].$$ Using the identity $$\left[\begin{array}{cc} 0& -i\\ i& 0 \end{array}\right] = \left[\begin{array}{cc} \frac{-i}{\sqrt{2}}& \frac{i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right]\left[\begin{array}{cc} 1& 0\\ 0& -1 \end{array}\right]\left[\begin{array}{cc} \frac{i}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{-i}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right]$$ and then multiplying by − i to get back to $$\left[\begin{array}{cc} 0& -1\\ 1& 0 \end{array}\right],$$ we obtain $$\left[\begin{array}{cc} 0& -1\\ 1& 0 \end{array}\right] = \left[\begin{array}{cc} \frac{-i}{\sqrt{2}}& \frac{i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right]\left[\begin{array}{cc} -i& 0\\ 0& i \end{array}\right]\left[\begin{array}{cc} \frac{i}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{-i}{\sqrt{2}}& \frac{1}{\sqrt{2}} \end{array}\right].$$ It is often more convenient to find the eigenvalues using the characteristic polynomial; to see why this is, let us consider some more complicated
examples. Example 4.3.10. We consider the real symmetric operator ![ $$A = \\left \[\\begin{array}{cc} \\alpha &\\beta \\\\ \\beta&\\alpha\\end{array} \\right \],\\alpha,\\beta\\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_Equba.gif) This time, one can more or less readily see that ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ -1 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbb.gif) are eigenvectors and that the corresponding eigenvalues are α ± β. But if one did not guess that, then computing the characteristic polynomial is clearly the way to go. Even with relatively simple examples such as ![ $$A = \\left \[\\begin{array}{cc} 1&1\\\\ 1 &2 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbc.gif) things quickly get out of hand. Clearly, the method of using Gauss elimination on the system ![ $$A - \\lambda {1}_{{\\mathbb{C}}^{n}}$$ ](A81414_1_En_4_Chapter_IEq35.gif) and then finding conditions on λ that ensure that we have nontrivial solutions is more useful in finding all eigenvalues/vectors. Example 4.3.11. 
Let us try this with ![ $$A = \\left \[\\begin{array}{cc} 1&1\\\\ 1 &2 \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equbd.gif) Thus, we consider ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cc} 1 - \\lambda & 1\\\\ 1 &2 - \\lambda\\end{array} \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{cc} 1 &2 - \\lambda \\\\ 1 - \\lambda& 1 \\end{array} \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right \]& \\\\ & \\left \[\\begin{array}{cc} 1& \\left \(2 - \\lambda \\right \)\\\\ 0 & - \\left \(1 - \\lambda\\right \) \\left \(2 - \\lambda \\right \) + 1 \\end{array} \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right \]& \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ22.gif) Thus, there is a nontrivial solution precisely when ![ $$-\\left \(1 - \\lambda \\right \)\\left \(2 - \\lambda \\right \) + 1 = -1 + 3\\lambda- {\\lambda }^{2} = 0.$$ ](A81414_1_En_4_Chapter_Eqube.gif) The roots of this polynomial are ![ $${\\lambda }_{1,2} = \\frac{3} {2} \\pm \\frac{1} {2}\\sqrt{5}$$ ](A81414_1_En_4_Chapter_IEq36.gif). The corresponding eigenvectors are found by inserting the root and then finding a nontrivial solution. 
Thus, we are trying to solve $$\left[\begin{array}{cc|c} 1&\left(2 - {\lambda }_{1,2}\right)&0\\ 0&0&0 \end{array}\right]$$ which means that $${x}_{1,2} = \left[\begin{array}{c} {\lambda }_{1,2} - 2\\ 1 \end{array}\right].$$ We should normalize this to get a unit vector $${e}_{1,2} = \frac{1}{\sqrt{5 - 4{\lambda }_{1,2} + {\left({\lambda }_{1,2}\right)}^{2}}}\left[\begin{array}{c} {\lambda }_{1,2} - 2\\ 1 \end{array}\right] = \frac{1}{\sqrt{\frac{5}{2} \mp \frac{1}{2}\sqrt{5}}}\left[\begin{array}{c} \frac{-1 \pm \sqrt{5}}{2}\\ 1 \end{array}\right].$$ ### 4.3.1 Exercises 1. Let L be self- or skew-adjoint on a complex finite-dimensional inner product space. (a) Show that L = K 2 for some K : V -> V. (b) Show by example that K need not be self-adjoint if L is self-adjoint. (c) Show by example that K need not be skew-adjoint if L is skew-adjoint. 2. Diagonalize the matrix that is zero everywhere except for 1s on the anti-diagonal. $$\left[\begin{array}{cccc} 0&\cdots &0&1\\ \vdots & &1&0\\ 0& & &\vdots\\ 1&0&\cdots &0 \end{array}\right]$$ 3. Diagonalize the real matrix that has αs on the diagonal and βs everywhere else. $$\left[\begin{array}{cccc} \alpha &\beta &\cdots &\beta\\ \beta &\alpha & &\beta\\ \vdots & &\ddots &\vdots\\ \beta &\beta &\cdots &\alpha \end{array}\right]$$ 4. Let K, L : V -> V be self-adjoint operators on a finite-dimensional vector space. If KL = LK, then show that there is an orthonormal basis diagonalizing both K and L. 5. Let L : V -> V be self-adjoint.
If there is a unit vector x ∈ V such that $$\left\Vert L\left(x\right) - \mu x\right\Vert \leq \epsilon,$$ then L has an eigenvalue λ so that |λ − μ| ≤ ε. 6. Let L : V -> V be self-adjoint on a finite-dimensional inner product space. Show that either ‖L‖ or − ‖L‖ is an eigenvalue for L. 7. If an operator L : V -> V on a finite-dimensional inner product space satisfies one of the following four conditions, then it is said to be positive. Show that these conditions are equivalent: (a) L is self-adjoint with positive eigenvalues. (b) L is self-adjoint and (Lx | x) > 0 for all x ∈ V − {0}. (c) L = K ∗ ∘ K for an injective operator K : V -> W, where W is also an inner product space. (d) L = K ∘ K for an invertible self-adjoint operator K : V -> V. 8. Let P, Q be two positive operators on a finite-dimensional inner product space. If P 2 = Q 2, then show that P = Q. 9. Let P be a nonnegative operator on a finite-dimensional inner product space, i.e., self-adjoint with nonnegative eigenvalues. (a) Show that trP ≥ 0. (b) Show that P = 0 if and only if trP = 0. 10. Let L : V -> V be a linear operator on a finite-dimensional inner product space. (a) If L is self-adjoint, show that L 2 is self-adjoint and has nonnegative eigenvalues. (b) If L is skew-adjoint, show that L 2 is self-adjoint and has nonpositive eigenvalues. 11. Consider the Killing form on Hom(V, V), where V is a finite-dimensional vector space of dimension > 1, defined by $$\mathrm{K}\left(L,K\right) = \mathrm{tr}L\,\mathrm{tr}K - \mathrm{tr}\left(LK\right).$$ (a) Show that K(L, K) = K(K, L). (b) Show that K -> K(L, K) is linear. (c) Assume in addition that V is an inner product space. Show that K(L, L) > 0 if L is skew-adjoint and L≠0. (d) Show that K(L, L) < 0 if L is self-adjoint and L≠0. (e) Show that K is nondegenerate, i.e., if L≠0, then we can find K≠0, so that K(L, K)≠0.
Hint: Let ![ $$K = \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq37.gif) or ![ $$K = \\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq38.gif) depending on the value of ![ $$\\mathrm{tr}\\left \(\\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)\\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \)\\right \)$$ ](A81414_1_En_4_Chapter_IEq39.gif). ## 4.4 Normal Operators The concept of a normal operator is somewhat more general than the previous special types of operators we have encountered. The definition is quite simple and will be motivated below. Definition 4.4.1. We say that a linear operator L : V -> V on a finite-dimensional inner product space is normal if LL ∗ = L ∗ L. With this definition, it is clear that all self-adjoint, skew-adjoint, and isometric operators are normal. First, let us show that any operator that is diagonalizable with respect to an orthonormal basis must be normal. Suppose that L is diagonalized in the orthonormal basis e 1,..., e n and that D is the diagonal matrix representation in this basis, then ![ $$\\begin{array}{rcl} L& =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]D{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} \\\\ & =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\lambda }_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\lambda }_{n} \\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}},\\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ24.gif) and ![ $$\\begin{array}{rcl}{ L}^{{_\\ast}}& =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]{D}^{{_\\ast}}{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} \\\\ & =& \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} 
\\end{array} \\right \]\\left \[\\begin{array}{ccc} \\bar{{\\lambda }}_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &\\bar{{\\lambda }}_{n} \\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ25.gif) Thus, ![ $$\\begin{array}{rcl} L{L}^{{_\\ast}}& = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\lambda }_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\lambda }_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} \\bar{{\\lambda }}_{1} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &\\bar{{\\lambda }}_{n} \\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}}& \\\\ & = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\left \\vert {\\lambda }_{1}\\right \\vert }^{2} & \\cdots & 0\\\\ \\vdots & \\ddots & \\vdots \\\\ 0 &\\cdots &{\\left \\vert {\\lambda }_{n}\\right \\vert }^{2} \\end{array} \\right \]{\\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]}^{{_\\ast}} & \\\\ & = {L}^{{_\\ast}}L & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ26.gif) since DD ∗ = D ∗ D. For real operators, the spectral theorem tells us that they must be self-adjoint in order to be diagonalizable with respect to an orthonormal basis. For complex operators, things are a little different as also skew-adjoint operators are diagonalizable with respect to an orthonormal basis. Below we shall generalize the spectral theorem to normal operators and show that in the complex case, these are precisely the operators that can be diagonalized with respect to an orthonormal basis. 
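The defining condition LL ∗ = L ∗ L is easy to test numerically for matrices. Below is a minimal sketch assuming NumPy; the helper `is_normal` is an ad hoc name, not a library function. It confirms that self-adjoint, skew-adjoint, and isometric matrices are normal, and that a normal matrix need not be of any of those three types.

```python
import numpy as np

def is_normal(A, tol=1e-12):
    """Check the defining condition A A* == A* A up to round-off."""
    A = np.asarray(A, dtype=complex)
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

# Self-adjoint and skew-adjoint matrices are normal.
assert is_normal([[2, 1], [1, 3]])      # real symmetric
assert is_normal([[0, -1], [1, 0]])     # skew-symmetric (also orthogonal)

# The matrix of multiplication by 1 + i on C, viewed as a map on R^2,
# is normal but neither self-adjoint, skew-adjoint, nor an isometry.
assert is_normal([[1, -1], [1, 1]])

# An upper triangular matrix with a nonzero entry above the diagonal
# fails the test, even when it is diagonalizable.
assert not is_normal([[1, 1], [0, 2]])
```

The last matrix is diagonalizable with respect to a basis that is not orthonormal, which is exactly the distinction the normality condition detects.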
Another very simple normal operator that is not necessarily of those three types is the homothety λ1 V for all ![ $$\\lambda\\in\\mathbb{C}.$$ ](A81414_1_En_4_Chapter_IEq40.gif) The canonical form for real normal operators is somewhat more complicated and will be studied in Sect. 4.6. Example 4.4.2. We note that ![ $$\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbl.gif) is not normal as ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&0\\\\ 1 &2 \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} 2&2\\\\ 2 &4 \\end{array} \\right \], \\\\ \\left \[\\begin{array}{cc} 1&0\\\\ 1 &2 \\end{array} \\right \]\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]& =& \\left \[\\begin{array}{cc} 1&1\\\\ 1 &5 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ27.gif) Nevertheless, it is diagonalizable with respect to the basis ![ $${x}_{1} = \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \],{x}_{2} = \\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbm.gif) as ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]\\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \]& =& \\left \[\\begin{array}{c} 1\\\\ 0 \\end{array} \\right \], \\\\ \\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \]\\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \]& =& \\left \[\\begin{array}{c} 2\\\\ 2 \\end{array} \\right \] = 2\\left \[\\begin{array}{c} 1\\\\ 1 \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ28.gif) While we can normalize x 2 to be a unit vector, there is nothing we can do about x 1 and x 2 not being perpendicular. Example 4.4.3. 
Let ![ $$A = \\left \[\\begin{array}{cc} \\alpha &\\gamma\\\\ \\beta&\\delta\\end{array} \\right \] : {\\mathbb{C}}^{2} \\rightarrow{\\mathbb{C}}^{2}.$$ ](A81414_1_En_4_Chapter_Equbn.gif) Then, ![ $$\\begin{array}{rcl} A{A}^{{_\\ast}}& =& \\left \[\\begin{array}{cc} \\alpha &\\gamma \\\\ \\beta& \\delta\\end{array} \\right \]\\left \[\\begin{array}{cc} \\bar{\\alpha }&\\bar{\\beta } \\\\ \\bar{\\gamma }& \\bar{\\delta } \\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\left \\vert \\alpha \\right \\vert }^{2} +{ \\left \\vert \\gamma \\right \\vert }^{2} & \\alpha \\bar{\\beta } + \\gamma \\bar{\\delta } \\\\ \\bar{\\alpha }\\beta+\\bar{ \\gamma }\\delta & {\\left \\vert \\beta \\right \\vert }^{2} +{ \\left \\vert \\delta \\right \\vert }^{2} \\end{array} \\right \] \\\\ {A}^{{_\\ast}}A& =& \\left \[\\begin{array}{cc} \\bar{\\alpha }&\\bar{\\beta } \\\\ \\bar{\\gamma }& \\bar{\\delta } \\end{array} \\right \]\\left \[\\begin{array}{cc} \\alpha &\\gamma\\\\ \\beta&\\delta\\end{array} \\right \] = \\left \[\\begin{array}{cc} {\\left \\vert \\alpha \\right \\vert }^{2} +{ \\left \\vert \\beta \\right \\vert }^{2} & \\bar{\\alpha }\\gamma+\\bar{ \\beta }\\delta\\\\ \\alpha \\bar{\\gamma } + \\beta \\bar{\\delta }& {\\left \\vert \\gamma \\right \\vert }^{2} +{ \\left \\vert \\delta \\right \\vert }^{2} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ29.gif) So the conditions for A to be normal are ![ $$\\begin{array}{rcl}{ \\left \\vert \\beta \\right \\vert }^{2}& =&{ \\left \\vert \\gamma \\right \\vert }^{2}, \\\\ \\alpha \\bar{\\gamma } + \\beta \\bar{\\delta }& =& \\bar{\\alpha }\\beta+\\bar{ \\gamma }\\delta.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ30.gif) The last equation is easier to remember if we note that it means that the columns of A must have the same inner product as the columns of A ∗ . Proposition 4.4.4. (Characterization of Normal Operators) Let L : V -> V be an operator on a finite-dimensional inner product space. 
Then, the following conditions are equivalent: (1) LL ∗ = L ∗ L. (2) ∥L(x)∥ = ∥L ∗ (x)∥ for all x ∈ V. (3) AB = BA, where ![ $$A = \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq41.gif) and ![ $$B = \\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq42.gif). Proof. 1 ⇔ 2 : Note that for all x ∈ V, we have ![ $$\\begin{array}{@{}l@{\\,}l} \\left \\Vert L\\left \(x\\right \)\\right \\Vert = \\left \\Vert {L}^{{_\\ast}}\\left \(x\\right \)\\right \\Vert \\,& \\Leftrightarrow {\\left \\Vert L\\left \(x\\right \)\\right \\Vert }^{2} ={ \\left \\Vert {L}^{{_\\ast}}\\left \(x\\right \)\\right \\Vert }^{2} \\\\ \\,& \\Leftrightarrow \\left \(L\\left \(x\\right \)\\vert L\\left \(x\\right \)\\right \) = \\left \({L}^{{_\\ast}}\\left \(x\\right \)\\vert {L}^{{_\\ast}}\\left \(x\\right \)\\right \) \\\\ \\,& \\Leftrightarrow \\left \(x\\vert {L}^{{_\\ast}}L\\left \(x\\right \)\\right \) = \\left \(x\\vert L{L}^{{_\\ast}}\\left \(x\\right \)\\right \) \\\\ \\,& \\Leftrightarrow \\left \(x\\vert \\left \({L}^{{_\\ast}}L - L{L}^{{_\\ast}}\\right \)\\left \(x\\right \)\\right \) = 0 \\\\ \\,& \\Leftrightarrow{L}^{{_\\ast}}L - L{L}^{{_\\ast}} =0 \\end{array}$$ ](A81414_1_En_4_Chapter_Equbo.gif) The last implication is a consequence of the fact that L ∗ L − LL ∗ is self-adjoint (see Proposition 4.2.1). 
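The three characterizations can also be checked numerically on concrete matrices. A small sketch, assuming NumPy and sample matrices of this illustration's choosing (condition (2) is only spot-checked on random vectors):

```python
import numpy as np

def cond1(L):                         # (1) L L* = L* L
    return np.allclose(L @ L.conj().T, L.conj().T @ L)

def cond2(L, trials=20, seed=0):      # (2) ||L(x)|| = ||L*(x)|| on sample vectors
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    for _ in range(trials):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        if not np.isclose(np.linalg.norm(L @ x), np.linalg.norm(L.conj().T @ x)):
            return False
    return True

def cond3(L):                         # (3) A B = B A for the (anti)symmetrized parts
    A = (L + L.conj().T) / 2
    B = (L - L.conj().T) / 2
    return np.allclose(A @ B, B @ A)

N = np.array([[0, 1], [-1, 0]], dtype=complex)  # normal (skew-adjoint)
M = np.array([[1, 1], [0, 2]], dtype=complex)   # not normal (Example 4.4.2)
assert cond1(N) and cond2(N) and cond3(N)
assert not cond1(M) and not cond2(M) and not cond3(M)
```

All three conditions agree on both samples, as the proposition predicts.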
3 ⇔ 1 : We note that ![ $$\\begin{array}{rcl} AB& =& \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)\\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \) \\\\ & =& \\frac{1} {4}\\left \(L + {L}^{{_\\ast}}\\right \)\\left \(L - {L}^{{_\\ast}}\\right \) \\\\ & =& \\frac{1} {4}\\left \({L}^{2} -{\\left \({L}^{{_\\ast}}\\right \)}^{2} + {L}^{{_\\ast}}L - L{L}^{{_\\ast}}\\right \), \\\\ BA& =& \\frac{1} {4}\\left \(L - {L}^{{_\\ast}}\\right \)\\left \(L + {L}^{{_\\ast}}\\right \) \\\\ & =& \\frac{1} {4}\\left \({L}^{2} -{\\left \({L}^{{_\\ast}}\\right \)}^{2} - {L}^{{_\\ast}}L + L{L}^{{_\\ast}}\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ31.gif) So AB = BA if and only if L ∗ L − LL ∗ = − L ∗ L + LL ∗ which is the same as saying that LL ∗ = L ∗ L. □ We also need a general result about invariant subspaces. Lemma 4.4.5. Let L : V -> V be a linear operator on a finite-dimensional inner product space. If M ⊂ V is an L- and L ∗ -invariant subspace, then M ⊥ is also L- and L ∗ -invariant. In particular ![ $${\\left \(L{\\vert }_{{M}^{\\perp }}\\right \)}^{{_\\ast}} = {L}^{{_\\ast}}{\\vert }_{{ M}^{\\perp }}.$$ ](A81414_1_En_4_Chapter_Equbp.gif) Proof. Let x ∈ M and y ∈ M ⊥ . We have to show that ![ $$\\begin{array}{rcl} \\left \(x\\vert L\\left \(y\\right \)\\right \)& =& 0, \\\\ \\left \(x\\vert {L}^{{_\\ast}}\\left \(y\\right \)\\right \)& =&0. \\end{array}$$ ](A81414_1_En_4_Chapter_Equ32.gif) For the first identity, use that ![ $$\\left \(x\\vert L\\left \(y\\right \)\\right \) = \\left \({L}^{{_\\ast}}\\left \(x\\right \)\\vert y\\right \) = 0$$ ](A81414_1_En_4_Chapter_Equbq.gif) as L ∗ x ∈ M. Similarly, for the second use that ![ $$\\left \(x\\vert {L}^{{_\\ast}}\\left \(y\\right \)\\right \) = \\left \(L\\left \(x\\right \)\\vert y\\right \) = 0$$ ](A81414_1_En_4_Chapter_Equbr.gif) as Lx ∈ M. □ We are now ready to prove the spectral theorem for normal operators. Theorem 4.4.6 (The Spectral Theorem for Normal Operators). 
Let L : V -> V be a normal operator on a finite-dimensional complex inner product space; then, there is an orthonormal basis e 1 ,...,e n of eigenvectors, i.e., Le 1 = λ 1 e 1 ,...,Le n = λ n e n. Proof. As with the spectral theorem (see Theorem 4.3.4), the proof depends on showing that we can find an eigenvalue and that the orthogonal complement to an eigenvector is invariant. Rather than appealing to the Fundamental Theorem of Algebra 2.1.8 in order to find an eigenvalue for L, we shall use what we know about self-adjoint operators. This has the advantage of also yielding a proof that works in the real case (see Sect. 4.6). First, decompose L = A + iC where ![ $$A = \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq43.gif) and ![ $$C = \\frac{1} {i} B = \\frac{1} {2i}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq44.gif) are both self-adjoint (compare with Proposition 4.4.4). Then, use Theorem 4.3.3 to find ![ $$\\alpha\\in\\mathbb{R}$$ ](A81414_1_En_4_Chapter_IEq45.gif) such that kerA − α1 V ≠0. If x ∈ kerA − α1 V , then ![ $$\\begin{array}{rcl} \\left \(A - \\alpha {1}_{V }\\right \)\\left \(C\\left \(x\\right \)\\right \)& =& AC\\left \(x\\right \) - \\alpha C\\left \(x\\right \) \\\\ & =& CA\\left \(x\\right \) - C\\left \(\\alpha x\\right \) \\\\ & =& C\\left \(\\left \(A - \\alpha {1}_{V }\\right \)\\left \(x\\right \)\\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ33.gif) Thus, kerA − α1 V is a C-invariant subspace. Using that C and hence also its restriction to kerA − α1 V is self-adjoint, we can find x ∈ kerA − α1 V so that Cx = βx, with ![ $$\\beta\\in\\mathbb{R}$$ ](A81414_1_En_4_Chapter_IEq46.gif) (see Theorem 4.3.3). 
This means that ![ $$\\begin{array}{rcl} L\\left \(x\\right \)& =& A\\left \(x\\right \) + iC\\left \(x\\right \) \\\\ & =& \\alpha x + \\mathit{i}\\beta x \\\\ & =& \\left \(\\alpha+ \\mathit{i}\\beta \\right \)x.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ34.gif) Hence, we have found an eigenvalue α + iβ for L with a corresponding eigenvector x. It follows from Proposition 3.5.2 that ![ $$\\begin{array}{rcl}{ L}^{{_\\ast}}\\left \(x\\right \)& =& A\\left \(x\\right \) - iC\\left \(x\\right \) \\\\ & =& \\left \(\\alpha-\\mathit{i}\\beta \\right \)x.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ35.gif) Thus, span{x} is both L- and L ∗ -invariant. Lemma 4.4.5 then shows that M = (span{x}) ⊥ is also L- and L ∗ -invariant. Hence, (L | M ) ∗ = L ∗ | M , showing that L | M : M -> M is also normal. We can then use induction as in the spectral theorem to finish the proof. □ As an immediate consequence, we get a result for unitary operators. Theorem 4.4.7 (The Spectral Theorem for Unitary Operators). Let L : V -> V be unitary; then, there is an orthonormal basis e 1 ,...,e n such that Le 1 = e iθ 1 e 1 ,...,Le n = e iθ n e n , where ![ $${\\theta }_{1},\\ldots,{\\theta }_{n} \\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_IEq47.gif) We also have the resolution version of the spectral theorem. Theorem 4.4.8 (The Spectral Resolution of Normal Operators). Let L : V -> V be a normal operator on a complex finite-dimensional inner product space and λ 1 ,...,λ k the distinct eigenvalues for L. Then ![ $${1}_{V } =\\mathrm{{ proj}}_{\\ker \\left \(L-{\\lambda }_{1}{1}_{V }\\right \)} + \\cdots+\\mathrm{{ proj}}_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}$$ ](A81414_1_En_4_Chapter_Equbs.gif) and ![ $$L = {\\lambda }_{1}\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{1}{1}_{V }\\right \)} + \\cdots+ {\\lambda }_{k}\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}.$$ ](A81414_1_En_4_Chapter_Equbt.gif) Let us see what happens in some examples. Example 4.4.9. 
Let ![ $$L = \\left \[\\begin{array}{cc} \\alpha&\\beta \\\\ - \\beta&\\alpha\\end{array} \\right \],\\alpha,\\beta\\in\\mathbb{R};$$ ](A81414_1_En_4_Chapter_Equbu.gif) then L is normal. When α = 0, it is skew-adjoint; when β = 0, it is self-adjoint; and when α2 + β2 = 1, it is an orthogonal transformation. The decomposition L = A + iC looks like ![ $$\\left \[\\begin{array}{cc} \\alpha&\\beta \\\\ - \\beta&\\alpha\\end{array} \\right \] = \\left \[\\begin{array}{cc} \\alpha & 0\\\\ 0 &\\alpha\\end{array} \\right \]+\\mathit{i}\\left \[\\begin{array}{cc} 0 & -\\mathit{i}\\beta \\\\ \\mathit{i} \\beta& 0 \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equbv.gif) Here ![ $$\\left \[\\begin{array}{cc} \\alpha & 0\\\\ 0 &\\alpha\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbw.gif) has α as an eigenvalue and ![ $$\\left \[\\begin{array}{cc} 0 & -\\mathit{i}\\beta \\\\ \\mathit{i} \\beta& 0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbx.gif) has ± β as eigenvalues. Thus, L has eigenvalues α ± iβ. Example 4.4.10. The matrix ![ $$\\left \[\\begin{array}{ccc} 0 &1&0\\\\ - 1 &0 &0 \\\\ 0 &0&1 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equby.gif) is normal and has 1 as an eigenvalue. We are then reduced to looking at ![ $$\\left \[\\begin{array}{cc} 0 &1\\\\ - 1 &0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equbz.gif) which has ± i as eigenvalues. ### 4.4.1 Exercises 1. Consider L A X = AX and R A X = XA as linear operators on ![ $$\\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \).$$ ](A81414_1_En_4_Chapter_IEq48.gif) What conditions do you need on A in order for these maps to be normal (see also Exercise 3 in Sect. 3.5)? 2. Assume that L : V -> V is normal and that ![ $$p \\in\\mathbb{F}\\left \[t\\right \].$$ ](A81414_1_En_4_Chapter_IEq49.gif) Show that pL is also normal. 3. Assume that L : V -> V is normal. Without using the spectral theorem, show: (a) kerL = kerL ∗ . 
(b) ![ $$\\ker \\left \(L - \\lambda {1}_{V }\\right \) =\\ker \\left \({L}^{{_\\ast}}-\\bar{ \\lambda }{1}_{V }\\right \)$$ ](A81414_1_En_4_Chapter_IEq50.gif). (c) imL = imL ∗ . (d) (kerL) ⊥ = imL. 4. Assume that L : V -> V is normal. Without using the spectral theorem, show: (a) kerL = kerL^k for any k ≥ 1. Hint: Use the self-adjoint operator K = L ∗ L. (b) imL = imL^k for any k ≥ 1. (c) ker(L − λ1 V ) = ker((L − λ1 V )^k) for any k ≥ 1. (d) Show that the minimal polynomial of L has no multiple roots. 5. (Characterization of Normal Operators) Let L : V -> V be a linear operator on a finite-dimensional inner product space. Show that L is normal if and only if (L ∘ E | L ∘ E) = (L ∗ ∘ E | L ∗ ∘ E) for all orthogonal projections E : V -> V. Hint: Use the formula ![ $$\\left \({L}_{1}\\vert {L}_{2}\\right \) ={ \\sum\\nolimits }_{i=1}^{n}\\left \({L}_{ 1}\\left \({e}_{i}\\right \)\\vert {L}_{2}\\left \({e}_{i}\\right \)\\right \)$$ ](A81414_1_En_4_Chapter_Equca.gif) from Exercise 4 in Sect. 3.5 for suitable choices of orthonormal bases e 1,..., e n for V. 6. (Reducibility of Normal Operators) Let L : V -> V be an operator on a finite-dimensional inner product space. Assume that M ⊂ V is an L-invariant subspace and let E : V -> V be the orthogonal projection onto M. 
(a) Justify all of the steps in the calculation: ![ $$\\begin{array}{rcl} \\left \({L}^{{_\\ast} } \\circ E\\vert {L}^{{_\\ast}}\\circ E\\right \)& =& \\left \({E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\vert {E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\right \) + \\left \(E \\circ{L}^{{_\\ast}}\\circ E\\vert E \\circ{L}^{{_\\ast}}\\circ E\\right \) \\\\ & =& \\left \({E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\vert {E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\right \) + \\left \(E \\circ L \\circ E\\vert E \\circ L \\circ E\\right \) \\\\ & =& \\left \({E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\vert {E}^{\\perp }\\circ{L}^{{_\\ast}}\\circ E\\right \) + \\left \(L \\circ E\\vert L \\circ E\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ36.gif) Hint: Use the result that E ∗ = E from Sect. 3.6 and that LM ⊂ M implies E ∘ L ∘ E = L ∘ E and Exercise 4 in Sect. 3.5. (b) If L is normal, use the previous exercise to conclude that M is L ∗ -invariant and M ⊥ is L-invariant, i.e., normal operators are completely reducible. 7. (Characterization of Normal Operators) Let L : V -> V be a linear map on a finite-dimensional inner product space. Assume that L has the property that all L-invariant subspaces are also L ∗ -invariant. (a) Show that L is completely reducible (see Proposition 4.1.7). (b) Show that the matrix representation with respect to an orthonormal basis is diagonalizable when viewed as complex matrix. (c) Show that L is normal. 8. Assume that L : V -> V satisfies L ∗ L = λ1 V , for some ![ $$\\lambda\\in\\mathbb{C}.$$ ](A81414_1_En_4_Chapter_IEq51.gif) Show that L is normal. 9. Show that if a projection is normal, then it is an orthogonal projection. 10. Show that if L : V -> V is normal and ![ $$p \\in\\mathbb{F}\\left \[t\\right \],$$ ](A81414_1_En_4_Chapter_IEq52.gif) then pL is also normal. 
Moreover, if ![ $$\\mathbb{F} = \\mathbb{C}$$ ](A81414_1_En_4_Chapter_IEq53.gif), then the spectral resolution is given by ![ $$p\\left \(L\\right \) = p\\left \({\\lambda }_{1}\\right \)\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{1}{1}_{V }\\right \)} + \\cdots+ p\\left \({\\lambda }_{k}\\right \)\\mathrm{{proj}}_{\\ker \\left \(L-{\\lambda }_{k}{1}_{V }\\right \)}.$$ ](A81414_1_En_4_Chapter_Equcb.gif) 11. Let L, K : V -> V be normal. Show by example that neither L + K nor LK need be normal. 12. Let A be an upper triangular matrix. Show that A is normal if and only if it is diagonal. Hint: Compute and compare the diagonal entries in AA ∗ and A ∗ A. 13. (Characterization of Normal Operators) Let L : V -> V be an operator on a finite-dimensional complex inner product space. Show that L is normal if and only if L ∗ = pL for some polynomial p. 14. (Characterization of Normal Operators) Let L : V -> V be an operator on a finite-dimensional complex inner product space. Show that L is normal if and only if L ∗ = LU for some unitary operator U : V -> V. 15. Let L : V -> V be normal on a finite-dimensional complex inner product space. Show that L = K 2 for some normal operator K. 16. Give the canonical form for the linear operators that are both self-adjoint and unitary. 17. Give the canonical form for the linear operators that are both skew-adjoint and unitary. ## 4.5 Unitary Equivalence In the special case where ![ $$V = {\\mathbb{F}}^{n}$$ ](A81414_1_En_4_Chapter_IEq54.gif), the spectral theorem can be rephrased in terms of change of basis. Recall from Sect. 
1.9 that if we pick a different basis x 1,..., x n for ![ $${\\mathbb{F}}^{n},$$ ](A81414_1_En_4_Chapter_IEq55.gif) then the matrix representations for a linear map represented by A in the standard basis and B in the new basis are related by ![ $$A = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]B{\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{-1}.$$ ](A81414_1_En_4_Chapter_Equcc.gif) In case x 1,..., x n is an orthonormal basis, this simplifies to ![ $$A = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]B{\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]}^{{_\\ast}},$$ ](A81414_1_En_4_Chapter_Equcd.gif) where ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_IEq56.gif) is a unitary or orthogonal operator. Definition 4.5.1. Two n ×n matrices A and B are said to be unitarily equivalent if A = UBU ∗ , where U ∈ U n , i.e., U is an n ×n matrix such that ![ $${U}^{{_\\ast}}U = U{U}^{{_\\ast}} = {1}_{{\\mathbb{F}}^{n}}.$$ ](A81414_1_En_4_Chapter_IEq57.gif) In case U ∈ O n ⊂ U n , we also say that the matrices are orthogonally equivalent. The results from the previous two sections can now be paraphrased in the following way. Corollary 4.5.2. Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq58.gif) (1) If A is normal, then A is unitarily equivalent to a diagonal matrix. (2) If A is self-adjoint, then A is unitarily (or orthogonally) equivalent to a real diagonal matrix. (3) If A is skew-adjoint, then A is unitarily equivalent to a diagonal matrix whose entries are purely imaginary. (4) If A is unitary, then A is unitarily equivalent to a diagonal matrix whose diagonal entries are unit scalars. Using the group properties of unitary matrices, one can easily show the next two results. Proposition 4.5.3. 
If A and B are unitarily equivalent, then (1) A is normal if and only if B is normal. (2) A is self-adjoint if and only if B is self-adjoint. (3) A is skew-adjoint if and only if B is skew-adjoint. (4) A is unitary if and only if B is unitary. Corollary 4.5.4. Two normal operators are unitarily equivalent if and only if they have the same eigenvalues (counted with multiplicities). Example 4.5.5. The Pauli matrices are defined by ![ $$\\left \[\\begin{array}{cc} 0&1\\\\ 1 &0 \\end{array} \\right \],\\left \[\\begin{array}{cc} 1& 0\\\\ 0 & -1 \\end{array} \\right \],\\left \[\\begin{array}{cc} 0& -\\mathit{i}\\\\ i & 0 \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equce.gif) They are all self-adjoint and unitary. Moreover, all have eigenvalues ± 1, so they are all unitarily equivalent. If we multiply the Pauli matrices by i, we get three skew-adjoint and unitary matrices with eigenvalues ± i : ![ $$\\left \[\\begin{array}{cc} 0 &1\\\\ - 1 &0 \\end{array} \\right \],\\left \[\\begin{array}{cc} i & 0\\\\ 0 & -i \\end{array} \\right \],\\left \[\\begin{array}{cc} 0& \\mathit{i}\\\\ i &0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcf.gif) that are also all unitarily equivalent. The eight matrices ![ $$\\pm \\left \[\\begin{array}{cc} 1&0\\\\ 0 &1 \\end{array} \\right \],\\pm \\left \[\\begin{array}{cc} i & 0\\\\ 0 & -i \\end{array} \\right \],\\pm \\left \[\\begin{array}{cc} 0 &1\\\\ - 1 &0 \\end{array} \\right \],\\pm \\left \[\\begin{array}{cc} 0& \\mathit{i}\\\\ i &0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcg.gif) form a group that corresponds to the quaternions ± 1, ± i, ± j, ± k. Example 4.5.6. The matrices ![ $$\\left \[\\begin{array}{cc} 1&1\\\\ 0 &2 \\end{array} \\right \],\\left \[\\begin{array}{cc} 1&0\\\\ 0 &2 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equch.gif) are not unitarily equivalent as the first is not normal while the second is normal. Note, however, that both are diagonalizable with the same eigenvalues. 
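The claims in Example 4.5.5 are easy to verify numerically. A short check, with NumPy as this illustration's choice of tool:

```python
import numpy as np

# The three Pauli matrices from Example 4.5.5
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[1, 0], [0, -1]], dtype=complex)
s3 = np.array([[0, -1j], [1j, 0]], dtype=complex)

for s in (s1, s2, s3):
    assert np.allclose(s, s.conj().T)                   # self-adjoint
    assert np.allclose(s @ s.conj().T, np.eye(2))       # unitary
    assert np.allclose(np.linalg.eigvalsh(s), [-1, 1])  # eigenvalues -1, +1

# Multiplying by i gives skew-adjoint unitary matrices with eigenvalues +-i
for s in (s1, s2, s3):
    k = 1j * s
    assert np.allclose(k, -k.conj().T)                  # skew-adjoint
    ev = sorted(np.linalg.eigvals(k), key=lambda z: z.imag)
    assert np.allclose(ev, [-1j, 1j])
```

All three matrices pass the same checks, consistent with Corollary 4.5.4's conclusion that they are mutually unitarily equivalent.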
### 4.5.1 Exercises 1. Decide which of the following matrices are unitarily equivalent: ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{cc} 1&1\\\\ 1 &1\\end{array} \\right \], \\\\ B& =& \\left \[\\begin{array}{cc} 2&2\\\\ 0 &0\\end{array} \\right \], \\\\ C& =& \\left \[\\begin{array}{cc} 2&0\\\\ 0 &0\\end{array} \\right \], \\\\ D& =& \\left \[\\begin{array}{cc} 1& -\\mathit{i}\\\\ i & 1\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ37.gif) 2. Decide which of the following matrices are unitarily equivalent: ![ $$\\begin{array}{rcl} A& =& \\left \[\\begin{array}{ccc} i &0&0\\\\ 0 &1 &0 \\\\ 0&0&1\\end{array} \\right \], \\\\ B& =& \\left \[\\begin{array}{ccc} 1& - 1&0 \\\\ i & i &1\\\\ 0 & 1 &1\\end{array} \\right \], \\\\ C& =& \\left \[\\begin{array}{ccc} 1&0&0\\\\ 1 & i &1 \\\\ 0&0&1\\end{array} \\right \], \\\\ D& =& \\left \[\\begin{array}{ccc} 1 + i & - \\frac{1} {\\sqrt{2}} -\\mathit{i} \\frac{1} {\\sqrt{2}} & 0 \\\\ \\frac{1} {\\sqrt{2}} + \\mathit{i} \\frac{1} {\\sqrt{2}} & 0 &0 \\\\ 0 & 0 &1\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ38.gif) 3. Assume that ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq59.gif) are unitarily equivalent. Show that if A has a square root, i.e., A = C 2 for some ![ $$C \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \),$$ ](A81414_1_En_4_Chapter_IEq60.gif) then B also has a square root. 4. Assume that ![ $$A,B \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq61.gif) are unitarily equivalent. Show that if A is positive, i.e., A is self-adjoint and has positive eigenvalues, then B is also positive. 5. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \)$$ ](A81414_1_En_4_Chapter_IEq62.gif) is normal. Show that A is unitarily equivalent to A ∗ if and only if A is self-adjoint. 
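As a companion to Exercise 1, Corollary 4.5.4 reduces unitary equivalence of normal matrices to comparing eigenvalue lists. A NumPy sketch (the helper names are this illustration's, and the criterion only applies once both inputs are verified normal):

```python
import numpy as np

def is_normal(A):
    return np.allclose(A @ A.conj().T, A.conj().T @ A)

def normal_unitarily_equivalent(A, B):
    # Corollary 4.5.4: two NORMAL matrices are unitarily equivalent
    # exactly when they share eigenvalues, counted with multiplicities.
    assert is_normal(A) and is_normal(B)
    return np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                       np.sort_complex(np.linalg.eigvals(B)))

A = np.array([[1, 1], [1, 1]], dtype=complex)
C = np.array([[2, 0], [0, 0]], dtype=complex)
D = np.array([[1, -1j], [1j, 1]], dtype=complex)
# A, C, D are all normal with eigenvalues 0 and 2, hence mutually equivalent
assert normal_unitarily_equivalent(A, C)
assert normal_unitarily_equivalent(A, D)
# B = [[2, 2], [0, 0]] from Exercise 1 is not normal, so the criterion
# does not apply to it at all
assert not is_normal(np.array([[2, 2], [0, 0]], dtype=complex))
```

For non-normal matrices, equal spectra do not suffice, as Example 4.5.6 shows.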
## 4.6 Real Forms In this section, we are going to explain the canonical forms for normal real linear operators that are not necessarily diagonalizable. The idea is to follow the proof of the spectral theorem for complex normal operators. Thus, we use induction on dimension to obtain the desired canonical forms. To get the induction going, we decompose L = A + B, where AB = BA, ![ $$A = \\frac{1} {2}\\left \(L + {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq63.gif) is symmetric and ![ $$B = \\frac{1} {2}\\left \(L - {L}^{{_\\ast}}\\right \)$$ ](A81414_1_En_4_Chapter_IEq64.gif) is skew-symmetric. The spectral theorem can be applied to A so that V has an orthonormal basis of eigenvectors and the eigenspaces for A are B-invariant, since AB = BA. If A ≠ α1 V , then we can find a nontrivial orthogonal decomposition of V that reduces L. In the case when A = α1 V , all subspaces of V are A-invariant. Thus, we use B to find invariant subspaces for L. To find such subspaces, observe that B 2 is symmetric and select an eigenvector/eigenvalue pair B 2 x = λx. Since B maps x to Bx and Bx to B 2 x = λx, the subspace span{x, Bx} is B-invariant. If this subspace is one-dimensional, then x is also an eigenvector for B; otherwise, the subspace is two-dimensional. As these subspaces are contained in the eigenspaces for A, we only need to figure out how B acts on them. In the one-dimensional case, it is spanned by an eigenvector of B. So the only case left to study is when B : M -> M is skew-symmetric and M is two-dimensional with no nontrivial invariant subspaces. In this case, we just select a unit vector x ∈ M and note that Bx ≠ 0 as x would otherwise span a one-dimensional invariant subspace. 
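The decomposition just described can be traced through on a concrete matrix. A NumPy sketch (the sample matrix and the use of NumPy are this illustration's choices, not the text's):

```python
import numpy as np

# A sample real normal operator: a rotation-scaling block
# paired with a real eigenvalue.
L = np.array([[2., 3., 0.],
              [-3., 2., 0.],
              [0., 0., 5.]])
assert np.allclose(L @ L.T, L.T @ L)   # L is normal

A = (L + L.T) / 2                      # symmetric part
B = (L - L.T) / 2                      # skew-symmetric part
assert np.allclose(A, A.T)
assert np.allclose(B, -B.T)
assert np.allclose(A @ B, B @ A)       # A and B commute since L is normal

# B^2 is symmetric; an eigenvector x of B^2 together with Bx spans
# a B-invariant subspace (two-dimensional here)
x = np.array([1., 0., 0.])
assert np.allclose(B @ B @ x, -9. * x)  # B^2 x = -beta^2 x with beta = 3
```

Here A has the eigenspaces span{e 1, e 2} and span{e 3}, and B rotates within the first one, exactly the situation the induction step handles.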
In addition, for all z ∈ V, we have that z and Bz are always perpendicular as ![ $$\\begin{array}{rcl} \\left \(B\\left \(z\\right \)\\vert z\\right \)& =& -\\left \(z\\vert B\\left \(z\\right \)\\right \) \\\\ & =& -\\left \(B\\left \(z\\right \)\\vert z\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ39.gif) In particular, x and Bx ∕ ∥Bx∥ form an orthonormal basis for M. In this basis, the matrix representation for B is ![ $$\\left \[\\begin{array}{cc} B\\left \(x\\right \)&B\\left \(\\frac{B\\left \(x\\right \)} {\\left \\Vert B\\left \(x\\right \)\\right \\Vert }\\right \) \\end{array} \\right \] = \\left \[\\begin{array}{cc} x&\\frac{B\\left \(x\\right \)} {\\left \\Vert B\\left \(x\\right \)\\right \\Vert } \\end{array} \\right \]\\left \[\\begin{array}{cc} 0 &\\gamma\\\\ \\left \\Vert B\\left \(x\\right \)\\right \\Vert & 0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equci.gif) as ![ $$B\\left \(\\frac{B\\left \(x\\right \)} {\\left \\Vert B\\left \(x\\right \)\\right \\Vert }\\right \)$$ ](A81414_1_En_4_Chapter_IEq65.gif) is perpendicular to Bx and hence a multiple of x. Finally, we get that γ = −∥Bx∥ since the matrix has to be skew-symmetric. To complete the analysis, we use Proposition 4.1.7 to observe that the orthogonal complement of span{x, Bx} in ker(A − α1 V ) is also B-invariant. All in all, this shows that V can be decomposed into one- and/or two-dimensional subspaces that are invariant under both A and B. This shows what the canonical form for a real normal operator looks like. Theorem 4.6.1. 
(The Canonical Form for Real Normal Operators) Let L : V -> V be a normal operator on a finite-dimensional real inner product space; then, we can find an orthonormal basis e 1 ,...,e k ,x 1 , y 1 ,...,x l ,y l where k + 2l = n and ![ $$\\begin{array}{rcl} L\\left \({e}_{i}\\right \)& =& {\\lambda }_{i}{e}_{i}, \\\\ L\\left \({x}_{j}\\right \)& =& {\\alpha }_{j}{x}_{j} + {\\beta }_{j}{y}_{j}, \\\\ L\\left \({y}_{j}\\right \)& =& -{\\beta }_{j}{x}_{j} + {\\alpha }_{j}{y}_{j}, \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ40.gif) and ![ $${\\lambda }_{i},{\\alpha }_{j},{\\beta }_{j} \\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_IEq66.gif) Thus, L has the matrix representation ![ $$\\left \[\\begin{array}{ccccccccc} {\\lambda }_{1} & \\cdots & 0 & 0 & 0 &\\cdots &\\cdots & 0 & 0\\\\ \\vdots & \\ddots & \\vdots & \\vdots & \\vdots \\\\ 0 &\\cdots &{\\lambda }_{k}& 0 & 0 &\\cdots\\\\ 0 &\\cdots & 0 &{\\alpha }_{1} & - {\\beta }_{1} & 0 &\\cdots & & \\vdots \\\\ 0 &\\cdots & 0 &{\\beta }_{1} & {\\alpha }_{1} & 0 &\\cdots\\\\ & & & 0 & 0 & \\ddots\\\\ \\vdots & & & & & & \\ddots & 0 & 0 \\\\ & & & & & & 0 &{\\alpha }_{l}& - {\\beta }_{l} \\\\ 0 & & & \\cdots& & &0 &{\\beta }_{l}& {\\alpha }_{l} \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcj.gif) with respect to the basis e 1 ,...,e k ,x 1 , y 1 ,...,x l ,y l. This yields two corollaries for skew-symmetric and orthogonal operators. Corollary 4.6.2. 
(The Canonical Form for Real Skew-Adjoint Operators) Let L : V -> V be a skew-symmetric operator on a finite-dimensional real inner product space, then we can find an orthonormal basis e 1 ,...,e k ,x 1 , y 1 ,...,x l ,y l where k + 2l = n and ![ $$\\begin{array}{rcl} L\\left \({e}_{i}\\right \)& =& 0, \\\\ L\\left \({x}_{j}\\right \)& =& {\\beta }_{j}{y}_{j}, \\\\ L\\left \({y}_{j}\\right \)& =& -{\\beta }_{j}{x}_{j}, \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ41.gif) and ![ $${\\beta }_{j} \\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_IEq67.gif) Thus, L has the matrix representation ![ $$\\left \[\\begin{array}{ccccccccc} 0&\\cdots &0& 0 & 0 &\\cdots &\\cdots & 0 & 0\\\\ \\vdots & \\ddots & \\vdots & \\vdots & \\vdots\\\\ 0 &\\cdots&0 & 0 & 0 &\\cdots\\\\ 0&\\cdots &0& 0 & - {\\beta }_{1} & 0 &\\cdots & & \\vdots \\\\ 0&\\cdots &0&{\\beta }_{1} & 0 & 0 &\\cdots\\\\ & & & 0 & 0 & \\ddots\\\\ \\vdots & & & & & & \\ddots & 0 & 0 \\\\ & & & & & & 0 & 0 & - {\\beta }_{l} \\\\ 0& & & \\cdots& & &0 &{\\beta }_{l}& 0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equck.gif) with respect to the basis e 1 ,...,e k ,x 1 , y 1 ,...,x l ,y l. Corollary 4.6.3. 
(The Canonical Form for Orthogonal Operators) Let O : V -> V be an orthogonal operator, then we can find an orthonormal basis e 1 ,...,e k ,x 1 , y 1 ,...,x l ,y l where k + 2l = n and ![ $$\\begin{array}{rcl} O\\left \({e}_{i}\\right \)& =& \\pm {e}_{i}, \\\\ O\\left \({x}_{j}\\right \)& =& \\cos \\left \({\\theta }_{j}\\right \){x}_{j} +\\sin \\left \({\\theta }_{j}\\right \){y}_{j}, \\\\ O\\left \({y}_{j}\\right \)& =& -\\sin \\left \({\\theta }_{j}\\right \){x}_{j} +\\cos \\left \({\\theta }_{j}\\right \){y}_{j}, \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ42.gif) and ![ $${\\lambda }_{i},{\\alpha }_{j},{\\beta }_{j} \\in\\mathbb{R}.$$ ](A81414_1_En_4_Chapter_IEq68.gif) Thus, L has the matrix representation ![ $$\\left \[\\begin{array}{ccccccccc} \\pm1&\\cdots & 0 & 0 & 0 &\\cdots &\\cdots & 0 & 0\\\\ \\vdots & \\ddots & \\vdots & \\vdots & \\vdots\\\\ 0 &\\cdots& \\pm1 & 0 & 0 &\\cdots\\\\ 0 &\\cdots & 0 &\\cos \\left \({\\theta }_{1}\\right \)& -\\sin \\left \({\\theta }_{1}\\right \)& 0 &\\cdots & & \\vdots \\\\ 0 &\\cdots & 0 &\\sin \\left \({\\theta }_{1}\\right \)& \\cos \\left \({\\theta }_{1}\\right \) & 0 &\\cdots\\\\ & & & 0 & 0 & \\ddots\\\\ \\vdots & & & & & & \\ddots & 0 & 0 \\\\ & & & & & & 0 &\\cos \\left \({\\theta }_{l}\\right \)& -\\sin \\left \({\\theta }_{l}\\right \) \\\\ 0 & & & \\cdots& & &0 &\\sin \\left \({\\theta }_{l}\\right \)& \\cos \\left \({\\theta }_{l}\\right \) \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcl.gif) with respect to the basis e 1 ,..., e k , x 1 , y 1 ,..., x l , y l. Proof. We just need to justify the specific form of the eigenvalues. We know that as a unitary operator, all the eigenvalues look like e iθ. If they are real, they must therefore be ± 1. 
Otherwise, we use Euler's formula e^{iθ} = cos θ + i sin θ to get the desired form since matrices of the form ![ $$\\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcm.gif) have eigenvalues α ± iβ by Example 4.4.9. □ Note that we can artificially group some of the real eigenvalues in the decomposition of the orthogonal operators by using ![ $$\\left \[\\begin{array}{cc} 1&0\\\\ 0 &1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} \\cos 0& -\\sin 0\\\\ \\sin 0 & \\cos 0 \\end{array} \\right \],$$ ](A81414_1_En_4_Chapter_Equcn.gif) ![ $$\\left \[\\begin{array}{cc} - 1& 0\\\\ 0 & -1 \\end{array} \\right \] = \\left \[\\begin{array}{cc} \\cos \\pi & -\\sin \\pi \\\\ \\sin \\pi& \\cos \\pi\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equco.gif) By pairing off as many eigenvectors for ± 1 as possible, we obtain Corollary 4.6.4. Let ![ $$O : {\\mathbb{R}}^{2n} \\rightarrow{\\mathbb{R}}^{2n}$$ ](A81414_1_En_4_Chapter_IEq69.gif) be an orthogonal operator, then we can find an orthonormal basis where L has one of the following two types of matrix representations: Type I: ![ $$\\left \[\\begin{array}{cccccc} \\cos \\left \({\\theta }_{1}\\right \)& -\\sin \\left \({\\theta }_{1}\\right \)&0&\\cdots & 0 & 0 \\\\ \\sin \\left \({\\theta }_{1}\\right \)& \\cos \\left \({\\theta }_{1}\\right \) &0&\\cdots & 0 & 0 \\\\ 0 & 0 & \\ddots\\\\ \\vdots & \\vdots & & \\ddots & 0 & 0 \\\\ 0 & 0 & & 0 &\\cos \\left \({\\theta }_{n}\\right \)& -\\sin \\left \({\\theta }_{n}\\right \) \\\\ 0 & 0 & & 0 &\\sin \\left \({\\theta }_{n}\\right \)& \\cos \\left \({\\theta }_{n}\\right \)\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcp.gif) Type II: ![ $$\\left \[\\begin{array}{cccccccc} - 1&0& 0 & 0 &\\cdots & & 0 & 0\\\\ 0 &1 & 0 & 0 &\\cdots& & 0 & 0 \\\\ 0 &0&\\cos \\left \({\\theta }_{1}\\right \)& -\\sin \\left \({\\theta }_{1}\\right \)& 0 &\\cdots & & \\vdots \\\\ 0 &0&\\sin \\left \({\\theta }_{1}\\right \)& \\cos \\left \({\\theta }_{1}\\right \) & 0 &\\cdots\\\\ & & 0 & 0 & \\ddots\\\\ \\vdots & \\vdots & & & & \\ddots & 0 & 0 \\\\ 0 &0& & & & 0 &\\cos \\left \({\\theta }_{n-1}\\right \)& -\\sin \\left \({\\theta }_{n-1}\\right \) \\\\ 0 &0& \\cdots& & &0 &\\sin \\left \({\\theta }_{n-1}\\right \)& \\cos \\left \({\\theta }_{n-1}\\right \)\\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equcq.gif) Corollary 4.6.5. Let ![ $$O : {\\mathbb{R}}^{2n+1} \\rightarrow{\\mathbb{R}}^{2n+1}$$ ](A81414_1_En_4_Chapter_IEq70.gif) be an orthogonal operator, then we can find an orthonormal basis where L has one of the following two matrix representations: Type I: ![ $$\\left \[\\begin{array}{ccccccc} 1& 0 & 0 &0&\\cdots & 0 & 0 \\\\ 0&\\cos \\left \({\\theta }_{1}\\right \)& -\\sin \\left \({\\theta }_{1}\\right \)&0&\\cdots & & \\vdots \\\\ 0&\\sin \\left \({\\theta }_{1}\\right \)& \\cos \\left \({\\theta }_{1}\\right \) &0&\\cdots\\\\ 0& 0 & 0 & \\ddots\\\\ \\vdots & & & & \\ddots & 0 & 0 \\\\ 0& & & & 0 &\\cos \\left \({\\theta }_{n}\\right \)& -\\sin \\left \({\\theta }_{n}\\right \) \\\\ 0& \\cdots& & &0 &\\sin \\left \({\\theta }_{n}\\right \)& \\cos \\left \({\\theta }_{n}\\right \)\\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equcr.gif) Type II: ![ $$\\text{ }\\left \[\\begin{array}{ccccccc} - 1& 0 & 0 &0&\\cdots & 0 & 0 \\\\ 0 &\\cos \\left \({\\theta }_{1}\\right \)& -\\sin \\left \({\\theta }_{1}\\right \)&0&\\cdots & & \\vdots \\\\ 0 &\\sin \\left \({\\theta }_{1}\\right \)& \\cos \\left \({\\theta }_{1}\\right \) &0&\\cdots\\\\ 0 & 0 & 0 & \\ddots\\\\ \\vdots & & & & \\ddots & 0 & 0 \\\\ 0 & & & & 0 &\\cos \\left \({\\theta }_{n}\\right \)& -\\sin \\left \({\\theta }_{n}\\right \) \\\\ 0 & \\cdots& & &0 &\\sin \\left \({\\theta }_{n}\\right \)& \\cos \\left \({\\theta }_{n}\\right \)\\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equcs.gif) Like with unitary equivalence (see Sect. 4.5), we also have the concept of orthogonal equivalence. 
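Corollaries 4.6.4 and 4.6.5 can be illustrated numerically before moving on. A NumPy sketch with one concrete type I orthogonal matrix (the angle and the use of NumPy are this illustration's choices):

```python
import numpy as np

theta = 0.7
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])   # type I canonical form

assert np.allclose(R.T @ R, np.eye(3))     # orthogonal
ev = np.linalg.eigvals(R)
assert np.allclose(np.abs(ev), 1)          # all eigenvalues are unit scalars
# the complex eigenvalues are exp(+-i theta) = cos(theta) +- i sin(theta)
assert np.isclose(max(ev.imag), np.sin(theta))
assert np.isclose(sorted(ev.real)[0], np.cos(theta))
# type I corresponds to determinant 1
assert np.isclose(np.linalg.det(R), 1.0)
```

Replacing the top-left entry 1 by − 1 flips the determinant to − 1 and produces a type II representative instead.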
One can with the appropriate modifications prove similar results about when matrices are orthogonally equivalent. The above results give us the simplest types of matrices to which real normal, skew-symmetric, and orthogonal operators are orthogonally equivalent. Note that type I operators have the property that − 1 has even multiplicity, while for type II, − 1 has odd multiplicity. In particular, we note that type I is the same as saying that the determinant is 1, while type II means that the determinant is − 1. The collection of orthogonal transformations of type I is denoted SO n . This set is a subgroup of O n , i.e., if A, B ∈ SO n , then AB ∈ SO n . This is not obvious given what we know now, but the proof is quite simple using determinants. ### 4.6.1 Exercises 1. Explain what the canonical form is for real linear maps that are both orthogonal and skew-symmetric. 2. Let L : V -> V be orthogonal on a finite-dimensional real inner product space and assume that dim ker(L + 1 V ) is even. Show that L = K^2 for some orthogonal K. 3. Use the canonical forms to show: (a) If U ∈ U n , then U = exp(A) where A is skew-adjoint. (b) If O ∈ O n is of type I, then O = exp(A) where A is skew-symmetric. 4. Let L : V -> V be skew-symmetric on a real inner product space. Show that L = K^2 for some K. Can you solve this using a skew-symmetric K? 5. Let A ∈ O n . Show that the following conditions are equivalent: (a) A has type I. (b) The product of the real eigenvalues is 1. (c) The product of all real and complex eigenvalues is 1. (d) ![ $$\\dim \\left \(\\ker \\left \(L + {1}_{{\\mathbb{R}}^{n}}\\right \)\\right \)$$ ](A81414_1_En_4_Chapter_IEq71.gif) is even. (e) χ A (t) = t^n + ⋯ + α 1 t + (−1)^n , i.e., the constant term is (−1)^n . 6. Assume that ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{R}\\right \)$$ ](A81414_1_En_4_Chapter_IEq72.gif) satisfies AO = OA for all O ∈ SO n . 
Show that (a) If n = 2, then ![ $$A = \\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equct.gif) (b) If n ≥ 3, then ![ $$A = \\lambda {1}_{{\\mathbb{R}}^{n}}.$$ ](A81414_1_En_4_Chapter_IEq73.gif) 7. Let ![ $$L : {\\mathbb{R}}^{3} \\rightarrow{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq74.gif) be skew-symmetric. (a) Show that there is a unique vector ![ $$w \\in{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq75.gif) such that Lx = w ×x. w is known as the Darboux vector for L. (b) Show that the assignment L -> w gives a linear isomorphism between skew-symmetric 3 ×3 matrices and vectors in ![ $${\\mathbb{R}}^{3}.$$ ](A81414_1_En_4_Chapter_IEq76.gif) (c) Show that if L 1 x = w 1 ×x and L 2 x = w 2 ×x, then the commutator ![ $$\\left \[{L}_{1},{L}_{2}\\right \] = {L}_{1} \\circ{L}_{2} - {L}_{2} \\circ{L}_{1}$$ ](A81414_1_En_4_Chapter_Equcu.gif) satisfies ![ $$\\left \[{L}_{1},{L}_{2}\\right \]\\left \(x\\right \) = \\left \({w}_{1} \\times{w}_{2}\\right \) \\times x$$ ](A81414_1_En_4_Chapter_Equcv.gif) Hint: This is equivalent to proving the so-called Jacobi identity: ![ $$\\left \(x \\times y\\right \) \\times z + \\left \(z \\times x\\right \) \\times y + \\left \(y \\times z\\right \) \\times x = 0.$$ ](A81414_1_En_4_Chapter_Equcw.gif) (d) Show that ![ $$L\\left \(x\\right \) = {w}_{2}\\left \({w}_{1}\\vert x\\right \) - {w}_{1}\\left \({w}_{2}\\vert x\\right \)$$ ](A81414_1_En_4_Chapter_Equcx.gif) is skew-symmetric and that ![ $$\\left \({w}_{1} \\times{w}_{2}\\right \) \\times x = {w}_{2}\\left \({w}_{1}\\vert x\\right \) - {w}_{1}\\left \({w}_{2}\\vert x\\right \).$$ ](A81414_1_En_4_Chapter_Equcy.gif) (e) Conclude that all skew-symmetric ![ $$L : {\\mathbb{R}}^{3} \\rightarrow{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq77.gif) are of the form ![ $$L\\left \(x\\right \) = {w}_{2}\\left \({w}_{1}\\vert x\\right \) - {w}_{1}\\left \({w}_{2}\\vert x\\right \).$$ 
](A81414_1_En_4_Chapter_Equcz.gif) 8. For ![ $${u}_{1},{u}_{2} \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq78.gif): (a) Show that ![ $$L\\left \(x\\right \) = \\left \({u}_{1} \\wedge{u}_{2}\\right \)\\left \(x\\right \) = \\left \({u}_{1}\\vert x\\right \){u}_{2} -\\left \({u}_{2}\\vert x\\right \){u}_{1}$$ ](A81414_1_En_4_Chapter_Equda.gif) defines a skew-symmetric operator. (b) Show that ![ $$\\begin{array}{rcl} {u}_{1} \\wedge{u}_{2}& =& -{u}_{2} \\wedge{u}_{1} \\\\ \\left \(\\alpha {u}_{1} + \\beta {v}_{1}\\right \) \\wedge{u}_{2}& =& \\alpha \\left \({u}_{1} \\wedge{u}_{2}\\right \) + \\beta \\left \({v}_{1} \\wedge{u}_{2}\\right \) \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ43.gif) (c) Show Bianchi's identity: For all ![ $$x,y,z \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq79.gif), we have ![ $$\\left \(x \\wedge y\\right \)\\left \(z\\right \) + \\left \(z \\wedge x\\right \)\\left \(y\\right \) + \\left \(y \\wedge z\\right \)\\left \(x\\right \) = 0.$$ ](A81414_1_En_4_Chapter_Equdb.gif) (d) When n ≥ 4, show that not all skew-symmetric ![ $$L : {\\mathbb{R}}^{n} \\rightarrow{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq80.gif) are of the form Lx = u 1 ∧ u 2. Hint: Let u 1,..., u 4 be linearly independent and consider ![ $$L = {u}_{1} \\wedge{u}_{2} + {u}_{3} \\wedge{u}_{4}.$$ ](A81414_1_En_4_Chapter_Equdc.gif) (e) Show that the skew-symmetric operators e i ∧ e j , where i < j, form a basis for the space of skew-symmetric operators.

## 4.7 Orthogonal Transformations*

In this section, we are going to try to get a better grasp on orthogonal transformations. We start by specializing the above canonical forms for orthogonal transformations to the two situations where things can be visualized, namely, in dimensions two and three. Corollary 4.7.1.
Any orthogonal operator ![ $$O : {\\mathbb{R}}^{2} \\rightarrow{\\mathbb{R}}^{2}$$ ](A81414_1_En_4_Chapter_IEq81.gif) has one of the following two forms in the standard basis: Either it is a rotation by θ and is of the form Type I: ![ $$\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \],$$ ](A81414_1_En_4_Chapter_Equdd.gif) or it is a reflection in the line spanned by (cos α, sin α) and has the form Type II: ![ $$\\text{ }\\left \[\\begin{array}{cc} \\cos \\left \(2\\alpha \\right \)& \\sin \\left \(2\\alpha \\right \)\\\\ \\sin \\left \(2\\alpha\\right \) & -\\cos \\left \(2\\alpha \\right \) \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equde.gif) Moreover, O is a rotation if χ_O(t) = t^2 − (2 cos θ)t + 1, where θ is given by ![ $$\\cos \\theta= \\frac{1} {2}\\mathrm{tr}O,$$ ](A81414_1_En_4_Chapter_IEq82.gif) while O is a reflection if tr O = 0 and χ_O(t) = t^2 − 1. Proof. We know that there is an orthonormal basis x 1, x 2 that puts O into one of the two forms ![ $$\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \],\\left \[\\begin{array}{cc} 1& 0\\\\ 0 & -1 \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equdf.gif) We can write ![ $${x}_{1} = \\left \[\\begin{array}{c} \\cos \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) \\end{array} \\right \],{x}_{2} = \\pm \\left \[\\begin{array}{c} -\\sin \\left \(\\alpha \\right \)\\\\ \\cos \\left \(\\alpha \\right \) \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equdg.gif) The sign on x 2 can have an effect on the matrix representation as we shall see. In the case of the rotation, it means a sign change in the angle; in the reflection case, it does not change the form at all.
To find the form of the matrix in the usual basis, we use the change of basis formula for matrix representations. Before doing this, let us note that the law of exponents ![ $$\\exp \\left \(\\mathit{i}\\left \(\\theta+ \\alpha \\right \)\\right \) =\\exp \\left \(\\mathit{i}\\theta \\right \)\\exp \\left \(\\mathit{i}\\alpha \\right \)$$ ](A81414_1_En_4_Chapter_Equdh.gif) tells us that the corresponding real 2 ×2 matrices satisfy ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& -\\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & \\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \]& \\\\ & \\quad = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha+ \\theta \\right \)& -\\sin \\left \(\\alpha+ \\theta \\right \)\\\\ \\sin \\left \(\\alpha+ \\theta\\right \) & \\cos \\left \(\\alpha+ \\theta\\right \) \\end{array} \\right \]. 
& \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ44.gif) Thus, ![ $$\\begin{array}{rcl} O& = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& -\\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & \\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \) &\\sin \\left \(\\alpha \\right \)\\\\ -\\sin \\left \(\\alpha \\right \) &\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& -\\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & \\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(-\\alpha \\right \)& -\\sin \\left \(-\\alpha \\right \)\\\\ \\sin \\left \(-\\alpha\\right \) & \\cos \\left \(-\\alpha\\right \) \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \] & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ45.gif) as expected. 
If x 2 is changed to − x 2, we have ![ $$\\begin{array}{rcl} O& = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& \\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & -\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\theta \\right \)& -\\sin \\left \(\\theta \\right \)\\\\ \\sin \\left \(\\theta \\right \) & \\cos \\left \(\\theta \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& \\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & -\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& \\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & -\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(-\\theta \\right \) &\\sin \\left \(-\\theta \\right \)\\\\ -\\sin \\left \(-\\theta\\right \) &\\cos \\left \(-\\theta\\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& \\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & -\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha- \\theta \\right \)& \\sin \\left \(\\alpha- \\theta \\right \)\\\\ \\sin \\left \(\\alpha- \\theta\\right \) & -\\cos \\left \(\\alpha- \\theta \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(-\\alpha \\right \) & -\\sin \\left \(-\\alpha \\right \)\\\\ -\\sin \\left \(-\\alpha\\right \) & -\\cos \\left \(-\\alpha\\right \) \\end{array} \\right \] & \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \(-\\theta \\right \)& -\\sin \\left \(-\\theta \\right \)\\\\ \\sin \\left \(-\\theta\\right \) & \\cos \\left \(-\\theta\\right \) \\end{array} \\right \]. 
& \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ46.gif) Finally, the reflection has the form ![ $$\\begin{array}{rcl} O& =& \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& -\\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & \\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} 1& 0\\\\ 0 & -1 \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \) &\\sin \\left \(\\alpha \\right \)\\\\ -\\sin \\left \(\\alpha \\right \) &\\cos \\left \(\\alpha \\right \) \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \)& \\sin \\left \(\\alpha \\right \)\\\\ \\sin \\left \(\\alpha \\right \) & -\\cos \\left \(\\alpha \\right \) \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\left \(\\alpha \\right \) &\\sin \\left \(\\alpha \\right \)\\\\ -\\sin \\left \(\\alpha \\right \) &\\cos \\left \(\\alpha \\right \) \\end{array} \\right \] \\\\ & =& \\left \[\\begin{array}{cc} \\cos \\left \(2\\alpha \\right \)& \\sin \\left \(2\\alpha \\right \)\\\\ \\sin \\left \(2\\alpha\\right \) & -\\cos \\left \(2\\alpha \\right \) \\end{array} \\right \]. \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ47.gif) □ Note that there is clearly an ambiguity in what it should mean to be a rotation by θ as either of the two matrices ![ $$\\left \[\\begin{array}{cc} \\cos \\left \(\\pm \\theta \\right \)& -\\sin \\left \(\\pm \\theta \\right \)\\\\ \\sin \\left \(\\pm \\theta\\right \) & \\cos \\left \(\\pm \\theta\\right \) \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equdi.gif) describe such a rotation. What is more, the same orthogonal transformation can have different canonical forms depending on what basis we choose as we just saw in the proof of the above theorem. 
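The classification in Corollary 4.7.1 translates directly into a small numerical test. The sketch below (assuming numpy; `classify_2d` is our name, not the text's) distinguishes the two types by their matrix patterns and recovers the rotation angle θ, respectively the mirror angle α:

```python
import numpy as np

def classify_2d(O, tol=1e-9):
    """Classify a 2x2 orthogonal matrix as in Corollary 4.7.1.

    Returns ('rotation', theta), where cos(theta) = tr(O)/2, or
    ('reflection', alpha), where the mirror line is spanned by
    (cos(alpha), sin(alpha))."""
    assert np.allclose(O.T @ O, np.eye(2), atol=tol), "O must be orthogonal"
    if abs(O[0, 0] - O[1, 1]) < tol and abs(O[0, 1] + O[1, 0]) < tol:
        # type I pattern [[cos t, -sin t], [sin t, cos t]]
        return "rotation", np.arctan2(O[1, 0], O[0, 0])
    # type II pattern [[cos 2a, sin 2a], [sin 2a, -cos 2a]]
    return "reflection", 0.5 * np.arctan2(O[1, 0], O[0, 0])

# rotation by pi/2
kind_r, theta = classify_2d(np.array([[0.0, -1.0], [1.0, 0.0]]))
# reflection in the line spanned by (cos(pi/6), sin(pi/6))
a = np.pi / 6
kind_f, alpha = classify_2d(np.array([[np.cos(2 * a), np.sin(2 * a)],
                                      [np.sin(2 * a), -np.cos(2 * a)]]))
```

Note the ambiguity just discussed: after an orthogonal change of basis that flips the sign of x 2, the recovered angle changes sign as well.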
Unfortunately, it is not possible to sort this out without being very careful about the choice of basis; specifically, one needs the additional concept of orientation, which in turn uses determinants. We now turn to the three-dimensional situation. Corollary 4.7.2. Any orthogonal operator ![ $$O : {\\mathbb{R}}^{3} \\rightarrow{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq83.gif) is either Type I: It is a rotation in the plane that is perpendicular to the line representing the + 1 eigenspace. Type II: It is a rotation in the plane that is perpendicular to the − 1 eigenspace followed by a reflection in that plane, corresponding to multiplying by − 1 in the − 1 eigenspace. As in the two-dimensional situation, we can also discover which case we are in by calculating the characteristic polynomial. For a rotation O about an axis, we have ![ $$\\begin{array}{rcl}{ \\chi }_{O}\\left \(t\\right \)& =& \\left \(t - 1\\right \)\\left \({t}^{2} -\\left \(2\\cos \\theta \\right \)t + 1\\right \) \\\\ & =& {t}^{3} -\\left \(1 + 2\\cos \\theta \\right \){t}^{2} + \\left \(1 + 2\\cos \\theta \\right \)t - 1 \\\\ & =& {t}^{3} -\\left \(\\mathrm{tr}O\\right \){t}^{2} + \\left \(\\mathrm{tr}O\\right \)t - 1, \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ48.gif) while in the case involving a reflection, ![ $$\\begin{array}{rcl}{ \\chi }_{O}\\left \(t\\right \)& =& \\left \(t + 1\\right \)\\left \({t}^{2} -\\left \(2\\cos \\theta \\right \)t + 1\\right \) \\\\ & =& {t}^{3} -\\left \(-1 + 2\\cos \\theta \\right \){t}^{2} -\\left \(-1 + 2\\cos \\theta \\right \)t + 1 \\\\ & =& {t}^{3} -\\left \(\\mathrm{tr}O\\right \){t}^{2} -\\left \(\\mathrm{tr}O\\right \)t +1. \\end{array}$$ ](A81414_1_En_4_Chapter_Equ49.gif) Example 4.7.3. Imagine a cube that is centered at the origin and so that the edges and sides are parallel to the coordinate axes and planes.
We note that all of the orthogonal transformations that either reflect in a coordinate plane or form 90°, 180°, and 270° rotations around the coordinate axes are symmetries of the cube. Thus, the cube is mapped to itself via each of these isometries. In fact, the collection of all isometries that preserve the cube in this fashion is a (finite) group. It is evidently a subgroup of O 3. There are more symmetries than those already mentioned, namely, if we pick two antipodal vertices, then we can rotate the cube into itself by 120° and 240° rotations around the line going through these two points. What is perhaps even more surprising is that these rotations can be obtained by composing the already mentioned 90° rotations. To see this, let ![ $${O}_{x} = \\left \[\\begin{array}{ccc} 1&0& 0\\\\ 0 &0 & -1 \\\\ 0&1& 0 \\end{array} \\right \],{O}_{y} = \\left \[\\begin{array}{ccc} 0&0& - 1\\\\ 0 &1 & 0 \\\\ 1&0& 0 \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equdj.gif) be 90° rotations around the x- and y-axes, respectively. Then, ![ $$\\begin{array}{rcl}{ O}_{x}{O}_{y}& =& \\left \[\\begin{array}{ccc} 1&0& 0\\\\ 0 &0 & -1 \\\\ 0&1& 0 \\end{array} \\right \]\\left \[\\begin{array}{ccc} 0&0& - 1\\\\ 0 &1 & 0 \\\\ 1&0& 0 \\end{array} \\right \] = \\left \[\\begin{array}{ccc} 0 &0& - 1\\\\ - 1 &0 & 0 \\\\ 0 &1& 0 \\end{array} \\right \], \\\\ {O}_{y}{O}_{x}& =& \\left \[\\begin{array}{ccc} 0&0& - 1\\\\ 0 &1 & 0 \\\\ 1&0& 0 \\end{array} \\right \]\\left \[\\begin{array}{ccc} 1&0& 0\\\\ 0 &0 & -1 \\\\ 0&1& 0 \\end{array} \\right \] = \\left \[\\begin{array}{ccc} 0& - 1& 0\\\\ 0 & 0 & -1 \\\\ 1& 0 & 0 \\end{array} \\right \],\\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ50.gif) so we see that these two rotations do not commute. We now compute the (complex) eigenvalues via the characteristic polynomials in order to figure out what these new isometries look like.
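Before doing the computation by hand, the claims about these composed symmetries can be checked numerically (a sketch assuming numpy; the variable names are ours):

```python
import numpy as np

Ox = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]])
Oy = np.array([[0, 0, -1],
               [0, 1, 0],
               [1, 0, 0]])

OxOy = Ox @ Oy
OyOx = Oy @ Ox

# the two products differ, so the 90-degree rotations do not commute
assert not np.array_equal(OxOy, OyOx)
# trace 0 means 1 + 2cos(theta) = 0, a 120-degree rotation;
# equivalently, applying the product three times gives the identity
assert np.trace(OxOy) == 0
assert np.array_equal(np.linalg.matrix_power(OxOy, 3), np.eye(3, dtype=int))
# (1, -1, -1) spans the rotation axis of OxOy
v = np.array([1, -1, -1])
assert np.array_equal(OxOy @ v, v)
```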
Since both matrices have zero trace, they have characteristic polynomial ![ $$\\chi \\left \(t\\right \) = {t}^{3} - 1.$$ ](A81414_1_En_4_Chapter_Equdk.gif) Thus, they describe rotations where ![ $$\\begin{array}{rcl} \\mathrm{tr}\\left \(O\\right \)& =& 1 + 2\\cos \\left \(\\theta \\right \) = 0\\text{ or} \\\\ \\theta & =& \\pm \\frac{2\\pi } {3} \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ51.gif) around the axis that corresponds to the 1 eigenvector. For O x O y , we have that 1, − 1, − 1 is an eigenvector for 1, while for O y O x , we have − 1, 1, − 1. These two eigenvectors describe the directions for two different diagonals in the cube. Completing, say, 1, − 1, − 1 to an orthonormal basis for ![ $${\\mathbb{R}}^{3},$$ ](A81414_1_En_4_Chapter_IEq84.gif) then tells us that ![ $$\\begin{array}{rcl}{ O}_{x}{O}_{y}& = \\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{-1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & 0 & \\frac{2} {\\sqrt{6}} \\end{array} \\right \]\\left \[\\begin{array}{lll} 1&0 &0 \\\\ 0&\\cos \\left \(\\pm \\frac{2\\pi } {3} \\right \)& -\\sin \\left \(\\pm \\frac{2\\pi } {3} \\right \) \\\\ 0&\\sin \\left \(\\pm \\frac{2\\pi } {3} \\right \)&\\cos \\left \(\\pm \\frac{2\\pi } {3} \\right \) \\end{array} \\right \]\\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} \\\\ \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{2}} & 0 \\\\ \\frac{1} {\\sqrt{6}} & \\frac{-1} {\\sqrt{6}} & \\frac{2} {\\sqrt{6}} \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{-1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & 0 & \\frac{2} {\\sqrt{6}} \\end{array} \\right \]\\left \[\\begin{array}{lll} 1&0 &0 \\\\ 0& -\\frac{1} {2} & \\mp \\frac{\\sqrt{3}} {2} \\\\ 0& \\pm 
\\frac{\\sqrt{3}} {2} & -\\frac{1} {2} \\end{array} \\right \]\\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} \\\\ \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{2}} & 0 \\\\ \\frac{1} {\\sqrt{6}} & \\frac{-1} {\\sqrt{6}} & \\frac{2} {\\sqrt{6}} \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{-1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & 0 & \\frac{2} {\\sqrt{6}} \\end{array} \\right \]\\left \[\\begin{array}{lll} 1&0 &0 \\\\ 0& -\\frac{1} {2} & -\\frac{\\sqrt{3}} {2} \\\\ 0&\\frac{\\sqrt{3}} {2} & -\\frac{1} {2} \\end{array} \\right \]\\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} \\\\ \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{2}} & 0 \\\\ \\frac{1} {\\sqrt{6}} & \\frac{-1} {\\sqrt{6}} & \\frac{2} {\\sqrt{6}} \\end{array} \\right \].& \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ52.gif) The fact that we pick + rather than − depends on our orthonormal basis as we can see by changing the basis by a sign in the last column: ![ $${O}_{x}{O}_{y} = \\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{-1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{6}} \\\\ \\frac{-1} {\\sqrt{3}} & 0 &\\frac{-2} {\\sqrt{6}} \\end{array} \\right \]\\left \[\\begin{array}{lll} 1&0 &0 \\\\ 0& -\\frac{1} {2} & \\frac{\\sqrt{3}} {2} \\\\ 0& -\\frac{\\sqrt{3}} {2} & -\\frac{1} {2} \\end{array} \\right \]\\left \[\\begin{array}{lll} \\frac{1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} & \\frac{-1} {\\sqrt{3}} \\\\ \\frac{1} {\\sqrt{2}} & \\frac{1} {\\sqrt{2}} & 0 \\\\ \\frac{-1} {\\sqrt{6}} & \\frac{1} {\\sqrt{6}} & \\frac{-2} {\\sqrt{6}} \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equdl.gif) We are now ready to discuss how the two types of orthogonal transformations interact 
with each other when multiplied. Let us start with the two-dimensional situation. One can directly verify that ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cc} \\cos {\\theta }_{1} & -\\sin {\\theta }_{1} \\\\ \\sin {\\theta }_{1} & \\cos {\\theta }_{1} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos {\\theta }_{2} & -\\sin {\\theta }_{2} \\\\ \\sin {\\theta }_{2} & \\cos {\\theta }_{2} \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \({\\theta }_{1} + {\\theta }_{2}\\right \)& -\\sin \\left \({\\theta }_{1} + {\\theta }_{2}\\right \) \\\\ \\sin \\left \({\\theta }_{1} + {\\theta }_{2}\\right \)& \\cos \\left \({\\theta }_{1} + {\\theta }_{2}\\right \) \\end{array} \\right \], & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ53.gif) ![ $$\\left \[\\begin{array}{cc} \\cos \\theta & -\\sin \\theta \\\\ \\sin \\theta& \\cos \\theta\\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\alpha & \\sin \\alpha \\\\ \\sin \\alpha& -\\cos \\alpha\\end{array} \\right \] = \\left \[\\begin{array}{cc} \\cos \\left \(\\theta+ \\alpha \\right \)& \\sin \\left \(\\theta+ \\alpha \\right \)\\\\ \\sin \\left \(\\theta+ \\alpha\\right \) & -\\cos \\left \(\\theta+ \\alpha \\right \) \\end{array} \\right \],$$ ](A81414_1_En_4_Chapter_Equdm.gif) ![ $$\\left \[\\begin{array}{cc} \\cos \\alpha & \\sin \\alpha \\\\ \\sin \\alpha& -\\cos \\alpha\\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos \\theta & -\\sin \\theta \\\\ \\sin \\theta& \\cos \\theta\\end{array} \\right \] = \\left \[\\begin{array}{cc} \\cos \\left \(\\alpha- \\theta \\right \)& \\sin \\left \(\\alpha- \\theta \\right \)\\\\ \\sin \\left \(\\alpha- \\theta\\right \) & -\\cos \\left \(\\alpha- \\theta \\right \) \\end{array} \\right \],$$ ](A81414_1_En_4_Chapter_Equdn.gif) ![ $$\\begin{array}{rcl} & \\left \[\\begin{array}{cc} \\cos {\\alpha }_{1} & \\sin {\\alpha }_{1} \\\\ \\sin {\\alpha }_{1} & -\\cos {\\alpha }_{1} \\end{array} \\right \]\\left \[\\begin{array}{cc} 
\\cos {\\alpha }_{2} & \\sin {\\alpha }_{2} \\\\ \\sin {\\alpha }_{2} & -\\cos {\\alpha }_{2} \\end{array} \\right \]& \\\\ & = \\left \[\\begin{array}{cc} \\cos \\left \({\\alpha }_{1} - {\\alpha }_{2}\\right \)& -\\sin \\left \({\\alpha }_{1} - {\\alpha }_{2}\\right \) \\\\ \\sin \\left \({\\alpha }_{1} - {\\alpha }_{2}\\right \)& \\cos \\left \({\\alpha }_{1} - {\\alpha }_{2}\\right \) \\end{array} \\right \]. & \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ54.gif) Thus, we see that if the transformations are of the same type, their product has type I, while if they have different type, their product has type II. This is analogous to multiplying positive and negative numbers. This result actually holds in all dimensions and has a very simple proof using determinants. Euler was the first to observe this phenomenon in the three-dimensional case. What we are going to look into here is the observation that any rotation (type I) in O 2 is a product of two reflections. More specifically, if θ = α1 − α2, then the above calculation shows that ![ $$\\left \[\\begin{array}{cc} \\cos \\theta & -\\sin \\theta \\\\ \\sin \\theta& \\cos \\theta\\end{array} \\right \] = \\left \[\\begin{array}{cc} \\cos {\\alpha }_{1} & \\sin {\\alpha }_{1} \\\\ \\sin {\\alpha }_{1} & -\\cos {\\alpha }_{1} \\end{array} \\right \]\\left \[\\begin{array}{cc} \\cos {\\alpha }_{2} & \\sin {\\alpha }_{2} \\\\ \\sin {\\alpha }_{2} & -\\cos {\\alpha }_{2} \\end{array} \\right \].$$ ](A81414_1_En_4_Chapter_Equdo.gif) Definition 4.7.4. To pave the way for a higher dimensional analogue of this, we define A ∈ O n to be a reflection if it has the canonical form ![ $$A = O\\left \[\\begin{array}{cccc} - 1&0&&0\\\\ 0 &1\\\\ & &\\ddots \\\\ 0 & &&1 \\end{array} \\right \]{O}^{{_\\ast}}.$$ ](A81414_1_En_4_Chapter_Equdp.gif) This implies that BAB ∗ is also a reflection for all B ∈ O n . 
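The closing observation, that BAB∗ is again a reflection, can be illustrated numerically. The sketch below assumes numpy; sampling an orthogonal B from the QR factorization of a random matrix is just one convenient choice, not something the text prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)

# the canonical reflection: -1 in one direction, +1 on its orthogonal complement
A = np.diag([-1.0, 1.0, 1.0, 1.0])

# a random orthogonal B from a QR factorization
B, _ = np.linalg.qr(rng.standard_normal((4, 4)))

R = B @ A @ B.T
# B A B^T is again orthogonal and symmetric with eigenvalues {-1, 1, 1, 1},
# i.e. a reflection in the hyperplane orthogonal to B(e1)
assert np.allclose(R.T @ R, np.eye(4))
assert np.allclose(R, R.T)
assert np.allclose(np.linalg.eigvalsh(R), [-1, 1, 1, 1])
```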
To get a better picture of what A does, we note that the − 1 eigenvector gives the reflection in the hyperplane spanned by the n − 1-dimensional + 1 eigenspace. If z is a unit eigenvector for − 1, then we can write A in the following way: ![ $$A\\left \(x\\right \) = {R}_{z}\\left \(x\\right \) = x - 2\\left \(x\\vert z\\right \)z.$$ ](A81414_1_En_4_Chapter_Equdq.gif) To see why this is true, first note that if x is an eigenvector for + 1, then it is perpendicular to z, and hence, ![ $$x - 2\\left \(x\\vert z\\right \)z = x.$$ ](A81414_1_En_4_Chapter_Equdr.gif) In case x = z, we have ![ $$\\begin{array}{rcl} z - 2\\left \(z\\vert z\\right \)z& =& z - 2z \\\\ & =& -z \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ55.gif) as desired. We can now prove an interesting and important lemma. Lemma 4.7.5. (E. Cartan) Let A ∈ O n . If A has type I, then A is a product of an even number of reflections, while if A has type II, then it is a product of an odd number of reflections. Proof. A very simple alternate proof can be found in the exercises. The canonical form for A can be expressed as follows: ![ $$A = O{I}_{\\pm }{R}_{1}\\cdots {R}_{l}{O}^{{_\\ast}},$$ ](A81414_1_En_4_Chapter_Equds.gif) where O is the orthogonal change of basis matrix, each R i corresponds to a rotation on a two-dimensional subspace M i , and ![ $${I}_{\\pm } = \\left \[\\begin{array}{cccc} \\pm1&0&&0\\\\ 0 &1\\\\ & &\\ddots \\\\ 0 & &&1 \\end{array} \\right \],$$ ](A81414_1_En_4_Chapter_Equdt.gif) where + is used for type I and − is used for type II. The above two-dimensional construction shows that each rotation is a product of two reflections on M i . If we extend these two-dimensional reflections to be the identity on M i ⊥ , then they become reflections on the whole space. 
Thus, we have ![ $$A = O{I}_{\\pm }\\left \({A}_{1}{B}_{1}\\right \)\\cdots \\left \({A}_{l}{B}_{l}\\right \){O}^{{_\\ast}},$$ ](A81414_1_En_4_Chapter_Equdu.gif) where I ± is either the identity or a reflection and A 1, B 1,..., A l , B l are all reflections. Finally, ![ $$\\begin{array}{rcl} A& =& O{I}_{\\pm }\\left \({A}_{1}{B}_{1}\\right \)\\cdots \\left \({A}_{l}{B}_{l}\\right \){O}^{{_\\ast}} \\\\ & =& \\left \(O{I}_{\\pm }{O}^{{_\\ast}}\\right \)\\left \(O{A}_{ 1}{O}^{{_\\ast}}\\right \)\\left \(O{B}_{ 1}{O}^{{_\\ast}}\\right \)\\cdots \\left \(O{A}_{ l}{O}^{{_\\ast}}\\right \)\\left \(O{B}_{ l}{O}^{{_\\ast}}\\right \).\\end{array}$$ ](A81414_1_En_4_Chapter_Equ56.gif) This proves the claim. □ Remark 4.7.6. The converse to this lemma is also true, namely, that any even number of reflections composes to a type I orthogonal transformation, while an odd number yields one of type II. The proof of this fact is very simple if one uses determinants.

### 4.7.1 Exercises

1. Decide the type and find the rotation angle and/or line of reflection for each of the matrices ![ $$\\begin{array}{rcl} & & \\left \[\\begin{array}{cc} \\frac{1} {2} & \\frac{\\sqrt{3}} {2} \\\\ -\\frac{\\sqrt{3}} {2} & \\frac{1} {2}\\end{array} \\right \], \\\\ & & \\left \[\\begin{array}{cc} \\frac{1} {2} & \\frac{\\sqrt{3}} {2} \\\\ \\frac{\\sqrt{3}} {2} & -\\frac{1} {2}\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ57.gif) 2.
Decide on the type, the ± 1 eigenvector, and the possible rotation angles on the orthogonal complement of the ± 1 eigenvector for the matrices: ![ $$\\begin{array}{rcl} & & \\left \[\\begin{array}{ccc} -\\dfrac{1} {3} & -\\dfrac{2} {3} & -\\dfrac{2} {3} \\\\ -\\dfrac{2} {3} & -\\dfrac{1} {3} & \\dfrac{2} {3} \\\\ -\\dfrac{2} {3} & \\dfrac{2} {3} & -\\dfrac{1} {3}\\end{array} \\right \], \\\\ & & \\left \[\\begin{array}{ccc} 0& 0 &1\\\\ 0 & - 1 &0 \\\\ 1& 0 &0\\end{array} \\right \], \\\\ & & \\left \[\\begin{array}{ccc} \\dfrac{2} {3} & -\\dfrac{2} {3} &\\dfrac{1} {3} \\\\ -\\dfrac{2} {3} & -\\dfrac{1} {3} &\\dfrac{2} {3} \\\\ \\dfrac{1} {3} & \\dfrac{2} {3} &\\dfrac{2} {3}\\end{array} \\right \], \\\\ & & \\left \[\\begin{array}{ccc} \\dfrac{1} {3} & \\dfrac{2} {3} & \\dfrac{2} {3} \\\\ \\dfrac{2} {3} & -\\dfrac{2} {3} & \\dfrac{1} {3} \\\\ \\dfrac{2} {3} & \\dfrac{1} {3} & -\\dfrac{2} {3}\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ58.gif)

3. Write the matrices from Exercises 1 and 2 as products of reflections.

4. Let O ∈ O 3 and assume we have a Darboux vector u ∈ ![ $${\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq85.gif) such that for all ![ $$x \\in{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq86.gif), ![ $$\\frac{1} {2}\\left \(O - {O}^{t}\\right \)\\left \(x\\right \) = u \\times x.$$ ](A81414_1_En_4_Chapter_Equdv.gif) (See also Exercise 7 in Sect. 4.6). (a) Show that u determines the axis of rotation by showing that Ou = ± u. (b) Show that the rotation angle θ is determined by sin θ = ‖u‖. (c) Show that for any O ∈ O 3, we can find a Darboux vector ![ $$u \\in{\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq87.gif) such that the above formula holds.

5.
(Euler) Define the rotations around the three coordinate axes in ![ $${\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq88.gif) by ![ $$\\begin{array}{rcl}{ O}_{x}\\left \(\\alpha \\right \)& =& \\left \[\\begin{array}{ccc} 1& 0 & 0\\\\ 0 &\\cos \\alpha& -\\sin \\alpha\\\\ 0&\\sin \\alpha & \\cos \\alpha \\end{array} \\right \], \\\\ {O}_{y}\\left \(\\beta \\right \)& =& \\left \[\\begin{array}{ccc} \\cos \\beta&0& -\\sin \\beta\\\\ 0 &1& 0\\\\ \\sin \\beta&0 & \\cos \\beta\\end{array} \\right \], \\\\ {O}_{z}\\left \(\\gamma \\right \)& =& \\left \[\\begin{array}{ccc} \\cos \\gamma& -\\sin \\gamma&0 \\\\ \\sin \\gamma& \\cos \\gamma&0 \\\\ 0 & 0 &1\\end{array} \\right \].\\end{array}$$ ](A81414_1_En_4_Chapter_Equ59.gif) (a) Show that any O ∈ SO_3 is of the form O = O_x(α)O_y(β)O_z(γ). The angles α, β, γ are called the Euler angles for O. Hint: ![ $${O}_{x}\\left \(\\alpha \\right \){O}_{y}\\left \(\\beta \\right \){O}_{z}\\left \(\\gamma \\right \) = \\left \[\\begin{array}{ccc} \\cos \\beta \\cos \\gamma& -\\cos \\beta \\sin \\gamma& -\\sin \\beta\\\\ & & -\\sin \\alpha \\cos \\beta\\\\ & & \\cos \\alpha \\sin \\beta \\end{array} \\right \]$$ ](A81414_1_En_4_Chapter_Equdw.gif) (b) Show that O_x(α)O_y(β)O_z(γ) ∈ SO_3 for all α, β, γ. (c) Show that if O 1, O 2 ∈ SO_3, then also O 1 O 2 ∈ SO_3.

6. Find the matrix representations with respect to the canonical basis for ![ $${\\mathbb{R}}^{3}$$ ](A81414_1_En_4_Chapter_IEq89.gif) for all of the orthogonal matrices that describe a rotation by θ in span{(1, 1, 0), (1, 2, 1)}.

7. Show, without using canonical forms or Cartan's lemma, that if O ∈ O n , then O is a composition of at most n reflections. Hint: For ![ $$x \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq90.gif), select a reflection R that takes x to Ox. Then, show that RO fixes x, and conclude that RO also maps the orthogonal complement of x to itself.

8.
Let ![ $$z \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq91.gif) be a unit vector and ![ $${R}_{z}\\left \(x\\right \) = x - 2\\left \(x\\vert z\\right \)z$$ ](A81414_1_En_4_Chapter_Equdx.gif) the reflection in the hyperplane perpendicular to z. (a) Show that ![ $$\\begin{array}{rcl} {R}_{z}& =& {R}_{-z}, \\\\ {\\left \({R}_{z}\\right \)}^{-1}& =& {R}_{ z}.\\end{array}$$ ](A81414_1_En_4_Chapter_Equ60.gif) (b) If ![ $$y,z \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_4_Chapter_IEq92.gif) are linearly independent unit vectors, then show that R y R z ∈ O n is a rotation on M = span{y, z} and the identity on M ⊥ . (c) Show that the angle θ of rotation is given by the relationship ![ $$\\begin{array}{rcl} \\cos \\theta & =& -1 + 2{\\left \\vert \\left \(y\\vert z\\right \)\\right \\vert }^{2} \\\\ & =& \\cos \\left \(2\\psi \\right \), \\\\ \\end{array}$$ ](A81414_1_En_4_Chapter_Equ61.gif) where (y | z) = cos ψ.

9. Let S n denote the group of permutations. These are the bijective maps from {1, 2,..., n} to itself. The group product is composition, and inverses are the inverse maps. Show that the map defined by sending σ ∈ S n to the permutation matrix O_σ defined by O_σ e_i = e_{σ(i)} is a group homomorphism ![ $${S}_{n} \\rightarrow{O}_{n},$$ ](A81414_1_En_4_Chapter_Equdy.gif) i.e., show O_σ ∈ O n and O_{σ ∘ τ} = O_σ ∘ O_τ. (See also Example 1.7.7.)

10. Let A ∈ O 4. (a) Show that we can find a two-dimensional subspace ![ $$M \\subset{\\mathbb{R}}^{4}$$ ](A81414_1_En_4_Chapter_IEq93.gif) such that M and M ⊥ are both invariant under A. (b) Show that we can choose M so that ![ $$A{\\vert }_{{M}^{\\perp }}$$ ](A81414_1_En_4_Chapter_IEq94.gif) is a rotation, and that A | M is a rotation precisely when A has type I, while A | M is a reflection when A has type II.
(c) Show that if $A$ has type I, then

$$\begin{aligned} \chi_A(t) &= t^4 - 2(\cos\theta_1 + \cos\theta_2)t^3 + (2 + 4\cos\theta_1\cos\theta_2)t^2 - 2(\cos\theta_1 + \cos\theta_2)t + 1\\ &= t^4 - (\mathrm{tr}\,A)\,t^3 + \left(2 + \mathrm{tr}(A|_M)\,\mathrm{tr}(A|_{M^\perp})\right)t^2 - (\mathrm{tr}\,A)\,t + 1, \end{aligned}$$

where $\mathrm{tr}\,A = \mathrm{tr}(A|_M) + \mathrm{tr}(A|_{M^\perp})$.

(d) Show that if $A$ has type II, then

$$\begin{aligned} \chi_A(t) &= t^4 - (2\cos\theta)\,t^3 + (2\cos\theta)\,t - 1\\ &= t^4 - (\mathrm{tr}\,A)\,t^3 + (\mathrm{tr}\,A)\,t - 1\\ &= t^4 - \mathrm{tr}(A|_{M^\perp})\,t^3 + \mathrm{tr}(A|_{M^\perp})\,t - 1. \end{aligned}$$

## 4.8 Triangulability*

There is a result that gives a simple form for general complex linear maps in an orthonormal basis. This is a sort of consolation prize for operators without any special properties relating to the inner product structure. In Sects. 4.9 and 4.10 on the singular value decomposition and the polar decomposition, we shall encounter some other simplified forms for general linear maps between inner product spaces.

Theorem 4.8.1.
(Schur's Theorem) Let $L : V \to V$ be a linear operator on a finite-dimensional complex inner product space. It is possible to find an orthonormal basis $e_1, \ldots, e_n$ such that the matrix representation of $L$ is upper triangular in this basis, i.e.,

$$L = \begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix}\left[L\right]\begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix}^\ast = \begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix}\begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n}\\ 0 & \alpha_{22} & \cdots & \alpha_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \alpha_{nn} \end{bmatrix}\begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix}^\ast.$$

Before discussing how to prove this result, let us consider a few examples.

Example 4.8.2. Note that

$$\begin{bmatrix} 1 & 1\\ 0 & 2 \end{bmatrix},\qquad \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}$$

are both in the desired form. The former matrix is diagonalizable, but not with respect to an orthonormal basis. So within that framework, we cannot improve its canonical form. The latter matrix is not diagonalizable, so there is nothing else to discuss.

Example 4.8.3. Any $2 \times 2$ matrix $A$ can be put into upper triangular form by finding an eigenvector $e_1$ and then selecting $e_2$ to be orthogonal to $e_1$. Specifically,

$$\begin{bmatrix} Ae_1 & Ae_2 \end{bmatrix} = \begin{bmatrix} e_1 & e_2 \end{bmatrix}\begin{bmatrix} \lambda & \beta\\ 0 & \gamma \end{bmatrix}.$$

Proof.
(of Schur's theorem) Note that if we have the desired form

$$\begin{bmatrix} L(e_1) & \cdots & L(e_n) \end{bmatrix} = \begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix}\begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n}\\ 0 & \alpha_{22} & \cdots & \alpha_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \alpha_{nn} \end{bmatrix},$$

then we can construct a flag of invariant subspaces

$$\{0\} \subset V_1 \subset V_2 \subset \cdots \subset V_{n-1} \subset V,$$

where $\dim V_k = k$ and $L(V_k) \subset V_k$, defined by $V_k = \mathrm{span}\{e_1, \ldots, e_k\}$. Conversely, given such a flag of subspaces, we can find the orthonormal basis by selecting unit vectors $e_k \in V_k \cap V_{k-1}^\perp$.

In order to exhibit such a flag, we use an induction argument along the lines of what we did when proving the spectral theorems for self-adjoint and normal operators (Theorems 4.3.4 and 4.4.6). In this case, the proof of Schur's theorem is reduced to showing that any complex linear map has an invariant subspace of dimension $\dim V - 1$. To see why this is true, consider the adjoint $L^\ast : V \to V$ and select an eigenvalue/eigenvector pair $L^\ast y = \mu y$ (note that in order to find an eigenvalue, we must invoke the Fundamental Theorem of Algebra 2.1.8). Then, define $V_{n-1} = y^\perp = \{x \in V : (x|y) = 0\}$ and note that for $x \in V_{n-1}$, we have

$$(L(x)|y) = (x|L^\ast(y)) = (x|\mu y) = \bar{\mu}(x|y) = 0.$$

Thus, $V_{n-1}$ is $L$-invariant. □

Example 4.8.4.
Let

$$A = \begin{bmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 1 & 1 & 0 \end{bmatrix}.$$

To find the basis that puts $A$ into upper triangular form, we can always use an eigenvector $e_1$ for $A$ as the first vector. To use the induction, we need one for $A^\ast$ as well. Note, however, that if $Ax = \lambda x$ and $A^\ast y = \mu y$, then

$$\lambda(x|y) = (\lambda x|y) = (Ax|y) = (x|A^\ast y) = (x|\mu y) = \bar{\mu}(x|y).$$

So $x$ and $y$ are perpendicular as long as $\lambda \neq \bar{\mu}$. Having selected $e_1$, we should then select $e_3$ as an eigenvector for $A^\ast$ whose eigenvalue is not conjugate to the one for $e_1$. Next, we note that $e_3^\perp$ is invariant and contains $e_1$. Thus, we can easily find $e_2 \in e_3^\perp$ as a vector perpendicular to $e_1$. This then gives the desired basis for $A$.

Now, let us implement this on the original matrix. First, note that $0$ is not an eigenvalue for either matrix, as $\ker A = \{0\} = \ker A^\ast$. This is a little unlucky, of course. Thus, we must find $\lambda$ such that $(A - \lambda 1_{\mathbb{C}^3})x = 0$ has a nontrivial solution.
This means that we should study the augmented system

$$\left[\begin{array}{ccc|c} -\lambda & 0 & 1 & 0\\ 1 & -\lambda & 0 & 0\\ 1 & 1 & -\lambda & 0 \end{array}\right] \rightsquigarrow \left[\begin{array}{ccc|c} 1 & 1 & -\lambda & 0\\ 1 & -\lambda & 0 & 0\\ -\lambda & 0 & 1 & 0 \end{array}\right] \rightsquigarrow \left[\begin{array}{ccc|c} 1 & 1 & -\lambda & 0\\ 0 & -\lambda - 1 & \lambda & 0\\ 0 & \lambda & 1 - \lambda^2 & 0 \end{array}\right]$$

$$\rightsquigarrow \left[\begin{array}{ccc|c} 1 & 1 & -\lambda & 0\\ 0 & \lambda & 1 - \lambda^2 & 0\\ 0 & \lambda + 1 & -\lambda & 0 \end{array}\right] \rightsquigarrow \left[\begin{array}{ccc|c} 1 & 1 & -\lambda & 0\\ 0 & \lambda & 1 - \lambda^2 & 0\\ 0 & 0 & -\lambda - \frac{\lambda + 1}{\lambda}\left(1 - \lambda^2\right) & 0 \end{array}\right].$$

In order to find a nontrivial solution to the last equation, the characteristic polynomial

$$\lambda\left(-\lambda - \frac{\lambda + 1}{\lambda}\left(1 - \lambda^2\right)\right) = \lambda^3 - \lambda - 1$$

must vanish. This is not a pretty equation to solve, but we do know that it has a real solution. We run into the same equation when considering $A^\ast$, and we know that we can find yet another solution that is either complex or a different real number. Thus, we can conclude that we can put this matrix into upper triangular form. Despite the simple nature of the matrix, the upper triangular form is not very pretty.

Schur's theorem evidently does not depend on our earlier theorems such as the spectral theorem. In fact, all of those results can be reproved using Schur's theorem.
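Numerically, the triangularization of the matrix from Example 4.8.4 can be sketched with scipy's complex Schur routine. This is only an illustration (the text works symbolically, and the use of `scipy.linalg.schur` is an assumption on my part, not part of the original):

```python
import numpy as np
from scipy.linalg import schur

# The matrix from Example 4.8.4.
A = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [1., 1., 0.]])

# Complex Schur form: A = Z T Z*, with Z unitary and T upper triangular.
T, Z = schur(A, output='complex')

# Z is unitary ...
assert np.allclose(Z.conj().T @ Z, np.eye(3))
# ... T is upper triangular ...
assert np.allclose(np.tril(T, -1), 0)
# ... and the factorization reproduces A.
assert np.allclose(Z @ T @ Z.conj().T, A)
```

The diagonal of `T` then carries the three roots of $\lambda^3 - \lambda - 1$, one real and two complex, exactly as argued above.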
The spectral theorem itself can, for instance, be proved by simply observing that the matrix representation for a normal operator must be normal if the basis is orthonormal. But an upper triangular matrix can only be normal if it is diagonal.

One of the nice uses of Schur's theorem is to linear differential equations. Assume that we have a system $L(x) = \dot{x} - Ax = b$, where $A \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ and $b \in \mathbb{C}^n$. Then, find a basis arranged as a matrix $U$ so that $U^\ast AU$ is upper triangular. If we let $x = Uy$, then the system can be rewritten as $U\dot{y} - AUy = b$, which is equivalent to solving

$$K(y) = \dot{y} - U^\ast AUy = U^\ast b.$$

Since $U^\ast AU$ is upper triangular, it will look like

$$\begin{bmatrix} \dot{y}_1\\ \vdots\\ \dot{y}_{n-1}\\ \dot{y}_n \end{bmatrix} - \begin{bmatrix} \beta_{11} & \cdots & \beta_{1,n-1} & \beta_{1,n}\\ \vdots & \ddots & \vdots & \vdots\\ 0 & \cdots & \beta_{n-1,n-1} & \beta_{n-1,n}\\ 0 & \cdots & 0 & \beta_{nn} \end{bmatrix}\begin{bmatrix} y_1\\ \vdots\\ y_{n-1}\\ y_n \end{bmatrix} = \begin{bmatrix} \gamma_1\\ \vdots\\ \gamma_{n-1}\\ \gamma_n \end{bmatrix}.$$

Now, start by solving the last equation $\dot{y}_n - \beta_{nn}y_n = \gamma_n$ and then successively solve backwards, using that we know how to solve linear equations of the form $\dot{z} - \alpha z = f(t)$. Finally, translate back
to $x = Uy$ to find $x$. Note that this also solves any particular initial value problem $x(t_0) = x_0$, as we know how to solve each of the scalar equations with a fixed initial value at $t_0$. Specifically, $\dot{z} - \alpha z = f(t)$, $z(t_0) = z_0$ has the unique solution

$$z(t) = z_0\exp\left(\alpha(t - t_0)\right) + \exp\left(\alpha(t - t_0)\right)\int_{t_0}^t \exp\left(-\alpha(s - t_0)\right)f(s)\,\mathrm{d}s = z_0\exp\left(\alpha(t - t_0)\right) + \exp\left(\alpha t\right)\int_{t_0}^t \exp\left(-\alpha s\right)f(s)\,\mathrm{d}s.$$

Note that the procedure only uses that $A$ is a matrix whose entries are complex numbers. The constant $b$ can in fact be allowed to have smooth functions as entries without changing a single step in the construction.

We could, of course, have used the Jordan canonical form (Theorem 2.8.3) as an upper triangular representative for $A$ as well. The advantage of Schur's theorem is that the transition matrix is unitary and therefore easy to invert.

### 4.8.1 Exercises

1. Show that for any linear map $L : V \to V$ on an $n$-dimensional vector space, where the field of scalars $\mathbb{F} \subset \mathbb{C}$, we have $\mathrm{tr}\,L = \lambda_1 + \cdots + \lambda_n$, where $\lambda_1, \ldots, \lambda_n$ are the complex roots of $\chi_L(t)$ counted with multiplicities. Hint: First go to a matrix representation of $L$, then consider this as a linear map on $\mathbb{C}^n$ and triangularize it.

2. Let $L : V \to V$, where $V$ is a real finite-dimensional inner product space, and assume that $\chi_L(t)$ splits, i.e., all roots are real. Show that there is an orthonormal basis in which the matrix representation for $L$ is upper triangular.

3.
Use Schur's theorem to prove that if $A \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ and $\epsilon > 0$, then we can find $A_\epsilon \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ such that $\|A - A_\epsilon\| \leq \epsilon$ and the $n$ eigenvalues for $A_\epsilon$ are distinct. Conclude that any complex linear operator on a finite-dimensional inner product space can be approximated by diagonalizable operators.

4. Let $L : V \to V$ be a linear operator on a finite-dimensional complex inner product space and let $p \in \mathbb{C}[t]$. Show that $\mu$ is an eigenvalue for $p(L)$ if and only if $\mu = p(\lambda)$, where $\lambda$ is an eigenvalue for $L$.

5. Show that a linear operator $L : V \to V$ on an $n$-dimensional inner product space is normal if and only if

$$\mathrm{tr}(L^\ast L) = |\lambda_1|^2 + \cdots + |\lambda_n|^2,$$

where $\lambda_1, \ldots, \lambda_n$ are the complex roots of the characteristic polynomial $\chi_L(t)$.

6. Let $L : V \to V$ be an invertible linear operator on an $n$-dimensional complex inner product space. If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues for $L$ counted with multiplicities, then

$$\left\|L^{-1}\right\| \leq C_n\,\frac{\|L\|^{n-1}}{|\lambda_1|\cdots|\lambda_n|}$$

for some constant $C_n$ that depends only on $n$.
Hint: If $Ax = b$ and $A$ is upper triangular, show that there are constants

$$1 = C_{n,n} \leq C_{n,n-1} \leq \cdots \leq C_{n,1}$$

such that

$$|\xi_k| \leq C_{n,k}\,\frac{\|b\|\,\|A\|^{n-k}}{|\alpha_{nn}\cdots\alpha_{kk}|},\qquad A = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n}\\ 0 & \alpha_{22} & \cdots & \alpha_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \alpha_{nn} \end{bmatrix},\qquad x = \begin{bmatrix} \xi_1\\ \vdots\\ \xi_n \end{bmatrix}.$$

Then, bound $\left\|L^{-1}e_i\right\|$ using that $L\left(L^{-1}e_i\right) = e_i$.

7. Let $A \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ and $\lambda \in \mathbb{C}$ be given and assume that there is a unit vector $x$ such that

$$\|Ax - \lambda x\| < \frac{\epsilon^n}{C_n\|A - \lambda 1_V\|^{n-1}}.$$

Show that there is an eigenvalue $\lambda'$ for $A$ such that

$$|\lambda - \lambda'| < \epsilon.$$

Hint: Use the above exercise to conclude that if

$$(A - \lambda 1_V)(x) = b,\qquad \|b\| < \frac{\epsilon^n}{C_n\|A - \lambda 1_V\|^{n-1}},$$

and all eigenvalues for $A - \lambda 1_V$ have absolute value $\geq \epsilon$, then $\|x\| < 1$.

8.
Let $A \in \mathrm{Mat}_{n\times n}(\mathbb{C})$ be given and assume that $\|A - B\| < \delta$ for some small $\delta$.

(a) Show that all eigenvalues for $A$ and $B$ lie in the compact set $K = \{z : |z| \leq \|A\| + 1\}$.

(b) Show that if $\lambda \in K$ is no closer than $\epsilon$ to any eigenvalue for $A$, then

$$\left\|(\lambda 1_V - A)^{-1}\right\| < C_n\,\frac{(2\|A\| + 2)^{n-1}}{\epsilon^n}.$$

(c) Using

$$\delta = \frac{\epsilon^n}{C_n(2\|A\| + 2)^{n-1}},$$

show that any eigenvalue for $B$ is within $\epsilon$ of some eigenvalue for $A$.

(d) Show that

$$\left\|(\lambda 1_V - B)^{-1}\right\| \leq C_n\,\frac{(2\|A\| + 2)^{n-1}}{\epsilon^n}$$

and that any eigenvalue for $A$ is within $\epsilon$ of an eigenvalue for $B$.

9. Show directly that the solution to $\dot{z} - \alpha z = f(t)$, $z(t_0) = z_0$ is unique. Conclude that initial value problems for systems of differential equations with constant coefficients have unique solutions.

10.
Find the general solution to the system $\dot{x} - Ax = b$, where

(a) $A = \begin{bmatrix} 0 & 1\\ 1 & 2 \end{bmatrix}$.

(b) $A = \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix}$.

(c) $A = \begin{bmatrix} -\frac{1}{2} & \frac{1}{2}\\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}$.

## 4.9 The Singular Value Decomposition*

Using the results we have developed so far, it is possible to obtain some very nice decompositions for general linear maps as well. First, we treat the so-called singular value decomposition. Note that general linear maps $L : V \to W$ do not have eigenvalues. The singular values of $L$ that we define below are a good substitute for eigenvalues when we have a map between inner product spaces.

Theorem 4.9.1 (The Singular Value Decomposition). Let $L : V \to W$ be a linear map between finite-dimensional inner product spaces. There is an orthonormal basis $e_1, \ldots, e_m$ for $V$ such that $(L(e_i)|L(e_j)) = 0$ if $i \neq j$. Moreover, we can find orthonormal bases $e_1, \ldots, e_m$ for $V$ and $f_1, \ldots, f_n$ for $W$ so that

$$L(e_1) = \sigma_1f_1,\ \ldots,\ L(e_k) = \sigma_kf_k,\qquad L(e_{k+1}) = \cdots = L(e_m) = 0$$

for some $k \leq m$.
In particular,

$$L = \begin{bmatrix} f_1 & \cdots & f_n \end{bmatrix}\left[L\right]\begin{bmatrix} e_1 & \cdots & e_m \end{bmatrix}^\ast = \begin{bmatrix} f_1 & \cdots & f_n \end{bmatrix}\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_k & \\ & & & 0\\ & & & & \ddots \end{bmatrix}\begin{bmatrix} e_1 & \cdots & e_m \end{bmatrix}^\ast.$$

Proof. Use the spectral theorem (Theorem 4.3.4) on $L^\ast L : V \to V$ to find an orthonormal basis $e_1, \ldots, e_m$ for $V$ such that $L^\ast L(e_i) = \lambda_ie_i$. Then,

$$(L(e_i)|L(e_j)) = (L^\ast L(e_i)|e_j) = (\lambda_ie_i|e_j) = \lambda_i\delta_{ij}.$$

Next, reorder if necessary so that $\lambda_1, \ldots, \lambda_k \neq 0$ and define

$$f_i = \frac{L(e_i)}{\|L(e_i)\|},\qquad i = 1, \ldots, k.$$

Finally, select $f_{k+1}, \ldots, f_n$ so that we get an orthonormal basis for $W$. In this way, we see that $\sigma_i = \|L(e_i)\|$. Finally, note that

$$L(e_{k+1}) = \cdots = L(e_m) = 0$$

since $\|L(e_i)\|^2 = \lambda_i$ for all $i$. □

The values $\sigma = \sqrt{\lambda}$, where $\lambda$ is an eigenvalue for $L^\ast L$, are called the singular values of $L$.
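The relationship between singular values and the eigenvalues of $L^\ast L$ is easy to check numerically. The following is a small illustration (not from the text; the matrix is one of those appearing in the exercises below, and the use of numpy is my own assumption):

```python
import numpy as np

# Singular values of A are the square roots of the eigenvalues of A* A.
A = np.array([[0., 1.],
              [0., 1.],
              [1., 0.]])

U, s, Vh = np.linalg.svd(A)        # A = U diag(s) Vh, s in decreasing order
lam = np.linalg.eigvalsh(A.T @ A)  # eigenvalues of A* A, in increasing order

assert np.allclose(sorted(s**2), lam)

# The columns of Vh.T play the role of the e_i: their images are orthogonal.
e = Vh.T
assert abs((A @ e[:, 0]) @ (A @ e[:, 1])) < 1e-12
```

Here `s` comes out as $(\sqrt{2}, 1)$, matching $A^\ast A = \mathrm{diag}(1, 2)$.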
We often write the decomposition of $L$ as follows:

$$L = U\Sigma\tilde{U}^\ast,\qquad U = \begin{bmatrix} f_1 & \cdots & f_n \end{bmatrix},\qquad \tilde{U} = \begin{bmatrix} e_1 & \cdots & e_m \end{bmatrix},\qquad \Sigma = \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_k & \\ & & & 0\\ & & & & \ddots \end{bmatrix},$$

and we generally order the singular values $\sigma_1 \geq \cdots \geq \sigma_k$.

The singular value decomposition gives us a nice way of studying systems $Lx = b$ when $L$ is not necessarily invertible. In this case, $L$ has a partial or generalized inverse called the Moore-Penrose inverse. The construction is quite simple. Take a linear map $L : V \to W$, then use Theorems 3.5.4 and 1.11.7 to conclude that

$$L|_{(\ker L)^\perp} : (\ker L)^\perp \to \mathrm{im}\,L$$

is an isomorphism.
Thus, we can define the generalized inverse $L^\dagger : W \to V$ in such a way that

$$\ker\left(L^\dagger\right) = (\mathrm{im}\,L)^\perp,\qquad \mathrm{im}\left(L^\dagger\right) = (\ker L)^\perp,\qquad L^\dagger|_{\mathrm{im}\,L} = \left(L|_{(\ker L)^\perp} : (\ker L)^\perp \to \mathrm{im}\,L\right)^{-1}.$$

If we have picked orthonormal bases that yield the singular value decomposition, then

$$L^\dagger(f_1) = \sigma_1^{-1}e_1,\ \ldots,\ L^\dagger(f_k) = \sigma_k^{-1}e_k,\qquad L^\dagger(f_{k+1}) = \cdots = L^\dagger(f_n) = 0.$$

Or in matrix form, using $L = U\Sigma\tilde{U}^\ast$, we have

$$L^\dagger = \tilde{U}\Sigma^\dagger U^\ast,\qquad \Sigma^\dagger = \begin{bmatrix} \sigma_1^{-1} & & & \\ & \ddots & & \\ & & \sigma_k^{-1} & \\ & & & 0\\ & & & & \ddots \end{bmatrix}.$$

This generalized inverse can now be used to try to solve $Lx = b$ for given $b \in W$. Before explaining how that works, we list some of the important properties of the generalized inverse.

Proposition 4.9.2. Let $L : V \to W$ be a linear map between finite-dimensional inner product spaces and $L^\dagger$ the Moore-Penrose inverse. Then:

(1) $(\lambda L)^\dagger = \lambda^{-1}L^\dagger$ if $\lambda \neq 0$.

(2) $\left(L^\dagger\right)^\dagger = L$.

(3) $\left(L^\ast\right)^\dagger = \left(L^\dagger\right)^\ast$.

(4) $LL^\dagger$ is an orthogonal projection with $\mathrm{im}\left(LL^\dagger\right) = \mathrm{im}\,L$ and $\ker\left(LL^\dagger\right) = \ker L^\ast = \ker L^\dagger$.
(5) $L^\dagger L$ is an orthogonal projection with $\mathrm{im}\left(L^\dagger L\right) = \mathrm{im}\,L^\ast = \mathrm{im}\,L^\dagger$ and $\ker\left(L^\dagger L\right) = \ker L$.

(6) $L^\dagger LL^\dagger = L^\dagger$.

(7) $LL^\dagger L = L$.

Proof. All of these properties can be proven using the abstract definition. Instead, we shall see how the matrix representation coming from the singular value decomposition can also be used to prove the results. Conditions (1)-(3) are straightforward to prove using that the singular value decomposition of $L$ yields singular value decompositions of both $L^\dagger$ and $L^\ast$. To prove (4) and (5), we use the matrix representation to see that

$$L^\dagger L = \tilde{U}\Sigma^\dagger U^\ast U\Sigma\tilde{U}^\ast = \tilde{U}\begin{bmatrix} 1 & & & \\ & \ddots & & \\ & & 1 & \\ & & & 0\\ & & & & \ddots \end{bmatrix}\tilde{U}^\ast$$

and similarly

$$LL^\dagger = U\begin{bmatrix} 1 & & & \\ & \ddots & & \\ & & 1 & \\ & & & 0\\ & & & & \ddots \end{bmatrix}U^\ast.$$

This proves that these maps are orthogonal projections, as the bases are orthonormal. It also yields the desired properties for kernels and images. Finally, (6) and (7) now follow via a similar calculation using the matrix representations. □

To solve $Lx = b$ for given $b \in W$, we can now use

Corollary 4.9.3. $Lx = b$ has a solution if and only if $b = LL^\dagger b$. Moreover, if the equation has a solution, then all solutions are given by

$$x = L^\dagger b + \left(1_V - L^\dagger L\right)z,$$

where $z \in V$.
The smallest solution is given by

$$x_0 = L^\dagger b.$$

In case $b \neq LL^\dagger b$, the best approximate solutions are given by

$$x = L^\dagger b + \left(1_V - L^\dagger L\right)z,\qquad z \in V,$$

again with $x_0 = L^\dagger b$ being the smallest.

Proof. Since $LL^\dagger$ is the orthogonal projection onto $\mathrm{im}\,L$, we see that $b \in \mathrm{im}\,L$ if and only if $b = LL^\dagger b$. This means that $L\left(L^\dagger b\right) = b$, so that $x_0 = L^\dagger b$ is a solution to the system. Next, we note that $1_V - L^\dagger L$ is the orthogonal projection onto $(\mathrm{im}\,L^\ast)^\perp = \ker L$. Thus, all solutions are of the desired form. Finally, as $L^\dagger b \in \mathrm{im}\,L^\ast$, the Pythagorean theorem implies that

$$\left\|L^\dagger b + \left(1_V - L^\dagger L\right)z\right\|^2 = \left\|L^\dagger b\right\|^2 + \left\|\left(1_V - L^\dagger L\right)z\right\|^2,$$

showing that

$$\left\|L^\dagger b\right\|^2 \leq \left\|L^\dagger b + \left(1_V - L^\dagger L\right)z\right\|^2$$

for all $z$. The last statement is a consequence of the fact that $LL^\dagger b$ is the element in $\mathrm{im}\,L$ that is closest to $b$, since $LL^\dagger$ is an orthogonal projection. □

### 4.9.1 Exercises

1. Show that the singular value decomposition of a self-adjoint operator $L$ with nonnegative eigenvalues looks like $U\Sigma U^\ast$, where the diagonal entries of $\Sigma$ are the eigenvalues of $L$.

2. Find the singular value decompositions of

$$\begin{bmatrix} 0 & 1\\ 0 & 1\\ 1 & 0 \end{bmatrix}\quad\text{and}\quad\begin{bmatrix} 0 & 0 & 1\\ 1 & 1 & 0 \end{bmatrix}.$$

3.
Find the generalized inverses of

$$\begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}\quad\text{and}\quad\begin{bmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 1 \end{bmatrix}.$$

4. Let $L : V \to W$ be a linear operator between finite-dimensional inner product spaces and $\sigma_1 \geq \cdots \geq \sigma_k$ the singular values of $L$. Show that the results of the section can be rephrased as follows: There exist orthonormal bases $e_1, \ldots, e_m$ for $V$ and $f_1, \ldots, f_n$ for $W$ such that

$$L(x) = \sigma_1(x|e_1)f_1 + \cdots + \sigma_k(x|e_k)f_k,$$
$$L^\ast(y) = \sigma_1(y|f_1)e_1 + \cdots + \sigma_k(y|f_k)e_k,$$
$$L^\dagger(y) = \sigma_1^{-1}(y|f_1)e_1 + \cdots + \sigma_k^{-1}(y|f_k)e_k.$$

5. Let $L : V \to V$ be a linear operator on a finite-dimensional inner product space. Show that $L$ is an isometry if and only if $\ker L = \{0\}$ and all singular values are $1$.

6. Let $L : V \to W$ be a linear operator between finite-dimensional inner product spaces. Show that

$$\|L\| = \sigma_1,$$

where $\sigma_1$ is the largest singular value of $L$ (see Theorem 3.3.8 for the definition of $\|L\|$).

7. Let $L : V \to W$ be a linear operator between finite-dimensional inner product spaces. Show that if there are orthonormal bases $e_1, \ldots, e_m$ for $V$ and $f_1, \ldots, f_n$ for $W$ such that $L(e_i) = \tau_if_i$ for $i \leq k$ and $L(e_i) = 0$ for $i > k$, then the $\tau_i$s are the singular values of $L$.

8. Let $L : V \to W$ be a nontrivial linear operator between finite-dimensional inner product spaces.
(a) If $e_1, \ldots, e_m$ is an orthonormal basis for $V$, show that

$$\mathrm{tr}(L^\ast L) = \|L(e_1)\|^2 + \cdots + \|L(e_m)\|^2.$$

(b) If $\sigma_1 \geq \cdots \geq \sigma_k$ are the singular values for $L$, show that

$$\mathrm{tr}(L^\ast L) = \sigma_1^2 + \cdots + \sigma_k^2.$$

## 4.10 The Polar Decomposition*

In this section, we are going to study general linear operators $L : V \to V$ on a finite-dimensional inner product space. These can be decomposed in a manner similar to the polar coordinate decomposition of complex numbers: $z = e^{i\theta}|z|$.

Theorem 4.10.1 (The Polar Decomposition). Let $L : V \to V$ be a linear operator on a finite-dimensional inner product space; then $L = WS$, where $W$ is unitary (or orthogonal) and $S$ is self-adjoint with nonnegative eigenvalues. Moreover, if $L$ is invertible, then $W$ and $S$ are uniquely determined by $L$.

Proof. The proof is similar to the construction of the singular value decomposition (Theorem 4.9.1). In fact, we can use the singular value decomposition to prove the polar decomposition:

$$L = U\Sigma\tilde{U}^\ast = U\tilde{U}^\ast\tilde{U}\Sigma\tilde{U}^\ast = \left(U\tilde{U}^\ast\right)\left(\tilde{U}\Sigma\tilde{U}^\ast\right).$$

Thus, we define

$$W = U\tilde{U}^\ast,\qquad S = \tilde{U}\Sigma\tilde{U}^\ast.$$

Clearly, $W$ is unitary, as it is a composition of two isometries. And $S$ is certainly self-adjoint with nonnegative eigenvalues, as we have diagonalized it with an orthonormal basis and $\Sigma$ has nonnegative diagonal entries.
Finally, assume that $L$ is invertible and

$$L = WS = \tilde{W}T,$$

where $W, \tilde{W}$ are unitary and $S, T$ are self-adjoint with positive eigenvalues. Then, $S$ and $T$ must also be invertible and

$$ST^{-1} = W^{-1}\tilde{W} = W^\ast\tilde{W}.$$

This implies that $ST^{-1}$ is unitary. Thus,

$$\left(ST^{-1}\right)^{-1} = \left(ST^{-1}\right)^\ast = \left(T^{-1}\right)^\ast S^\ast = T^{-1}S,$$

and therefore,

$$1_V = \left(T^{-1}S\right)\left(ST^{-1}\right) = T^{-1}S^2T^{-1}.$$

This means that $S^2 = T^2$. Since both operators are self-adjoint and have nonnegative eigenvalues, this implies that $S = T$ (see Exercise 8 in Sect. 4.3) and hence $\tilde{W} = W$, as desired. □

There is also a decomposition $L = SW$, where $S = U\Sigma U^\ast$ and $W = U\tilde{U}^\ast$. From this, it is clear that $S$ and $W$ need not be the same in the two decompositions unless $U = \tilde{U}$ in the singular value decomposition. This is equivalent to $L$ being normal (see also exercises).

Recall from Sect. 1.13 that we have the general linear group $Gl_n(\mathbb{F}) \subset \mathrm{Mat}_{n\times n}(\mathbb{F})$ of invertible $n \times n$ matrices. Further, define $PS_n(\mathbb{F}) \subset \mathrm{Mat}_{n\times n}(\mathbb{F})$ as the set of self-adjoint positive matrices, i.e., those whose eigenvalues are positive.
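As a quick numerical counterpart to the theorem (an illustration, not from the text; the use of scipy's `polar` routine is my own assumption), the factorization $L = WS$ can be computed and its defining properties checked directly:

```python
import numpy as np
from scipy.linalg import polar

# Polar decomposition L = W S: W unitary/orthogonal, S self-adjoint with
# nonnegative eigenvalues ("right" polar form).
L = np.array([[1., 2.],
              [0., 1.]])

W, S = polar(L, side='right')

assert np.allclose(W @ S, L)
assert np.allclose(W.T @ W, np.eye(2))          # W is orthogonal
assert np.allclose(S, S.T)                      # S is self-adjoint
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)  # with nonnegative eigenvalues
```

Calling `polar(L, side='left')` instead yields the other decomposition $L = SW$ mentioned above; for this non-normal $L$, the two factors differ.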
The polar decomposition says that we have bijective (nonlinear) maps (i.e., one-to-one and onto maps)
$$Gl_{n}\left(\mathbb{C}\right) \approx U_{n} \times PS_{n}\left(\mathbb{C}\right),\qquad Gl_{n}\left(\mathbb{R}\right) \approx O_{n} \times PS_{n}\left(\mathbb{R}\right),$$
given by $A = WS \leftrightarrow (W,S)$. These maps are in fact homeomorphisms, i.e., both $(W,S) \mapsto WS$ and $A = WS \mapsto (W,S)$ are continuous. The first map only involves matrix multiplication, so it is obviously continuous. That $A = WS \mapsto (W,S)$ is continuous takes a little more work. Assume that $A_{k} = W_{k}S_{k}$ and that $A_{k} \rightarrow A = WS \in Gl_{n}$. Then, we need to show that $W_{k} \rightarrow W$ and $S_{k} \rightarrow S$. The space of unitary or orthogonal operators is compact, so any subsequence of $\left(W_{k}\right)$ has a convergent subsequence. Now, assume that $W_{k_{l}} \rightarrow \bar{W}$; then also $S_{k_{l}} = W_{k_{l}}^{*}A_{k_{l}} \rightarrow \bar{W}^{*}A$. Thus, $A = \bar{W}\left(\bar{W}^{*}A\right)$, which implies by the uniqueness of the polar decomposition that $\bar{W} = W$ and $S_{k_{l}} \rightarrow S$. This means that convergent subsequences of $\left(W_{k}\right)$ always converge to $W$; this in turn implies that $W_{k} \rightarrow W$. We then conclude that also $S_{k} \rightarrow S$.
Next, we note that $PS_{n}$ is a convex cone. This means that if $A,B \in PS_{n}$, then also $sA + tB \in PS_{n}$ for all $s,t > 0$. It is obvious that $sA + tB$ is self-adjoint.
To see that all eigenvalues are positive, we use that $\left(Ax\vert x\right),\left(Bx\vert x\right) > 0$ for all $x\neq 0$ to see that
$$\left(\left(sA + tB\right)\left(x\right)\vert x\right) = s\left(Ax\vert x\right) + t\left(Bx\vert x\right) > 0.$$
The importance of this last observation is that we can deform any matrix $A = WS$ via
$$A_{t} = W\left(tI + \left(1 - t\right)S\right) \in Gl_{n}$$
into a unitary or orthogonal matrix. This means that many topological properties of $Gl_{n}$ can be investigated by studying the compact groups $U_{n}$ and $O_{n}$. The simplest example of this is that $Gl_{n}\left(\mathbb{C}\right)$ is path connected, i.e., for any two matrices $A,B \in Gl_{n}\left(\mathbb{C}\right)$, there is a continuous path $C : \left[0,\alpha\right] \rightarrow Gl_{n}\left(\mathbb{C}\right)$ such that $C\left(0\right) = A$ and $C\left(\alpha\right) = B$. By way of contrast, $Gl_{n}\left(\mathbb{R}\right)$ has two path connected components. We can see these two facts directly when $n = 1$, as $Gl_{1}\left(\mathbb{C}\right) = \left\{\alpha\in\mathbb{C} : \alpha\neq 0\right\}$ is connected, while $Gl_{1}\left(\mathbb{R}\right) = \left\{\alpha\in\mathbb{R} : \alpha\neq 0\right\}$ consists of the two components corresponding to the positive and negative numbers. For general $n$, we can prove the claim by using the canonical form for unitary and orthogonal matrices.
In the unitary situation, we have that any $U \in U_{n}$ looks like
$$U = BDB^{*} = B\left[\begin{array}{ccc} \exp\left(i\theta_{1}\right) & & 0\\ & \ddots & \\ 0 & & \exp\left(i\theta_{n}\right)\end{array}\right]B^{*},$$
where $B \in U_{n}$. Then, define
$$D\left(t\right) = \left[\begin{array}{ccc} \exp\left(it\theta_{1}\right) & & 0\\ & \ddots & \\ 0 & & \exp\left(it\theta_{n}\right)\end{array}\right].$$
Hence, $D\left(t\right) \in U_{n}$, and $U\left(t\right) = BD\left(t\right)B^{*} \in U_{n}$ defines a path that at $t = 0$ is $I_{n}$ and at $t = 1$ is $U$. Thus, any unitary transformation can be joined to the identity matrix inside $U_{n}$. In the orthogonal case, we see using the real canonical form that a similar deformation using
$$\left[\begin{array}{cc} \cos\left(t\theta_{i}\right) & -\sin\left(t\theta_{i}\right)\\ \sin\left(t\theta_{i}\right) & \cos\left(t\theta_{i}\right)\end{array}\right]$$
will deform any orthogonal transformation to one of the following two matrices:
$$\left[\begin{array}{cccc} 1 & 0 & & 0\\ 0 & 1 & & 0\\ & & \ddots & \\ 0 & 0 & & 1\end{array}\right]\text{ or }O\left[\begin{array}{cccc} -1 & 0 & & 0\\ 0 & 1 & & 0\\ & & \ddots & \\ 0 & 0 & & 1\end{array}\right]O^{t}.$$
Here, the second matrix is the same as the reflection $R_{x}$, where $x$ is the first column vector in $O$ (the $-1$ eigenvector). We then have to show that $1_{\mathbb{R}^{n}}$ and $R_{x}$ cannot be joined to each other inside $O_{n}$. This is done by contradiction.
Thus, assume that $A\left(t\right)$ is a continuous path with
$$A\left(0\right) = \left[\begin{array}{cccc} 1 & 0 & & 0\\ 0 & 1 & & 0\\ & & \ddots & \\ 0 & 0 & & 1\end{array}\right],\qquad A\left(1\right) = O\left[\begin{array}{cccc} -1 & 0 & & 0\\ 0 & 1 & & 0\\ & & \ddots & \\ 0 & 0 & & 1\end{array}\right]O^{t},\qquad A\left(t\right) \in O_{n}\text{ for all }t \in \left[0,1\right].$$
The characteristic polynomial
$$\chi_{A\left(t\right)}\left(\lambda\right) = \lambda^{n} + \cdots + \alpha_{0}\left(t\right)$$
has coefficients that vary continuously with $t$ (the proof of this uses determinants). However, $\alpha_{0}\left(0\right) = \left(-1\right)^{n}$, while $\alpha_{0}\left(1\right) = \left(-1\right)^{n-1}$. Thus, the intermediate value theorem tells us that $\alpha_{0}\left(t_{0}\right) = 0$ for some $t_{0} \in \left(0,1\right)$. But this implies that $\lambda = 0$ is a root of $\chi_{A\left(t_{0}\right)}$, thus contradicting that $A\left(t_{0}\right) \in O_{n} \subset Gl_{n}$.
### 4.10.1 Exercises
1. Find the polar decompositions for
$$\left[\begin{array}{cc} \alpha & -\beta\\ \beta & \alpha\end{array}\right],\quad\left[\begin{array}{cc} \alpha & \beta\\ -\beta & \alpha\end{array}\right],\text{ and }\left[\begin{array}{cc} \alpha & 1\\ 0 & \alpha\end{array}\right].$$
2. Find the polar decompositions for
$$\left[\begin{array}{ccc} 0 & \beta & 0\\ \alpha & 0 & 0\\ 0 & 0 & \gamma\end{array}\right]\text{ and }\left[\begin{array}{ccc} 1 & -1 & 0\\ 0 & 0 & 2\\ 1 & 1 & 0\end{array}\right].$$
3. If $L : V \rightarrow V$ is a linear operator on a finite-dimensional inner product space, define the Cayley transform of $L$ as $\left(L + 1_{V}\right)\left(L - 1_{V}\right)^{-1}$.
(a) If $L$ is skew-adjoint, show that $\left(L + 1_{V}\right)\left(L - 1_{V}\right)^{-1}$ is an isometry that does not have $-1$ as an eigenvalue.
(b) Show that $U \mapsto \left(U - 1_{V}\right)\left(U + 1_{V}\right)^{-1}$ takes isometries that do not have $-1$ as an eigenvalue to skew-adjoint operators and is an inverse to the Cayley transform.
4. Let $L : V \rightarrow V$ be a linear operator on a finite-dimensional inner product space. Show that $L = SW$, where $W$ is unitary (or orthogonal) and $S$ is self-adjoint with nonnegative eigenvalues. Moreover, if $L$ is invertible, then $W$ and $S$ are unique. Show by example that the operators in this polar decomposition do not have to be the same as in the $L = WS$ decomposition.
5. Let $L = WS$ be the unique polar decomposition of an invertible operator $L : V \rightarrow V$ on a finite-dimensional inner product space $V$. Show that $L$ is normal if and only if $WS = SW$.
6. The purpose of this exercise is to check some properties of the exponential map $\exp : \mathrm{Mat}_{n\times n}\left(\mathbb{F}\right) \rightarrow Gl_{n}\left(\mathbb{F}\right)$. You may want to consult Sect. 3.7 for the definition and various elementary properties.
(a) Show that $\exp$ maps normal operators to normal operators.
(b) Show that $\exp$ maps self-adjoint operators to positive self-adjoint operators and that it is a homeomorphism, i.e., it is one-to-one, onto, continuous, and the inverse is also continuous.
(c) Show that $\exp$ maps skew-adjoint operators to isometries but is not one-to-one. In the complex case, show that it is onto.
7. Let $L : V \rightarrow V$ be normal and $L = S + A$, where $S$ is self-adjoint and $A$ skew-adjoint. Recall that since $L$ is normal, $S$ and $A$ commute.
(a) Show that $\exp\left(S\right)\exp\left(A\right) = \exp\left(A\right)\exp\left(S\right)$ is the polar decomposition of $\exp\left(L\right)$.
(b) Show that any invertible normal transformation can be written as $\exp\left(L\right)$ for some normal $L$.
## 4.11 Quadratic Forms*
Conic sections are those figures we obtain by intersecting a cone with a plane. Analytically, this is the problem of determining all of the intersections of a cone given by $z^{2} = x^{2} + y^{2}$ with a plane $z = ax + by + c$.
We can picture what these intersections look like by shining a flashlight on a wall. The light emanating from the flashlight describes a cone, which is then intersected by the wall. The figures we get are circles, ellipses, parabolae, and hyperbolae, depending on how we hold the flashlight. These questions naturally lead to the more general question of determining the figures described by the equation
$$ax^{2} + bxy + cy^{2} + dx + ey + f = 0.$$
We shall see below that it is possible to make a linear change of coordinates, depending only on the quadratic quantities, such that the equation is transformed into an equation of the simpler form
$$a^{\prime}\left(x^{\prime}\right)^{2} + c^{\prime}\left(y^{\prime}\right)^{2} + d^{\prime}x^{\prime} + e^{\prime}y^{\prime} + f^{\prime} = 0.$$
It is now easy to see that the solutions to such an equation consist of a circle, ellipse, parabola, hyperbola, or the degenerate cases of two lines, a point, or nothing. Moreover, $a$, $b$, $c$ together determine the type of the figure as long as it is not degenerate. Aside from the aesthetic virtues of this problem, it also comes up naturally when solving the two-body problem from physics, a rather remarkable coincidence between beauty and the real world. Another application is to the problem of deciding when a function in two variables has a maximum, minimum, or neither at a critical point. The goal here is to study this problem in the more general case of $n$ variables and show how the spectral theorem can be brought in to help our investigations. We shall also explain the use in multivariable calculus.
A quadratic form $Q$ in $n$ real variables $x = \left(x_{1},\ldots,x_{n}\right)$ is a function of the form
$$Q\left(x\right) = \sum_{1\leq i\leq j\leq n}a_{ij}x_{i}x_{j}.$$
The term $x_{i}x_{j}$ only appears once in this sum. We can artificially have it appear twice so that the sum is more symmetric:
$$Q\left(x\right) = \sum_{i,j=1}^{n}a_{ij}^{\prime}x_{i}x_{j},$$
where $a_{ii}^{\prime} = a_{ii}$ and $a_{ij}^{\prime} = a_{ji}^{\prime} = a_{ij}/2$. If we define $A$ as the matrix whose entries are $a_{ij}^{\prime}$ and use the inner product on $\mathbb{R}^{n}$, then the quadratic form can be written in the more abstract and condensed form
$$Q\left(x\right) = \left(Ax\vert x\right).$$
The important observation is that $A$ is a symmetric real matrix and hence self-adjoint. This means that we can find a new orthonormal basis for $\mathbb{R}^{n}$ that diagonalizes $A$.
If this basis is given by the matrix $B$, then
$$A = BDB^{-1} = B\left[\begin{array}{ccc} \sigma_{1} & & 0\\ & \ddots & \\ 0 & & \sigma_{n}\end{array}\right]B^{-1} = B\left[\begin{array}{ccc} \sigma_{1} & & 0\\ & \ddots & \\ 0 & & \sigma_{n}\end{array}\right]B^{t}.$$
If we define new coordinates by
$$\left[\begin{array}{c} y_{1}\\ \vdots\\ y_{n}\end{array}\right] = B^{-1}\left[\begin{array}{c} x_{1}\\ \vdots\\ x_{n}\end{array}\right],\text{ or }x = By,$$
then
$$Q\left(x\right) = \left(Ax\vert x\right) = \left(ABy\vert By\right) = \left(B^{t}ABy\vert y\right) = Q^{\prime}\left(y\right).$$
Since $B$ is an orthogonal matrix, we have that $B^{-1} = B^{t}$ and hence $B^{t}AB = B^{-1}AB = D$. Thus,
$$Q^{\prime}\left(y\right) = \sigma_{1}y_{1}^{2} + \cdots + \sigma_{n}y_{n}^{2}$$
in the new coordinates. The general classification of the types of quadratic forms is given by:
(1) If all of $\sigma_{1},\ldots,\sigma_{n}$ are positive or all are negative, then the form is said to be elliptic.
(2) If all of $\sigma_{1},\ldots,\sigma_{n}$ are nonzero and there are both negative and positive values, then it is said to be hyperbolic.
(3) If at least one of $\sigma_{1},\ldots,\sigma_{n}$ is zero, then it is called parabolic.
In the case of two variables, this makes perfect sense, as $x^{2} + y^{2} = r^{2}$ is a circle (a special ellipse), $x^{2} - y^{2} = f$ two branches of a hyperbola, and $x^{2} = f$ a parabola. The first two cases occur when $\sigma_{1}\cdots\sigma_{n}\neq 0$. In this case, the quadratic form is said to be nondegenerate. In the parabolic case, $\sigma_{1}\cdots\sigma_{n} = 0$, and we say that the quadratic form is degenerate.
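The classification above can be carried out mechanically from the signs of the eigenvalues $\sigma_{1},\ldots,\sigma_{n}$ of the symmetric matrix $A$. A minimal sketch of ours (the helper name `classify` and the tolerance are our own choices, not from the text):

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify the quadratic form Q(x) = (Ax|x) for a symmetric matrix A."""
    sigma = np.linalg.eigvalsh(A)       # real eigenvalues of a symmetric A
    if np.any(np.abs(sigma) <= tol):
        return "parabolic"              # some sigma_i = 0 (degenerate)
    if np.all(sigma > 0) or np.all(sigma < 0):
        return "elliptic"               # all eigenvalues of one sign
    return "hyperbolic"                 # nonzero eigenvalues of mixed sign

# The three model cases in two variables:
print(classify(np.array([[1.0, 0.0], [0.0, 1.0]])))    # x^2 + y^2 -> elliptic
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))   # x^2 - y^2 -> hyperbolic
print(classify(np.array([[1.0, 0.0], [0.0, 0.0]])))    # x^2       -> parabolic
```

Applied to the matrix of Example 4.11.2 below, this returns "hyperbolic", matching the conclusion drawn there from Descartes' rule.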
Having obtained this simple classification, it would be nice to find a way of characterizing these types directly from the characteristic polynomial of $A$, without having to find the roots. This is actually not too hard to accomplish.
Lemma 4.11.1 (Descartes' Rule of Signs). Let
$$p\left(t\right) = t^{n} + a_{n-1}t^{n-1} + \cdots + a_{1}t + a_{0} = \left(t - \lambda_{1}\right)\cdots\left(t - \lambda_{n}\right),$$
where $a_{0},\ldots,a_{n-1},\lambda_{1},\ldots,\lambda_{n} \in \mathbb{R}$.
(1) $0$ is a root of $p\left(t\right)$ if and only if $a_{0} = 0$.
(2) All roots of $p\left(t\right)$ are negative if and only if $a_{n-1},\ldots,a_{0} > 0$.
(3) If $n$ is odd, then all roots of $p\left(t\right)$ are positive if and only if $a_{n-1} < 0$, $a_{n-2} > 0,\ldots,a_{1} > 0$, $a_{0} < 0$.
(4) If $n$ is even, then all roots of $p\left(t\right)$ are positive if and only if $a_{n-1} < 0$, $a_{n-2} > 0,\ldots,a_{1} < 0$, $a_{0} > 0$.
Proof. Descartes' rule is actually more general, as it relates the number of positive roots to the number of times the coefficients change sign. The simpler version, however, suffices for our purposes. Part (1) is obvious, as $p\left(0\right) = a_{0}$. The relationship
$$t^{n} + a_{n-1}t^{n-1} + \cdots + a_{1}t + a_{0} = \left(t - \lambda_{1}\right)\cdots\left(t - \lambda_{n}\right)$$
clearly shows that $a_{n-1},\ldots,a_{0} > 0$ if $\lambda_{1},\ldots,\lambda_{n} < 0$. Conversely, if $a_{n-1},\ldots,a_{0} > 0$, then it is obvious that $p\left(t\right) > 0$ for all $t \geq 0$. For the other two properties, consider $q\left(t\right) = p\left(-t\right)$ and use (2). □
This lemma gives us a very quick way of deciding whether a given quadratic form is parabolic or elliptic. If it is not one of these two types, then we know it has to be hyperbolic.
Example 4.11.2. The matrix
$$\left[\begin{array}{ccc} 2 & 3 & 0\\ 3 & -2 & 4\\ 0 & 4 & -2\end{array}\right]$$
has characteristic polynomial $t^{3} + 2t^{2} - 29t + 6$.
The coefficients do not conform to the patterns that guarantee that the roots are all positive or all negative, so we conclude that the corresponding quadratic form is hyperbolic.
Example 4.11.3. Let $Q$ be a quadratic form corresponding to the matrix
$$A = \left[\begin{array}{cccc} 6 & 1 & 2 & 3\\ 1 & 5 & 0 & 4\\ 2 & 0 & 2 & 0\\ 3 & 4 & 0 & 7\end{array}\right].$$
The characteristic polynomial is given by $t^{4} - 20t^{3} + 113t^{2} - 200t + 96$. In this case, the coefficients tell us that the roots must be positive.
### 4.11.1 Exercises
1. A bilinear form on a vector space $V$ is a function $B : V \times V \rightarrow \mathbb{F}$ such that $x \mapsto B\left(x,y\right)$ and $y \mapsto B\left(x,y\right)$ are both linear. Show that a quadratic form $Q$ always looks like $Q\left(x\right) = B\left(x,x\right)$, where $B$ is a bilinear form.
2. A bilinear form is said to be symmetric, respectively skew-symmetric, if $B\left(x,y\right) = B\left(y,x\right)$, respectively $B\left(x,y\right) = -B\left(y,x\right)$, for all $x,y$.
(a) Show that a quadratic form looks like $Q\left(x\right) = B\left(x,x\right)$ where $B$ is symmetric.
(b) Show that $B\left(x,x\right) = 0$ for all $x \in V$ if and only if $B$ is skew-symmetric.
3. Let $B$ be a bilinear form on $\mathbb{R}^{n}$ or $\mathbb{C}^{n}$.
(a) Show that $B\left(x,y\right) = \left(Ax\vert y\right)$ for some matrix $A$.
(b) Show that $B$ is symmetric if and only if $A$ is symmetric.
(c) Show that $B$ is skew-symmetric if and only if $A$ is skew-symmetric.
(d) If $x = Cx^{\prime}$ is a change of basis, show that if $B$ corresponds to $A$ in the standard basis, then it corresponds to $C^{t}AC$ in the new basis.
4. Let $Q\left(x\right)$ be a quadratic form on $\mathbb{R}^{n}$. Show that there is an orthogonal basis where
$$Q\left(z\right) = -z_{1}^{2} - \cdots - z_{k}^{2} + z_{k+1}^{2} + \cdots + z_{l}^{2},$$
where $0 \leq k \leq l \leq n$.
Hint: Use the orthonormal basis that diagonalizes $Q$ and adjust the lengths of the basis vectors.
5. Let $B\left(x,y\right)$ be a skew-symmetric form on $\mathbb{R}^{n}$.
(a) If $B\left(x,y\right) = \left(Ax\vert y\right)$, where $A = \left[\begin{array}{cc} 0 & -\beta\\ \beta & 0\end{array}\right]$, $\beta \in \mathbb{R}$, then show that there is a basis for $\mathbb{R}^{2}$ where $B\left(x^{\prime},y^{\prime}\right)$ corresponds to $A^{\prime} = \left[\begin{array}{cc} 0 & -1\\ 1 & 0\end{array}\right]$.
(b) If $B\left(x,y\right)$ is a skew-symmetric bilinear form on $\mathbb{R}^{n}$, then there is a basis where $B\left(x^{\prime},y^{\prime}\right)$ corresponds to a block diagonal matrix consisting of $2\times 2$ blocks $\left[\begin{array}{cc} 0 & -1\\ 1 & 0\end{array}\right]$ followed by a zero block:
$$A^{\prime} = \left[\begin{array}{cccccc} 0 & -1 & & & & \\ 1 & 0 & & & & \\ & & \ddots & & & \\ & & & 0 & -1 & \\ & & & 1 & 0 & \\ & & & & & 0\end{array}\right].$$
6. Show that for a quadratic form $Q\left(z\right)$ on $\mathbb{C}^{n}$, we can always change coordinates to make it look like
$$Q^{\prime}\left(z^{\prime}\right) = \left(z_{1}^{\prime}\right)^{2} + \cdots + \left(z_{n}^{\prime}\right)^{2}.$$
7. Show that $Q\left(x,y\right) = ax^{2} + 2bxy + cy^{2}$ is elliptic when $ac - b^{2} > 0$, hyperbolic when $ac - b^{2} < 0$, and parabolic when $ac - b^{2} = 0$.
8. If $A$ is a symmetric real matrix, then show that $tI + A$ defines an elliptic quadratic form when $t$ is sufficiently large.
9.
Decide for each of the following matrices whether the corresponding quadratic form is elliptic, hyperbolic, or parabolic:
(a)
$$\left[\begin{array}{cccc} -7 & -2 & -3 & 0\\ -2 & -6 & -4 & 0\\ -3 & -4 & -5 & 2\\ 0 & 0 & 2 & -3\end{array}\right].$$
(b)
$$\left[\begin{array}{cccc} 7 & 3 & -3 & 4\\ 3 & 2 & -1 & 0\\ -3 & -1 & 5 & -2\\ 4 & 0 & -2 & 10\end{array}\right].$$
(c)
$$\left[\begin{array}{cccc} -8 & -3 & 0 & -2\\ -3 & -1 & -1 & 0\\ 0 & -1 & 1 & 3\\ -2 & 0 & 3 & -3\end{array}\right].$$
(d)
$$\left[\begin{array}{cccc} 15 & 2 & 3 & 4\\ 2 & 4 & 2 & 0\\ 3 & 2 & 3 & -2\\ 4 & 0 & -2 & 5\end{array}\right].$$
Peter Petersen, Undergraduate Texts in Mathematics: Linear Algebra, Springer, 2012. DOI: 10.1007/978-1-4614-3612-6_5. © Springer Science+Business Media New York 2012
# 5.
Determinants
Peter Petersen1
(1) Department of Mathematics, University of California, Los Angeles, CA, USA
Abstract
Before plunging into the theory of determinants, we are going to make an attempt at defining them in a more geometric fashion. This works well in low dimensions and will serve to motivate our more algebraic constructions in subsequent sections.
## 5.1 Geometric Approach
Before plunging into the theory of determinants, we are going to make an attempt at defining them in a more geometric fashion. This works well in low dimensions and will serve to motivate our more algebraic constructions in subsequent sections. From a geometric point of view, the determinant of a linear operator $L : V \rightarrow V$ is a scalar $\det\left(L\right)$ that measures how $L$ changes the volume of solids in $V$. To understand how this works, we obviously need to figure out how volumes are computed in $V$. In this section, we will study this problem in dimensions 1 and 2. In subsequent sections, we take a more axiomatic and algebraic approach, but the ideas come from what we present here.
Let $V$ be one-dimensional and assume that the scalar field is $\mathbb{R}$ so as to keep things as geometric as possible. We already know that $L : V \rightarrow V$ must be of the form $L\left(x\right) = \lambda x$ for some $\lambda \in \mathbb{R}$. This $\lambda$ clearly describes how $L$ changes the length of vectors, as $\left\Vert L\left(x\right)\right\Vert = \left\vert\lambda\right\vert\left\Vert x\right\Vert$. The important and surprising thing to note is that while we need an inner product to compute the length of vectors, it is not necessary to know the norm in order to compute how $L$ changes the length of vectors.
Let now $V$ be two-dimensional. If we have a real inner product, then we can talk about areas of simple geometric configurations. We shall work with parallelograms, as they are easy to define, one can easily find their area, and linear operators map parallelograms to parallelograms.
Given $x,y \in V$, the parallelogram $\pi\left(x,y\right)$ with sides $x$ and $y$ is defined by
$$\pi\left(x,y\right) = \left\{sx + ty : s,t \in \left[0,1\right]\right\}.$$
The area of $\pi\left(x,y\right)$ can be computed by the usual formula where one multiplies the base length with the height. If we take $x$ to be the base, then the height is the norm of the projection of $y$ onto the orthogonal complement of $x$. Thus, we get the formula (see also Fig. 5.1)
$$\mathrm{area}\left(\pi\left(x,y\right)\right) = \left\Vert x\right\Vert\left\Vert y - \mathrm{proj}_{x}\left(y\right)\right\Vert = \left\Vert x\right\Vert\left\Vert y - \frac{\left(y\vert x\right)x}{\left\Vert x\right\Vert^{2}}\right\Vert.$$
This expression does not appear to be symmetric in $x$ and $y$, but if we square it, we get
$$\begin{aligned}\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2} &= \left(x\vert x\right)\left(y - \mathrm{proj}_{x}\left(y\right)\,\big\vert\, y - \mathrm{proj}_{x}\left(y\right)\right)\\ &= \left(x\vert x\right)\left(\left(y\vert y\right) - 2\left(y\,\big\vert\,\mathrm{proj}_{x}\left(y\right)\right) + \left(\mathrm{proj}_{x}\left(y\right)\,\big\vert\,\mathrm{proj}_{x}\left(y\right)\right)\right)\\ &= \left(x\vert x\right)\left(\left(y\vert y\right) - 2\left(y\,\Big\vert\,\frac{\left(y\vert x\right)x}{\left\Vert x\right\Vert^{2}}\right) + \left(\frac{\left(y\vert x\right)x}{\left\Vert x\right\Vert^{2}}\,\Big\vert\,\frac{\left(y\vert x\right)x}{\left\Vert x\right\Vert^{2}}\right)\right)\\ &= \left(x\vert x\right)\left(y\vert y\right) - \left(x\vert y\right)^{2},\end{aligned}$$
which is symmetric in $x$ and $y$. Now assume that
$$x^{\prime} = \alpha x + \beta y,\qquad y^{\prime} = \gamma x + \delta y,$$
or
$$\left[\begin{array}{cc} x^{\prime} & y^{\prime}\end{array}\right] = \left[\begin{array}{cc} x & y\end{array}\right]\left[\begin{array}{cc} \alpha & \gamma\\ \beta & \delta\end{array}\right];$$
then, we see that
$$\begin{aligned}\left(\mathrm{area}\left(\pi\left(x^{\prime},y^{\prime}\right)\right)\right)^{2} &= \left(x^{\prime}\vert x^{\prime}\right)\left(y^{\prime}\vert y^{\prime}\right) - \left(x^{\prime}\vert y^{\prime}\right)^{2}\\ &= \left(\alpha x + \beta y\vert\alpha x + \beta y\right)\left(\gamma x + \delta y\vert\gamma x + \delta y\right) - \left(\alpha x + \beta y\vert\gamma x + \delta y\right)^{2}\\ &= \left(\alpha^{2}\left(x\vert x\right) + 2\alpha\beta\left(x\vert y\right) + \beta^{2}\left(y\vert y\right)\right)\left(\gamma^{2}\left(x\vert x\right) + 2\gamma\delta\left(x\vert y\right) + \delta^{2}\left(y\vert y\right)\right)\\ &\quad - \left(\alpha\gamma\left(x\vert x\right) + \left(\alpha\delta + \beta\gamma\right)\left(x\vert y\right) + \beta\delta\left(y\vert y\right)\right)^{2}\\ &= \left(\alpha^{2}\delta^{2} + \beta^{2}\gamma^{2} - 2\alpha\beta\gamma\delta\right)\left(\left(x\vert x\right)\left(y\vert y\right) - \left(x\vert y\right)^{2}\right)\\ &= \left(\alpha\delta - \beta\gamma\right)^{2}\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2}.\end{aligned}$$
This tells us several things. First, if we know how to compute the area of just one parallelogram, then we can use linear algebra to compute the area of any other parallelogram by simply expanding the base vectors for the new parallelogram in terms of the base vectors of the given parallelogram. This has the surprising consequence that the ratio of the areas of two parallelograms does not depend upon the inner product! With this in mind, we can then define the determinant of a linear operator $L : V \rightarrow V$ so that
$$\left(\det\left(L\right)\right)^{2} = \frac{\left(\mathrm{area}\left(\pi\left(L\left(x\right),L\left(y\right)\right)\right)\right)^{2}}{\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2}}.$$
To see that this does not depend on $x$ and $y$, we choose $x^{\prime}$ and $y^{\prime}$ as above and note that
$$\left[\begin{array}{cc} L\left(x^{\prime}\right) & L\left(y^{\prime}\right)\end{array}\right] = \left[\begin{array}{cc} L\left(x\right) & L\left(y\right)\end{array}\right]\left[\begin{array}{cc} \alpha & \gamma\\ \beta & \delta\end{array}\right]$$
and
$$\frac{\left(\mathrm{area}\left(\pi\left(L\left(x^{\prime}\right),L\left(y^{\prime}\right)\right)\right)\right)^{2}}{\left(\mathrm{area}\left(\pi\left(x^{\prime},y^{\prime}\right)\right)\right)^{2}} = \frac{\left(\alpha\delta - \beta\gamma\right)^{2}\left(\mathrm{area}\left(\pi\left(L\left(x\right),L\left(y\right)\right)\right)\right)^{2}}{\left(\alpha\delta - \beta\gamma\right)^{2}\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2}} = \frac{\left(\mathrm{area}\left(\pi\left(L\left(x\right),L\left(y\right)\right)\right)\right)^{2}}{\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2}}.$$
Thus, $\left(\det\left(L\right)\right)^{2}$ depends neither on the inner product that is used to compute the area nor on the vectors $x$ and $y$. Finally, we can refine the definition so that
$$\det\left(L\right) = \left\vert\begin{array}{cc} \alpha & \gamma\\ \beta & \delta\end{array}\right\vert = \alpha\delta - \beta\gamma,\text{ where }\left[\begin{array}{cc} L\left(x\right) & L\left(y\right)\end{array}\right] = \left[\begin{array}{cc} x & y\end{array}\right]\left[\begin{array}{cc} \alpha & \gamma\\ \beta & \delta\end{array}\right].$$
This introduces a sign in the definition, which one can also easily check does not depend on the choice of $x$ and $y$.
Fig. 5.1 Area of a parallelogram
This approach generalizes to higher dimensions, but it also runs into a little trouble. The keen observer might have noticed that the formula for the area is in fact a determinant:
$$\left(\mathrm{area}\left(\pi\left(x,y\right)\right)\right)^{2} = \left(x\vert x\right)\left(y\vert y\right) - \left(x\vert y\right)^{2} = \left\vert\begin{array}{cc} \left(x\vert x\right) & \left(x\vert y\right)\\ \left(x\vert y\right) & \left(y\vert y\right)\end{array}\right\vert.$$
When passing to higher dimensions, it will become increasingly harder to justify how the volume of a parallelepiped depends on the base vectors without using a determinant.
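The squared-area formula and the invariance of the ratio can be checked numerically. A small sketch of ours (the helper `area_sq` is not from the text); note that the check itself leans on `np.linalg.det`, which is exactly the circularity discussed next:

```python
import numpy as np

def area_sq(x, y):
    """Squared area of the parallelogram pi(x, y) via the Gram
    determinant: (x|x)(y|y) - (x|y)^2."""
    return (x @ x) * (y @ y) - (x @ y) ** 2

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
L = np.array([[3.0, 1.0],
              [2.0, 5.0]])

# The ratio of areas under L equals (det L)^2 ...
ratio = area_sq(L @ x, L @ y) / area_sq(x, y)
assert np.isclose(ratio, np.linalg.det(L) ** 2)

# ... and is independent of the chosen parallelogram.
u, v = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose(area_sq(L @ u, L @ v) / area_sq(u, v), ratio)
```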
Thus, we encounter a bit of a vicious circle when trying to define determinants in this fashion. The other problem is that we used only real scalars. One can modify the approach to also work for complex numbers, but beyond that, there is not much hope. The approach we take below is mirrored on the constructions here but works for general scalar fields.
## 5.2 Algebraic Approach
As was done in the previous section, we are going to separate the idea of volumes from that of determinants, the latter being exclusively for linear operators and a quantity that is independent of other structures on the vector space. Since volumes are used to define determinants, we start by defining what a volume form is. Unlike the more motivational approach we took in the previous section, we will take a more strictly axiomatic approach. Let $V$ be an $n$-dimensional vector space over $\mathbb{F}$.
Definition 5.2.1. A volume form
$$\mathrm{vol} : \underbrace{V \times \cdots \times V}_{n\ \mathrm{times}} \rightarrow \mathbb{F}$$
is a multilinear map, i.e., it is linear in each variable if the others are fixed, which is also alternating. More precisely, if $x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n} \in V$, then
$$x \mapsto \mathrm{vol}\left(x_{1},\ldots,x_{i-1},x,x_{i+1},\ldots,x_{n}\right)$$
is linear, and for $i < j$, we have the alternating property when $x_{i}$ and $x_{j}$ are transposed:
$$\mathrm{vol}\left(\ldots,x_{i},\ldots,x_{j},\ldots\right) = -\mathrm{vol}\left(\ldots,x_{j},\ldots,x_{i},\ldots\right).$$
In Sect. 5.4 below, we shall show that such volume forms always exist. In this section, we are going to establish some important properties and also give some methods for computing volumes.
Proposition 5.2.2.
Let ![ $$\\mathrm{vol} : V \\times \\cdots\\times V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq4.gif) be a volume form on an n-dimensional vector space over ![ $$\\mathbb{F}.$$ ](A81414_1_En_5_Chapter_IEq5.gif) Then (1) vol(...,x,...,x,...) = 0. (2) vol(x 1 ,...,x i−1 ,x i + y,x i+1 ,...,x n ) = vol(x 1 ,...,x i−1 ,x i ,x i+1 ,...,x n ) if y = ∑ k≠i α k x k is a linear combination of x 1 ,...,x i−1 ,x i+1 ,...,x n . (3) vol(x 1 ,...,x n ) = 0 if x 1 ,...,x n are linearly dependent. (4) If vol(x 1 ,...,x n ) ≠ 0, then x 1 ,...,x n form a basis for V. Proof. (1) The alternating property tells us that ![ $$\\mathrm{vol}\\left \(\\ldots,x,\\ldots,x,\\ldots \\right \) = -\\mathrm{vol}\\left \(\\ldots,x,\\ldots,x,\\ldots \\right \)$$ ](A81414_1_En_5_Chapter_Equh.gif) if we switch the two copies of x. Thus, vol(..., x,..., x,...) = 0. (2) Let y = ∑ k≠i α k x k and use linearity to conclude ![ $$\\begin{array}{rcl} & \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},{x}_{i} + y,{x}_{i+1},\\ldots,{x}_{n}\\right \) & \\\\ & \\quad =\\mathrm{ vol}\\left \({x}_{1},\\ldots,{x}_{i-1},{x}_{i},{x}_{i+1},\\ldots,{x}_{n}\\right \) & \\\\ & \\qquad +{ \\sum\\nolimits }_{k\\neq i}{\\alpha }_{k}\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},{x}_{k},{x}_{i+1},\\ldots,{x}_{n}\\right \).& \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ8.gif) Since x k is always equal to one of x 1,..., x i − 1, x i + 1,..., x n , we see that ![ $${\\alpha }_{k}\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},{x}_{k},{x}_{i+1},\\ldots,{x}_{n}\\right \) = 0.$$ ](A81414_1_En_5_Chapter_Equi.gif) This implies the claim. (3) If x 1 = 0, we are finished. Otherwise, Lemma 1.12.3 shows that there is k ≥ 1 such that x k = ∑ i=1 k−1 α i x i . Then, (2) implies that ![ $$\\begin{array}{rcl} \\mathrm{vol}\\left \({x}_{1},\\ldots,0 + {x}_{k},\\ldots,{x}_{n}\\right \)& =& \\mathrm{vol}\\left \({x}_{1},\\ldots,0,\\ldots,{x}_{n}\\right \) \\\\ & =&0.
\\end{array}$$ ](A81414_1_En_5_Chapter_Equ9.gif) (4) From (3), we have that x 1,..., x n are linearly independent. Since V has dimension n, they must also form a basis. Note that in the above proof, we had to use that 1≠ − 1 in the scalar field. This is certainly true for the fields we work with. When working with more general fields such as ![ $$\\mathbb{F} = \\left \\{0,1\\right \\}$$ ](A81414_1_En_5_Chapter_IEq6.gif), we need to modify the alternating property. Instead, we assume that the volume form volx 1,..., x n satisfies volx 1,..., x n = 0 whenever x i = x j . This in turn implies the alternating property. To prove this note that if x = x i + x j , then ![ $$\\begin{array}{rcl} 0& =& \\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {x},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {x},\\ldots \\right \) \\\\ & =& \\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{i} + {x}_{j}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{i} + {x}_{j}},\\ldots \\right \) \\\\ & =& \\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots \\right \) \\\\ & & +\\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots \\right \) \\\\ & & +\\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots \\right \) \\\\ & & +\\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots \\right \) \\\\ & =& \\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots, \\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots \\right \) \\\\ & & +\\mathrm{vol}\\left \(\\ldots, \\frac{i\\mathrm{th}\\ \\mathrm{place}} {{x}_{i}},\\ldots, 
\\frac{j\\mathrm{th}\\ \\mathrm{place}} {{x}_{j}},\\ldots \\right \), \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ10.gif) which shows that the form is alternating. Theorem 5.2.3. (Uniqueness of Volume Forms) Let ![ $$\\mathrm{{vol}}_{1},\\mathrm{{vol}}_{2} : V \\times \\cdots\\times V \\rightarrow\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq7.gif) be two volume forms on an n-dimensional vector space over ![ $$\\mathbb{F}.$$ ](A81414_1_En_5_Chapter_IEq8.gif) If vol 2 is nontrivial, then vol 1 = λvol 2 for some ![ $$\\lambda\\in\\mathbb{F}.$$ ](A81414_1_En_5_Chapter_IEq9.gif) Proof. If we assume that vol2 is nontrivial, then we can find x 1,..., x n ∈ V so that vol2 x 1,..., x n ≠0. Then, define λ so that ![ $$\\mathrm{{vol}}_{1}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) = \\lambda \\mathrm{{vol}}_{2}\\left \({x}_{1},\\ldots,{x}_{n}\\right \).$$ ](A81414_1_En_5_Chapter_Equj.gif) If z 1,..., z n ∈ V, then we can write ![ $$\\begin{array}{rcl} \\left \[\\begin{array}{ccc} {z}_{1} & \\cdots &{z}_{n} \\end{array} \\right \]& =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]A \\\\ & =& \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]\\left \[\\begin{array}{ccc} {\\alpha }_{11} & \\cdots & {\\alpha }_{1n}\\\\ \\vdots & \\ddots & \\vdots \\\\ {\\alpha }_{n1} & \\cdots &{\\alpha }_{nn} \\end{array} \\right \].\\end{array}$$ ](A81414_1_En_5_Chapter_Equ11.gif) For any volume form vol, we have ![ $$\\begin{array}{rcl} \\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{n}\\right \)& =& \\mathrm{vol}\\left \({\\sum\\nolimits }_{{i}_{1}=1}^{n}{x}_{{ i}_{1}}{\\alpha }_{{i}_{1}1},\\ldots,{\\sum\\nolimits }_{{i}_{n}=1}^{n}{x}_{{ i}_{n}}{\\alpha }_{{i}_{n}n}\\right \) \\\\ & =& {\\sum\\nolimits }_{{i}_{1}=1}^{n}{\\alpha }_{{ i}_{1}1}\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{\\sum\\nolimits }_{{i}_{n}=1}^{n}{\\alpha }_{{ i}_{n}n}{x}_{{i}_{n}}\\right \) \\\\ & & \\vdots \\\\ & =& {\\sum\\nolimits 
}_{{i}_{1},\\ldots,{i}_{n}=1}^{n}{\\alpha }_{{ i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ12.gif) The first thing we should note is that ![ $$\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \) = 0$$ ](A81414_1_En_5_Chapter_IEq10.gif) if any two of the indices i 1,..., i n are equal. When doing the sum ![ $${\\sum\\nolimits }_{{i}_{1},\\ldots,{i}_{n}=1}^{n}{\\alpha }_{{ i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \)\\!,$$ ](A81414_1_En_5_Chapter_Equk.gif) we can therefore assume that all of the indices i 1,..., i n are different. This means that by switching indices around, we have ![ $$\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \) = \\pm \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\!,$$ ](A81414_1_En_5_Chapter_Equl.gif) where the sign ± depends on the number of switches we have to make in order to rearrange i 1,..., i n to get back to the standard ordering 1,..., n. Since this number of switches does not depend on vol but only on the indices, we obtain the desired result: ![ $$\\begin{array}{rcl} \\mathrm{{vol}}_{1}\\left \({z}_{1},\\ldots,{z}_{n}\\right \)& =& {\\sum\\nolimits }_{{i}_{1},\\ldots,{i}_{n}=1}^{n} \\pm{\\alpha }_{{ i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\mathrm{{vol}}_{1}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) \\\\ & =& {\\sum\\nolimits }_{{i}_{1},\\ldots,{i}_{n}=1}^{n} \\pm{\\alpha }_{{ i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\lambda \\mathrm{{vol}}_{2}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) \\\\ & =& \\lambda {\\sum\\nolimits }_{{i}_{1},\\ldots,{i}_{n}=1}^{n} \\pm{\\alpha }_{{ i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\mathrm{{vol}}_{2}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) \\\\ & =& \\lambda \\mathrm{{vol}}_{2}\\left \({z}_{1},\\ldots,{z}_{n}\\right \). 
\\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ13.gif) From the proof of this theorem, we also obtain one of the crucial results about volumes that we mentioned in the previous section. Corollary 5.2.4. If x 1 ,...,x n ∈ V is a basis for V, then any volume form vol is completely determined by its value vol(x 1 ,...,x n ). This corollary could be used to create volume forms by simply defining ![ $$\\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{n}\\right \) ={ \\sum\\nolimits }_{{i}_{1},\\ldots,{i}_{n}} \\pm{\\alpha }_{{i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \),$$ ](A81414_1_En_5_Chapter_Equm.gif) where the indices i 1,..., i n range over the reorderings (permutations) of 1,..., n. For that to work, we would have to show that the sign ± is well defined in the sense that it does not depend on the particular way in which we reorder i 1,..., i n to get 1,..., n. While this is certainly true, we shall not prove this combinatorial fact here. Instead, we observe that if we have a volume form that is nonzero on x 1,..., x n , then the fact that ![ $$\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \)$$ ](A81414_1_En_5_Chapter_IEq11.gif) is a multiple of vol(x 1 ,..., x n ) tells us that this sign is well defined and so does not depend on the way in which 1,..., n was rearranged to get i 1,..., i n . We use the notation sign(i 1 ,..., i n ) for the sign we get from ![ $$\\mathrm{vol}\\left \({x}_{{i}_{1}},\\ldots,{x}_{{i}_{n}}\\right \) =\\mathrm{ sign}\\left \({i}_{1},\\ldots,{i}_{n}\\right \)\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \).$$ ](A81414_1_En_5_Chapter_Equn.gif) Finally, we need to check what happens when we restrict volume forms to subspaces. To this end, let vol be a nontrivial volume form on V and M ⊂ V a k-dimensional subspace of V.
If we fix vectors y 1,..., y n − k ∈ V, then we can define a form on M by ![ $$\\mathrm{{vol}}_{M}\\left \({x}_{1},\\ldots,{x}_{k}\\right \) =\\mathrm{ vol}\\left \({x}_{1},\\ldots,{x}_{k},{y}_{1},\\ldots,{y}_{n-k}\\right \),$$ ](A81414_1_En_5_Chapter_Equo.gif) where x 1,..., x k ∈ M. It is clear that vol M is linear in each variable and also alternating since vol has those properties. Moreover, if y 1,..., y n − k form a basis for a complement to M in V, then x 1,..., x k , y 1,..., y n − k will be a basis for V as long as x 1,..., x k is a basis for M. In this case, vol M becomes a nontrivial volume form as well. If, however, some nontrivial linear combination of y 1,..., y n − k lies in M, then it follows that vol M = 0.

### 5.2.1 Exercises

1. Let V be a three-dimensional real inner product space and vol a volume form so that vol(e 1 , e 2 , e 3 ) = 1 for some orthonormal basis. For x, y ∈ V, define x × y as the unique vector such that ![ $$\\mathrm{vol}\\left \(x,y,z\\right \) =\\mathrm{ vol}\\left \(z,x,y\\right \) = \\left \(z\\vert x \\times y\\right \).$$ ](A81414_1_En_5_Chapter_Equp.gif) (a) Show that x × y = − y × x and that x -> x × y is linear.
(b) Show that ![ $$\\left \({x}_{1} \\times{y}_{1}\\vert {x}_{2} \\times{y}_{2}\\right \) = \\left \({x}_{1}\\vert {x}_{2}\\right \)\\left \({y}_{1}\\vert {y}_{2}\\right \) -\\left \({x}_{1}\\vert {y}_{2}\\right \)\\left \({x}_{2}\\vert {y}_{1}\\right \).$$ ](A81414_1_En_5_Chapter_Equq.gif) (c) Show that ![ $$\\left \\Vert x \\times y\\right \\Vert = \\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert \\left \\vert \\sin \\theta \\right \\vert,$$ ](A81414_1_En_5_Chapter_Equr.gif) where ![ $$\\cos \\theta= \\frac{\\left \(x\\vert y\\right \)} {\\left \\Vert x\\right \\Vert \\left \\Vert y\\right \\Vert }.$$ ](A81414_1_En_5_Chapter_Equs.gif) (d) Show that ![ $$x \\times \\left \(y \\times z\\right \) = \\left \(x\\vert z\\right \)y -\\left \(x\\vert y\\right \)z.$$ ](A81414_1_En_5_Chapter_Equt.gif) (e) Show that the Jacobi identity holds: ![ $$x \\times \\left \(y \\times z\\right \) + z \\times \\left \(x \\times y\\right \) + y \\times \\left \(z \\times x\\right \) = 0.$$ ](A81414_1_En_5_Chapter_Equu.gif) 2.
Let x 1,..., x n ![ $$\\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_5_Chapter_IEq12.gif) and do a Gram-Schmidt procedure so as to obtain a QR decomposition ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \] = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n}\\end{array} \\right \]\\left \[\\begin{array}{ccc} {r}_{11} & \\cdots & {r}_{1n}\\\\ & \\ddots & \\vdots \\\\ 0 & &{r}_{nn}\\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equv.gif) Show that ![ $$\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) = {r}_{11}\\cdots {r}_{nn}\\mathrm{vol}\\left \({e}_{1},\\ldots,{e}_{n}\\right \),$$ ](A81414_1_En_5_Chapter_Equw.gif) where ![ $$\\begin{array}{rcl} {r}_{11}& =& \\left \\Vert {x}_{1}\\right \\Vert, \\\\ {r}_{22}& =& \\left \\Vert {x}_{2} -\\mathrm{{ proj}}_{{x}_{1}}\\left \({x}_{2}\\right \)\\right \\Vert, \\\\ & & \\vdots \\\\ {r}_{nn}& =& \\left \\Vert {x}_{n} -\\mathrm{{ proj}}_{{M}_{n-1}}\\left \({x}_{n}\\right \)\\right \\Vert.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ14.gif) and explain why r 11⋯r nn gives the geometrically defined volume that comes from the formula where one multiplies height and base "area" and in turn uses that same principle to compute base "area". 3. Show that ![ $$\\mathrm{vol}\\left \(\\left \[\\begin{array}{c} \\alpha \\\\ \\beta \\end{array} \\right \],\\left \[\\begin{array}{c} \\gamma \\\\ \\delta \\end{array} \\right \]\\right \) = \\alpha \\delta -\\gamma \\beta $$ ](A81414_1_En_5_Chapter_Equx.gif) defines a volume form on ![ $${\\mathbb{F}}^{2}$$ ](A81414_1_En_5_Chapter_IEq13.gif) such that vole 1, e 2 = 1. 4. 
Show that we can define a volume form on ![ $${\\mathbb{F}}^{3}$$ ](A81414_1_En_5_Chapter_IEq14.gif) by ![ $$\\begin{array}{rcl} & \\mathrm{vol}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{11} \\\\ {\\alpha }_{21} \\\\ {\\alpha }_{31}\\end{array} \\right \],\\left \[\\begin{array}{c} {\\alpha }_{12} \\\\ {\\alpha }_{22} \\\\ {\\alpha }_{32}\\end{array} \\right \],\\left \[\\begin{array}{c} {\\alpha }_{13} \\\\ {\\alpha }_{23} \\\\ {\\alpha }_{33}\\end{array} \\right \]\\right \)& \\\\ & \\quad = {\\alpha }_{11}\\mathrm{vol}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{22} \\\\ {\\alpha }_{32}\\end{array} \\right \],\\left \[\\begin{array}{c} {\\alpha }_{23} \\\\ {\\alpha }_{33}\\end{array} \\right \]\\right \) & \\\\ & \\qquad - {\\alpha }_{12}\\mathrm{vol}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{21} \\\\ {\\alpha }_{31}\\end{array} \\right \],\\left \[\\begin{array}{c} {\\alpha }_{23} \\\\ {\\alpha }_{33}\\end{array} \\right \]\\right \) & \\\\ & \\qquad + {\\alpha }_{13}\\mathrm{vol}\\left \(\\left \[\\begin{array}{c} {\\alpha }_{21} \\\\ {\\alpha }_{31}\\end{array} \\right \],\\left \[\\begin{array}{c} {\\alpha }_{22} \\\\ {\\alpha }_{32}\\end{array} \\right \]\\right \) & \\\\ & \\quad = {\\alpha }_{11}{\\alpha }_{22}{\\alpha }_{33} + {\\alpha }_{12}{\\alpha }_{23}{\\alpha }_{31} + {\\alpha }_{13}{\\alpha }_{32}{\\alpha }_{21} & \\\\ & \\qquad - {\\alpha }_{11}{\\alpha }_{23}{\\alpha }_{32} - {\\alpha }_{33}{\\alpha }_{12}{\\alpha }_{21} - {\\alpha }_{22}{\\alpha }_{13}{\\alpha }_{31}. & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ15.gif) 5. 
Assume that vole 1,..., e 4 = 1 for the standard basis in ![ $${\\mathbb{R}}^{4}.$$ ](A81414_1_En_5_Chapter_IEq15.gif) Using the permutation formula for the volume form, determine with a minimum of calculations the sign for the volume of the columns in each of the matrices: (a) ![ $$\\left \[\\begin{array}{cccc} 1000& - 1 & 2 & - 1\\\\ 1 &1000 & 1 & 2 \\\\ 3 & - 2 & 1 &1000\\\\ 2 & - 1 &1000 & 2 \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equy.gif) (b) ![ $$\\left \[\\begin{array}{cccc} 2 &1000& 2 & - 1\\\\ 1 & - 1 &1000 & 2 \\\\ 3 & - 2 & 1 &1000\\\\ 1000 & - 1 & 1 & 2 \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equz.gif) (c) ![ $$\\left \[\\begin{array}{cccc} 2 & - 2 & 2 &1000\\\\ 1 & - 1 &1000 & 2 \\\\ 3 &1000& 1 & - 1\\\\ 1000 & - 1 & 1 & 2 \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equaa.gif) (d) ![ $$\\left \[\\begin{array}{cccc} 2 & - 2 &1000& - 1\\\\ 1 &1000 & 2 & 2 \\\\ 3 & - 1 & 1 &1000\\\\ 1000 & - 1 & 1 & 2 \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equab.gif) ## 5.3 How to Calculate Volumes Before establishing the existence of the volume form, we shall try to use what we learned in the previous section in a more concrete fashion to calculate volz 1,..., z n . Assume that volz 1,..., z n is a volume form on V and that there is a basis x 1,..., x n for V where volx 1,..., x n is known. First, observe that when ![ $$\\left \[\\begin{array}{ccc} {z}_{1} & \\cdots &{z}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]A$$ ](A81414_1_En_5_Chapter_Equac.gif) and A = α ij is an upper triangular matrix, then ![ $${\\alpha }_{{i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n} = 0$$ ](A81414_1_En_5_Chapter_IEq16.gif) unless i 1 ≤ 1,..., i n ≤ n. Since we also need all the indices i 1,..., i n to be distinct, this implies that i 1 = 1,..., i n = n. 
Thus, we obtain the simple relationship ![ $$\\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{n}\\right \) = {\\alpha }_{11}\\cdots {\\alpha }_{nn}\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \).$$ ](A81414_1_En_5_Chapter_Equad.gif) While we cannot expect this to happen too often, it is possible to change z 1,..., z n to vectors y 1,..., y n in such a way that ![ $$\\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{n}\\right \) = \\pm \\mathrm{vol}\\left \({y}_{1},\\ldots,{y}_{n}\\right \)$$ ](A81414_1_En_5_Chapter_Equae.gif) and ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]A,$$ ](A81414_1_En_5_Chapter_Equaf.gif) where A is upper triangular. To construct the vectors y i , we simply use elementary column operations (see also Sect. 1.13 and Exercise 6 in that section). This works in almost the same way as Gauss elimination but with the twist that we are multiplying by matrices on the right. The allowable operations are (1) Interchanging vectors z k and z l . (2) Multiplying z l by ![ $$\\alpha\\in\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq17.gif) and adding it to z k . The first operation changes the volume by a sign, while the second leaves the volume unchanged. So if ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_IEq18.gif) is obtained from ![ $$\\left \[\\begin{array}{ccc} {z}_{1} & \\cdots &{z}_{n} \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_IEq19.gif) through these operations, then we have ![ $$\\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{n}\\right \) = \\pm \\mathrm{vol}\\left \({y}_{1},\\ldots,{y}_{n}\\right \).$$ ](A81414_1_En_5_Chapter_Equag.gif) The minus sign occurs exactly when we have done an odd number of interchanges.
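These column operations are straightforward to mechanize. The sketch below (my own code, not from the text; the pivot tolerance is an arbitrary choice) reduces the columns to triangular form using only the two allowed operations, tracks the sign coming from interchanges, and reproduces the value −16 for the matrix of Example 5.3.2 below:

```python
import numpy as np

def vol_by_column_ops(A):
    """Volume of the columns of A via elementary column operations:
    interchanges (each flips the sign) and adding a multiple of one
    column to another (leaves the volume unchanged)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for i in range(n):
        # Find a column with a usable pivot in row i and swap it into place
        p = next((j for j in range(i, n) if abs(A[i, j]) > 1e-12), None)
        if p is None:
            return 0.0              # columns are linearly dependent
        if p != i:
            A[:, [i, p]] = A[:, [p, i]]
            sign = -sign            # an interchange changes the volume by a sign
        # Clear the rest of row i by adding multiples of column i: volume unchanged
        for j in range(i + 1, n):
            A[:, j] -= (A[i, j] / A[i, i]) * A[:, i]
    return sign * np.prod(np.diag(A))   # vol = +/- alpha_11 * ... * alpha_nn

Z = np.array([[ 3,  0, 1,  3],
              [ 1, -1, 2,  0],
              [-1,  1, 0, -2],
              [-3,  1, 1, -3]])
print(round(vol_by_column_ops(Z)))  # -16
```

The routine also returns 0 when the columns are linearly dependent, matching Proposition 5.2.2 (3).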
We now need to explain why we can obtain ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_IEq20.gif)such that ![ $$\\left \[\\begin{array}{ccc} {y}_{1} & \\cdots &{y}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \]\\left \[\\begin{array}{cccc} {\\alpha }_{11} & {\\alpha }_{12} & \\cdots & {\\alpha }_{1n} \\\\ 0 &{\\alpha }_{22} & & {\\alpha }_{2n}\\\\ \\vdots & & \\ddots & \\vdots \\\\ 0 & 0 &\\cdots &{\\alpha }_{nn} \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equah.gif) One issue to note is that the process might break down if z 1,..., z n are linearly dependent. In that case, we have vol = 0. Instead of describing the procedure abstractly, let us see how it works in practice. In the case of ![ $${\\mathbb{F}}^{n}$$ ](A81414_1_En_5_Chapter_IEq21.gif), we assume that we are using a volume form such that vole 1,..., e n = 1 for the canonical basis. Since that uniquely defines the volume form, we introduce some special notation for it: ![ $$\\left \\vert A\\right \\vert = \\left \\vert \\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \\vert =\\mathrm{ vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \),$$ ](A81414_1_En_5_Chapter_Equai.gif) where ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq22.gif) is the matrix such that ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} {e}_{1} & \\cdots &{e}_{n} \\end{array} \\right \]A$$ ](A81414_1_En_5_Chapter_Equaj.gif) Example 5.3.1. 
Let ![ $$\\left \[\\begin{array}{ccc} {z}_{1} & {z}_{2} & {z}_{3} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} 0 &1&0\\\\ 0 &0 &3 \\\\ - 2&0&0 \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equak.gif) We can rearrange this into ![ $$\\left \[\\begin{array}{ccc} {z}_{2} & {z}_{3} & {z}_{1} \\end{array} \\right \] = \\left \[\\begin{array}{ccc} 1&0& 0\\\\ 0 &3 & 0 \\\\ 0&0& - 2 \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equal.gif) This takes two transpositions. Thus, ![ $$\\begin{array}{rcl} \\mathrm{vol}\\left \({z}_{1},{z}_{2},{z}_{3}\\right \)& =& \\mathrm{vol}\\left \({z}_{2},{z}_{3},{z}_{1}\\right \) \\\\ & =& 1 \\cdot3 \\cdot \\left \(-2\\right \)\\mathrm{vol}\\left \({e}_{1},{e}_{2},{e}_{3}\\right \) \\\\ & =& -6\\mathrm{vol}\\left \({e}_{1},{e}_{2},{e}_{3}\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ16.gif) Example 5.3.2. Let ![ $$\\left \[\\begin{array}{cccc} {z}_{1} & {z}_{2} & {z}_{3} & {z}_{4} \\end{array} \\right \] = \\left \[\\begin{array}{cccc} 3 & 0 &1& 3\\\\ 1 & - 1 &2 & 0 \\\\ - 1& 1 &0& - 2\\\\ - 3 & 1 &1 & - 3 \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equam.gif) ![ $$\\begin{array}{rcl} & \\left \\vert \\begin{array}{cccc} 3 & 0 &1& 3\\\\ 1 & - 1 &2 & 0 \\\\ - 1& 1 &0& - 2\\\\ - 3 & 1 &1 & - 3 \\end{array} \\right \\vert & \\\\ & \\quad = \\left \\vert \\begin{array}{cccc} 0& 1 & 2 & 3\\\\ 1 & - 1 & 2 & 0 \\\\ 1& \\frac{1} {3} & -\\frac{2} {3} & - 2\\\\ 0 & 0 & 0 & - 3 \\end{array} \\right \\vert \\text{ after eliminating entries in row 4,}& \\\\ & \\quad = \\left \\vert \\begin{array}{cccc} 3&2& 2 & 3\\\\ 4 &0 & 2 & 0 \\\\ 0&0& -\\frac{2} {3} & - 2\\\\ 0 &0 & 0 & - 3 \\end{array} \\right \\vert \\text{ after eliminating entries in row 3,}& \\\\ & \\quad = \\left \\vert \\begin{array}{cccc} 2&3& 2 & 3\\\\ 0 &4 & 2 & 0 \\\\ 0&0& -\\frac{2} {3} & - 2\\\\ 0 &0 & 0 & - 3 \\end{array} \\right \\vert \\text{ after switching column 1 and 2.} & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ17.gif) Thus, we get ![ 
$$\\begin{array}{rcl} \\mathrm{vol}\\left \({z}_{1},\\ldots,{z}_{4}\\right \)& =& -2 \\cdot4 \\cdot \\left \(-\\frac{2} {3}\\right \) \\cdot \\left \(-3\\right \)\\mathrm{vol}\\left \({e}_{1},\\ldots,{e}_{4}\\right \) \\\\ & =& -16\\mathrm{vol}\\left \({e}_{1},\\ldots,{e}_{4}\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ18.gif) Example 5.3.3. Let us try to find ![ $$\\left \\vert \\begin{array}{ccccc} 1&1&1&\\cdots & 1\\\\ 1 &2 &2 &\\cdots& 2 \\\\ 1&2&3&\\cdots & 3\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &2 &3 &\\cdots&n \\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equan.gif) Instead of starting with the last column vector, we are going to start with the first. This will lead us to a lower triangular matrix, but otherwise, we are using the same principles. ![ $$\\begin{array}{rcl} \\left \\vert \\begin{array}{ccccc} 1&1&1&\\cdots & 1\\\\ 1 &2 &2 &\\cdots& 2 \\\\ 1&2&3&\\cdots & 3\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &2 &3 &\\cdots&n \\end{array} \\right \\vert & =& \\left \\vert \\begin{array}{ccccc} 1&0&0&\\cdots & 0\\\\ 1 &1 &1 &\\cdots& 1 \\\\ 1&1&2&\\cdots & 2\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &1 &2 &\\cdots&n - 1 \\end{array} \\right \\vert\\\\ & =& \\left \\vert \\begin{array}{ccccc} 1&0&0&\\cdots & 0\\\\ 1 &1 &0 &\\cdots& 0 \\\\ 1&1&1&\\cdots & 1\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &1 &1 &\\cdots&n - 2 \\end{array} \\right \\vert\\\\ & &\\vdots \\\\ & =& \\left \\vert \\begin{array}{ccccc} 1&0&0&\\cdots &0\\\\ 1 &1 &0 &\\cdots&0 \\\\ 1&1&1&\\cdots &0\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &1 &1 &\\cdots&1 \\end{array} \\right \\vert\\\\ & =&1. \\end{array}$$ ](A81414_1_En_5_Chapter_Equ19.gif) ### 5.3.1 Exercises 1. The following problem was first considered by Leibniz and appears to be the first use of determinants. 
Let ![ $$A \\in \\mathrm{{ Mat}}_{\\left \(n+1\\right \)\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq23.gif) and ![ $$b \\in{\\mathbb{F}}^{n+1}.$$ ](A81414_1_En_5_Chapter_IEq24.gif) Show that: (a) If there is a solution to the overdetermined system Ax = b, ![ $$x \\in{\\mathbb{F}}^{n},$$ ](A81414_1_En_5_Chapter_IEq25.gif) then the augmented matrix satisfies ∣A b∣ = 0. (b) Conversely, if A has rank A = n and ∣A b∣ = 0, then there is a solution to Ax = b, ![ $$x \\in{\\mathbb{F}}^{n}.$$ ](A81414_1_En_5_Chapter_IEq26.gif) 2. Compute ![ $$\\left \\vert \\begin{array}{ccccc} 1& 1 &1&\\cdots &1\\\\ 0 & 1 &1 &\\cdots&1 \\\\ 1& 0 &1&\\cdots &1\\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\ 1 &\\cdots&1 & 0 &1\\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equao.gif) 3. Let ![ $${x}_{1},\\ldots,{x}_{k} \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_5_Chapter_IEq27.gif) and assume that vol(e 1 ,..., e n ) = 1. Show that ![ $$\\left \\vert G\\left \({x}_{1},\\ldots,{x}_{k}\\right \)\\right \\vert \\leq {\\left \\Vert {x}_{1}\\right \\Vert }^{2}\\cdots {\\left \\Vert {x}_{ k}\\right \\Vert }^{2},$$ ](A81414_1_En_5_Chapter_Equap.gif) where G(x 1 ,..., x k ) is the Gram matrix whose (i,j) entries are the inner products (x j ∣x i ). Look at Exercise 4 in Sect. 3.5 for the definition of the Gram matrix and use Exercise 2 in Sect. 5.2. 4. Let ![ $${x}_{1},\\ldots,{x}_{k} \\in{\\mathbb{R}}^{n}$$ ](A81414_1_En_5_Chapter_IEq28.gif) and assume that vol(e 1 ,..., e n ) = 1.
(a) Show that ![ $$G\\left \({x}_{1},\\ldots,{x}_{n}\\right \) ={ \\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \]}^{{_\\ast}}\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n}\\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equaq.gif) (b) Show that ![ $$\\left \\vert G\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\right \\vert ={ \\left \\vert \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\right \\vert }^{2}.$$ ](A81414_1_En_5_Chapter_Equar.gif) (c) Using the previous exercise, conclude that Hadamard's inequality holds ![ $${\\left \\vert \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\right \\vert }^{2} \\leq {\\left \\Vert {x}_{ 1}\\right \\Vert }^{2}\\cdots {\\left \\Vert {x}_{ n}\\right \\Vert }^{2}.$$ ](A81414_1_En_5_Chapter_Equas.gif) (d) When is ![ $${\\left \\vert \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\right \\vert }^{2} ={ \\left \\Vert {x}_{ 1}\\right \\Vert }^{2}\\cdots {\\left \\Vert {x}_{ n}\\right \\Vert }^{2}?$$ ](A81414_1_En_5_Chapter_Equat.gif) 5. 
Assume that vol(e 1 ,..., e 4 ) = 1 for the standard basis in ![ $${\\mathbb{R}}^{4}.$$ ](A81414_1_En_5_Chapter_IEq29.gif) Find the volumes: (a) ![ $$\\left \\vert \\begin{array}{cccc} 0& - 1&2& - 1\\\\ 1 & 0 &1 & 2 \\\\ 3& - 2&1& 0\\\\ 2 & - 1 &0 & 2 \\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equau.gif) (b) ![ $$\\left \\vert \\begin{array}{cccc} 2& 0 &2& - 1\\\\ 1 & - 1 &0 & 2 \\\\ 3& - 2&1& 1\\\\ 0 & - 1 &1 & 2 \\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equav.gif) (c) ![ $$\\left \\vert \\begin{array}{cccc} 2& - 2&2& 0\\\\ 1 & - 1 &1 & 2 \\\\ 3& 0 &1& - 1\\\\ 1 & - 1 &1 & 2 \\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equaw.gif) (d) ![ $$\\left \\vert \\begin{array}{cccc} 2& - 2&0& - 1\\\\ 1 & 1 &2 & 2 \\\\ 3& - 1&1& 1\\\\ 1 & - 1 &1 & 2 \\end{array} \\right \\vert $$ ](A81414_1_En_5_Chapter_Equax.gif)

## 5.4 Existence of the Volume Form

The construction of vol(x 1 ,..., x n ) proceeds by induction on the dimension of V. We start with a basis e 1,..., e n ∈ V that is assumed to have unit volume. Next, we assume, by induction, that there is a volume form vol n − 1 on span(e 2 ,..., e n ) such that e 2,..., e n have unit volume. Finally, let E : V -> V be the projection onto span(e 2 ,..., e n ) whose kernel is span(e 1 ). For a collection x 1,..., x n ∈ V, we decompose x i = α i e 1 + Ex i . The volume form vol n on V is then defined by ![ $$\\mathrm{{vol}}^{n}\\left \({x}_{ 1},\\ldots,{x}_{n}\\right \) ={ \\sum\\nolimits }_{k=1}^{n}{\\left \(-1\\right \)}^{k-1}{\\alpha }_{ k}\\mathrm{{vol}}^{n-1}\\left \(E\\left \({x}_{ 1}\\right \),\\ldots,\\widehat{E\\left \({x}_{k}\\right \)},\\ldots,E\\left \({x}_{n}\\right \)\\right \).$$ ](A81414_1_En_5_Chapter_Equay.gif) (Recall that ![ $$\\widehat{a}$$ ](A81414_1_En_5_Chapter_IEq30.gif) means that a has been omitted). This is essentially like defining the volume via a Laplace expansion along the first row.
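The inductive definition translates directly into code. In the sketch below (my own rendering, not from the text), np.delete plays the role of omitting the hatted column, and the base case is vol(x) = x:

```python
import numpy as np

def vol(A):
    """Volume of the columns of A via the recursive definition:
    expansion along the first row with alternating signs."""
    A = np.asarray(A)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                  # base case: vol(x) = x
    total = 0
    for k in range(n):
        alpha_k = A[0, k]               # e_1-coefficient of the kth column
        # E applied to the remaining columns: drop row 0 and column k
        minor = np.delete(A[1:, :], k, axis=1)
        total += (-1) ** k * alpha_k * vol(minor)   # (-1)^(k-1) with 1-indexing
    return total

Z = [[ 3,  0, 1,  3],
     [ 1, -1, 2,  0],
     [-1,  1, 0, -2],
     [-3,  1, 1, -3]]
print(vol(Z))  # -16
```

On the 4 × 4 matrix used in the examples of this chapter, the recursion returns −16, in agreement with the column-operation computation.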
As α k , E, and vol n − 1 are linear, it is obvious that the new vol n form is linear in each variable. The alternating property follows if we can show that the form vanishes when x i = x j . This is done as follows: ![ $$\\begin{array}{rcl} & \\mathrm{{vol}}^{n}\\left \(\\ldots,{x}_{i},\\ldots {x}_{j},\\ldots \\right \) & \\\\ & \\quad ={ \\sum\\nolimits }_{k\\neq i,j}{\\left \(-1\\right \)}^{k-1}{\\alpha }_{k}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{i}\\right \),\\ldots,\\widehat{E\\left \({x}_{k}\\right \)},\\ldots,E\\left \({x}_{j}\\right \),\\ldots \\right \)& \\\\ & \\quad +{ \\left \(-1\\right \)}^{i-1}{\\alpha }_{i}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,\\widehat{E\\left \({x}_{i}\\right \)},\\ldots,E\\left \({x}_{j}\\right \),\\ldots \\right \) & \\\\ & \\quad +{ \\left \(-1\\right \)}^{j-1}{\\alpha }_{j}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{i}\\right \),\\ldots,\\widehat{E\\left \({x}_{j}\\right \)},\\ldots \\right \) & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ20.gif) Using that Ex i = Ex j and vol n − 1 is alternating on spane 2,..., e n shows ![ $$\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{ i}\\right \),\\ldots,\\widehat{E\\left \({x}_{k}\\right \)},\\ldots,E\\left \({x}_{j}\\right \),\\ldots \\right \) = 0$$ ](A81414_1_En_5_Chapter_Equaz.gif) Hence, ![ $$\\begin{array}{rcl} & \\mathrm{{vol}}^{n}\\left \(\\ldots,{x}_{i},\\ldots {x}_{j},\\ldots \\right \) & \\\\ & \\quad ={ \\left \(-1\\right \)}^{i-1}{\\alpha }_{i}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,\\widehat{E\\left \({x}_{i}\\right \)},\\ldots,E\\left \({x}_{j}\\right \),\\ldots \\right \) & \\\\ & \\quad +{ \\left \(-1\\right \)}^{j-1}{\\alpha }_{j}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\({x}_{i}\),\\ldots,\\widehat{E\({x}_{j}\)},\\ldots \\right \) & \\\\ & \\quad ={ \\left \(-1\\right \)}^{i-1}{\\left \(-1\\right \)}^{j-1-i}{\\alpha }_{i}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{i-1}\\right \), \\frac{{i}^{\\mathrm{th}}\\mathrm{ place}} {E\\left 
\({x}_{j}\\right \)},E\\left \({x}_{i+1}\\right \)\\ldots \\right \)& \\\\ & \\quad +{ \\left \(-1\\right \)}^{j-1}{\\alpha }_{j}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{i}\\right \),\\ldots,\\widehat{E\({x}_{j}\)},\\ldots \\right \), & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ21.gif) where moving Ex j to the ith-place in the expression ![ $$\\mathrm{{vol}}^{n-1}\\left \(\\ldots,\\widehat{E\\left \({x}_{ i}\\right \)},\\ldots,E\\left \({x}_{j}\\right \),\\ldots \\right \)$$ ](A81414_1_En_5_Chapter_Equba.gif) requires j − 1 − i moves since Ex j is in the j − 1-place. Using that α i = α j and Ex i = Ex j , this shows ![ $$\\begin{array}{rcl} \\mathrm{{vol}}^{n}\\left \(\\ldots,{x}_{ i},\\ldots {x}_{j},\\ldots \\right \)& =&{ \\left \(-1\\right \)}^{j-2}{\\alpha }_{ i}\\mathrm{{vol}}^{n-1}\\left \(\\ldots, \\frac{{i}^{th}\\mathrm{ place}} {E\\left \({x}_{j}\\right \)},\\ldots,,\\ldots \\right \) \\\\ & & +{\\left \(-1\\right \)}^{j-1}{\\alpha }_{ j}\\mathrm{{vol}}^{n-1}\\left \(\\ldots,E\\left \({x}_{ i}\\right \),\\ldots,\\widehat{E\({x}_{j}\)},\\ldots \\right \) \\\\ & =& 0.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ22.gif) Aside from defining the volume form, we also get a method for calculating volumes using induction on dimension. In ![ $$\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq31.gif), we just define volx = x. 
For $\mathbb{F}^2$, we have
$$\mathrm{vol}\left(\begin{bmatrix}\alpha\\ \beta\end{bmatrix},\begin{bmatrix}\gamma\\ \delta\end{bmatrix}\right)=\alpha\delta-\gamma\beta.$$
In $\mathbb{F}^3$, we get
$$\begin{aligned}
\mathrm{vol}\left(\begin{bmatrix}\alpha_{11}\\ \alpha_{21}\\ \alpha_{31}\end{bmatrix},\begin{bmatrix}\alpha_{12}\\ \alpha_{22}\\ \alpha_{32}\end{bmatrix},\begin{bmatrix}\alpha_{13}\\ \alpha_{23}\\ \alpha_{33}\end{bmatrix}\right)
&=\alpha_{11}\,\mathrm{vol}\left(\begin{bmatrix}\alpha_{22}\\ \alpha_{32}\end{bmatrix},\begin{bmatrix}\alpha_{23}\\ \alpha_{33}\end{bmatrix}\right)
-\alpha_{12}\,\mathrm{vol}\left(\begin{bmatrix}\alpha_{21}\\ \alpha_{31}\end{bmatrix},\begin{bmatrix}\alpha_{23}\\ \alpha_{33}\end{bmatrix}\right)
+\alpha_{13}\,\mathrm{vol}\left(\begin{bmatrix}\alpha_{21}\\ \alpha_{31}\end{bmatrix},\begin{bmatrix}\alpha_{22}\\ \alpha_{32}\end{bmatrix}\right)\\
&=\alpha_{11}\alpha_{22}\alpha_{33}+\alpha_{12}\alpha_{23}\alpha_{31}+\alpha_{13}\alpha_{21}\alpha_{32}
-\alpha_{11}\alpha_{32}\alpha_{23}-\alpha_{12}\alpha_{21}\alpha_{33}-\alpha_{13}\alpha_{31}\alpha_{22}.
\end{aligned}$$
In the above definition, there is, of course, nothing special about the choice of basis $e_1,\ldots,e_n$ or the ordering of the basis. Let us write $\mathrm{vol}_1$ for the specific volume form just constructed, since it expands along the first row. If we switch $e_1$ and $e_k$, then we are instead expanding along the $k$th row; this defines a volume form $\mathrm{vol}_k$. By construction, we have
$$\mathrm{vol}_1\left(e_1,\ldots,e_n\right)=1,\qquad
\mathrm{vol}_k\left(e_k,e_2,\ldots,\underset{k\text{th place}}{e_1},\ldots,e_n\right)=1.$$
Thus,
$$\mathrm{vol}_1=(-1)^{k-1}\,\mathrm{vol}_k=(-1)^{k+1}\,\mathrm{vol}_k.$$
So if we wish to calculate $\mathrm{vol}_1$ by an expansion along the $k$th row, we need to remember the extra sign $(-1)^{k+1}$. In the case of $\mathbb{F}^n$, we define the volume form $\mathrm{vol}$ to be the $\mathrm{vol}_1$ constructed above. In this case, we shall often just write
$$\left\vert\begin{array}{ccc}x_1&\cdots&x_n\end{array}\right\vert=\mathrm{vol}\left(x_1,\ldots,x_n\right)$$
as in the previous section. Example 5.4.1.
Let us try this with the example from the previous section:
$$\begin{bmatrix}z_1&z_2&z_3&z_4\end{bmatrix}=\begin{bmatrix}3&0&1&3\\ 1&-1&2&0\\ -1&1&0&-2\\ -3&1&1&-3\end{bmatrix}.$$
Expansion along the first row gives
$$\begin{aligned}
\left\vert\begin{array}{cccc}z_1&z_2&z_3&z_4\end{array}\right\vert
&=3\begin{vmatrix}-1&2&0\\ 1&0&-2\\ 1&1&-3\end{vmatrix}
-0\begin{vmatrix}1&2&0\\ -1&0&-2\\ -3&1&-3\end{vmatrix}
+1\begin{vmatrix}1&-1&0\\ -1&1&-2\\ -3&1&-3\end{vmatrix}
-3\begin{vmatrix}1&-1&2\\ -1&1&0\\ -3&1&1\end{vmatrix}\\
&=3\cdot 0-0+1\cdot(-4)-3\cdot 4\\
&=-16.
\end{aligned}$$
Expansion along the second row gives
$$\begin{aligned}
\left\vert\begin{array}{cccc}z_1&z_2&z_3&z_4\end{array}\right\vert
&=-1\begin{vmatrix}0&1&3\\ 1&0&-2\\ 1&1&-3\end{vmatrix}
+(-1)\begin{vmatrix}3&1&3\\ -1&0&-2\\ -3&1&-3\end{vmatrix}
-2\begin{vmatrix}3&0&3\\ -1&1&-2\\ -3&1&-3\end{vmatrix}
+0\begin{vmatrix}3&0&1\\ -1&1&0\\ -3&1&1\end{vmatrix}\\
&=-1\cdot 4-1\cdot 6-2\cdot 3+0\\
&=-16.
\end{aligned}$$
Definition 5.4.2.
The general formula in $\mathbb{F}^n$ for expanding along the $k$th row of an $n\times n$ matrix $A=\begin{bmatrix}x_1&\cdots&x_n\end{bmatrix}$ is called the Laplace expansion along the $k$th row and looks like
$$\begin{aligned}
\left\vert A\right\vert
&=(-1)^{k+1}\alpha_{k1}\left\vert A_{k1}\right\vert+(-1)^{k+2}\alpha_{k2}\left\vert A_{k2}\right\vert+\cdots+(-1)^{k+n}\alpha_{kn}\left\vert A_{kn}\right\vert\\
&=\sum_{i=1}^{n}(-1)^{k+i}\alpha_{ki}\left\vert A_{ki}\right\vert.
\end{aligned}$$
Here $\alpha_{ij}$ is the $ij$ entry of $A$, i.e., the $i$th coordinate of $x_j$, and $A_{ij}$ is the companion $(n-1)\times(n-1)$ matrix for $\alpha_{ij}$, constructed from $A$ by eliminating the $i$th row and $j$th column. Note that the exponent of $-1$ is $i+j$ when we are at the $ij$ entry $\alpha_{ij}$. Example 5.4.3. This expansion gives us a very intriguing formula for the determinant that looks as if we had used the chain rule for differentiation in several variables. To explain this, let us think of $\left\vert A\right\vert$ as a function of the entries $x_{ij}$. The expansion along the $k$th row then looks like
$$\left\vert A\right\vert=(-1)^{k+1}x_{k1}\left\vert A_{k1}\right\vert+(-1)^{k+2}x_{k2}\left\vert A_{k2}\right\vert+\cdots+(-1)^{k+n}x_{kn}\left\vert A_{kn}\right\vert.$$
From the definition of $A_{kj}$, it follows that it does not depend on the variables $x_{ki}$.
Thus,
$$\begin{aligned}
\frac{\partial\left\vert A\right\vert}{\partial x_{ki}}
&=(-1)^{k+1}\frac{\partial x_{k1}}{\partial x_{ki}}\left\vert A_{k1}\right\vert+(-1)^{k+2}\frac{\partial x_{k2}}{\partial x_{ki}}\left\vert A_{k2}\right\vert+\cdots+(-1)^{k+n}\frac{\partial x_{kn}}{\partial x_{ki}}\left\vert A_{kn}\right\vert\\
&=(-1)^{k+i}\left\vert A_{ki}\right\vert.
\end{aligned}$$
Replacing $(-1)^{k+i}\left\vert A_{ki}\right\vert$ by the partial derivative then gives us the formula
$$\left\vert A\right\vert=x_{k1}\frac{\partial\left\vert A\right\vert}{\partial x_{k1}}+x_{k2}\frac{\partial\left\vert A\right\vert}{\partial x_{k2}}+\cdots+x_{kn}\frac{\partial\left\vert A\right\vert}{\partial x_{kn}}=\sum_{i=1}^{n}x_{ki}\frac{\partial\left\vert A\right\vert}{\partial x_{ki}}.$$
Since we get the same answer for each $k$, this implies
$$n\left\vert A\right\vert=\sum_{i,j=1}^{n}x_{ij}\frac{\partial\left\vert A\right\vert}{\partial x_{ij}}.$$
### 5.4.1 Exercises
1. Find the determinant of the following $n\times n$ matrix where all entries are 1 except the entries just below the diagonal, which are 0:
$$\left\vert\begin{array}{ccccc}1&1&1&\cdots&1\\ 0&1&1&\cdots&1\\ 1&0&1&\cdots&\vdots\\ \vdots&1&\ddots&\ddots&1\\ 1&\cdots&1&0&1\end{array}\right\vert$$
2.
Find the determinant of the following $n\times n$ matrix:
$$\left\vert\begin{array}{ccccc}1&\cdots&1&1&1\\ 2&\cdots&2&2&1\\ 3&\cdots&3&1&\vdots\\ \vdots&&1&\cdots&1\\ n&1&\cdots&1&1\end{array}\right\vert$$
3. (The Vandermonde Determinant) (a) Show that
$$\left\vert\begin{array}{ccc}1&\cdots&1\\ \lambda_1&\cdots&\lambda_n\\ \vdots&&\vdots\\ \lambda_1^{n-1}&\cdots&\lambda_n^{n-1}\end{array}\right\vert=\prod_{i<j}\left(\lambda_i-\lambda_j\right).$$
(b) When $\lambda_1,\ldots,\lambda_n$ are the complex roots of a polynomial $p\left(t\right)=t^n+a_{n-1}t^{n-1}+\cdots+a_1t+a_0$, we define the discriminant of $p$ as
$$\Delta=D=\left(\prod_{i<j}\left(\lambda_i-\lambda_j\right)\right)^{2}.$$
When $n=2$, show that this conforms with the usual definition. In general, one can compute $\Delta$ from the coefficients of $p$. Show that $\Delta$ is real if $p$ is real. 4. Let $S_n$ be the group of permutations, i.e., bijective maps from $\{1,\ldots,n\}$ to itself. These are generally denoted by $\sigma$ and correspond to a switching of indices: $\sigma\left(k\right)=i_k$, $k=1,\ldots,n$. Consider the polynomial in $n$ variables
$$p\left(x_1,\ldots,x_n\right)=\prod_{i<j}\left(x_i-x_j\right).$$
(a) Show that if $\sigma\in S_n$ is a permutation, then
$$\mathrm{sign}\left(\sigma\right)p\left(x_1,\ldots,x_n\right)=p\left(x_{\sigma\left(1\right)},\ldots,x_{\sigma\left(n\right)}\right)$$
for some sign $\mathrm{sign}\left(\sigma\right)\in\left\{\pm 1\right\}$. (b) Show that the sign function $S_n\rightarrow\left\{\pm 1\right\}$ is a homomorphism, i.e., $\mathrm{sign}\left(\sigma\tau\right)=\mathrm{sign}\left(\sigma\right)\mathrm{sign}\left(\tau\right)$.
(c) Using the above characterization, show that $\mathrm{sign}\left(\sigma\right)$ can be determined by the number of inversions in the permutation. An inversion in $\sigma$ is a pair of consecutive integers whose order is reversed, i.e., $\sigma\left(i\right)>\sigma\left(i+1\right)$. 5. Let $A_n=\left(\alpha_{ij}\right)$ be a real skew-symmetric $n\times n$ matrix, i.e., $\alpha_{ij}=-\alpha_{ji}$. (a) Show that $\left\vert A_2\right\vert=\alpha_{12}^2$. (b) Show that $\left\vert A_4\right\vert=\left(\alpha_{12}\alpha_{34}+\alpha_{14}\alpha_{23}-\alpha_{13}\alpha_{24}\right)^2$. (c) Show that $\left\vert A_{2n}\right\vert\geq 0$. (d) Show that $\left\vert A_{2n+1}\right\vert=0$. 6. Show that the $n\times n$ matrix satisfies
$$\left\vert\begin{array}{ccccc}\alpha&\beta&\beta&\cdots&\beta\\ \beta&\alpha&\beta&\cdots&\beta\\ \beta&\beta&\alpha&\cdots&\beta\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \beta&\beta&\beta&\cdots&\alpha\end{array}\right\vert=\left(\alpha+\left(n-1\right)\beta\right)\left(\alpha-\beta\right)^{n-1}.$$
7. Show that the $n\times n$ matrix
$$A_n=\begin{bmatrix}\alpha_1&1&0&\cdots&0\\ -1&\alpha_2&1&\cdots&0\\ 0&-1&\alpha_3&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&\alpha_n\end{bmatrix}$$
satisfies
$$\begin{aligned}
\left\vert A_1\right\vert&=\alpha_1,\\
\left\vert A_2\right\vert&=1+\alpha_1\alpha_2,\\
\left\vert A_n\right\vert&=\alpha_n\left\vert A_{n-1}\right\vert+\left\vert A_{n-2}\right\vert.
\end{aligned}$$
8. Show that an $n\times m$ matrix has (column) rank $\geq k$ if and only if there is a submatrix of size $k\times k$ with nonzero determinant. Use this to prove that row and column ranks are equal. 9. Here are some problems that discuss determinants and geometry.
(a) Show that the area of the triangle whose vertices are
$$\begin{bmatrix}\alpha_1\\ \beta_1\end{bmatrix},\begin{bmatrix}\alpha_2\\ \beta_2\end{bmatrix},\begin{bmatrix}\alpha_3\\ \beta_3\end{bmatrix}\in\mathbb{R}^2$$
is given by
$$\frac{1}{2}\left\vert\begin{array}{ccc}1&1&1\\ \alpha_1&\alpha_2&\alpha_3\\ \beta_1&\beta_2&\beta_3\end{array}\right\vert.$$
(b) Show that three vectors
$$\begin{bmatrix}\alpha_1\\ \beta_1\end{bmatrix},\begin{bmatrix}\alpha_2\\ \beta_2\end{bmatrix},\begin{bmatrix}\alpha_3\\ \beta_3\end{bmatrix}\in\mathbb{R}^2$$
satisfy
$$\left\vert\begin{array}{ccc}1&1&1\\ \alpha_1&\alpha_2&\alpha_3\\ \beta_1&\beta_2&\beta_3\end{array}\right\vert=0$$
if and only if they are collinear, i.e., lie on a line $l=\left\{at+b:t\in\mathbb{R}\right\}$, where $a,b\in\mathbb{R}^2$. (c) Show that four vectors
$$\begin{bmatrix}\alpha_1\\ \beta_1\\ \gamma_1\end{bmatrix},\begin{bmatrix}\alpha_2\\ \beta_2\\ \gamma_2\end{bmatrix},\begin{bmatrix}\alpha_3\\ \beta_3\\ \gamma_3\end{bmatrix},\begin{bmatrix}\alpha_4\\ \beta_4\\ \gamma_4\end{bmatrix}\in\mathbb{R}^3$$
satisfy
$$\left\vert\begin{array}{cccc}1&1&1&1\\ \alpha_1&\alpha_2&\alpha_3&\alpha_4\\ \beta_1&\beta_2&\beta_3&\beta_4\\ \gamma_1&\gamma_2&\gamma_3&\gamma_4\end{array}\right\vert=0$$
if and only if they are coplanar, i.e., lie in the same plane $\pi=\left\{x\in\mathbb{R}^3:\left(a,x\right)=\alpha\right\}$. 10. Let
$$\begin{bmatrix}\alpha_1\\ \beta_1\end{bmatrix},\begin{bmatrix}\alpha_2\\ \beta_2\end{bmatrix},\begin{bmatrix}\alpha_3\\ \beta_3\end{bmatrix}\in\mathbb{R}^2$$
be three points in the plane. (a) If $\alpha_1,\alpha_2,\alpha_3$ are distinct, then the equation for the parabola $y=ax^2+bx+c$ passing through the three given points is given by
$$\frac{\left\vert\begin{array}{cccc}1&1&1&1\\ x&\alpha_1&\alpha_2&\alpha_3\\ x^2&\alpha_1^2&\alpha_2^2&\alpha_3^2\\ y&\beta_1&\beta_2&\beta_3\end{array}\right\vert}{\left\vert\begin{array}{ccc}1&1&1\\ \alpha_1&\alpha_2&\alpha_3\\ \alpha_1^2&\alpha_2^2&\alpha_3^2\end{array}\right\vert}=0.$$
(b) If the points are not collinear, then the equation for the circle $x^2+y^2+ax+by+c=0$ passing through the three given points is given by
$$\frac{\left\vert\begin{array}{cccc}1&1&1&1\\ x&\alpha_1&\alpha_2&\alpha_3\\ y&\beta_1&\beta_2&\beta_3\\ x^2+y^2&\alpha_1^2+\beta_1^2&\alpha_2^2+\beta_2^2&\alpha_3^2+\beta_3^2\end{array}\right\vert}{\left\vert\begin{array}{ccc}1&1&1\\ \alpha_1&\alpha_2&\alpha_3\\ \beta_1&\beta_2&\beta_3\end{array}\right\vert}=0.$$
## 5.5 Determinants of Linear Operators
Definition 5.5.1. To define the determinant of a linear operator $L:V\rightarrow V$, we simply observe that $\mathrm{vol}\left(L\left(x_1\right),\ldots,L\left(x_n\right)\right)$ defines an alternating $n$-form that is linear in each variable. Thus,
$$\mathrm{vol}\left(L\left(x_1\right),\ldots,L\left(x_n\right)\right)=\det\left(L\right)\mathrm{vol}\left(x_1,\ldots,x_n\right)$$
for some scalar $\det\left(L\right)\in\mathbb{F}$. This is the determinant of $L$. We note that a different volume form $\mathrm{vol}_1$ gives the same definition of the determinant. To see this, we first use that $\mathrm{vol}_1=\lambda\,\mathrm{vol}$ and then observe that
$$\begin{aligned}
\mathrm{vol}_1\left(L\left(x_1\right),\ldots,L\left(x_n\right)\right)&=\lambda\,\mathrm{vol}\left(L\left(x_1\right),\ldots,L\left(x_n\right)\right)\\
&=\det\left(L\right)\lambda\,\mathrm{vol}\left(x_1,\ldots,x_n\right)\\
&=\det\left(L\right)\mathrm{vol}_1\left(x_1,\ldots,x_n\right).
\end{aligned}$$
If $e_1,\ldots,e_n$ is chosen so that $\mathrm{vol}\left(e_1,\ldots,e_n\right)=1$, then we get the simpler formula
$$\mathrm{vol}\left(L\left(e_1\right),\ldots,L\left(e_n\right)\right)=\det\left(L\right).$$
This leads us to one of the standard formulas for the determinant of a matrix.
From the properties of volume forms (see Proposition 5.2.2), we obtain
$$\begin{aligned}
\det\left(L\right)&=\mathrm{vol}\left(L\left(e_1\right),\ldots,L\left(e_n\right)\right)\\
&=\sum\alpha_{i_1 1}\cdots\alpha_{i_n n}\,\mathrm{vol}\left(e_{i_1},\ldots,e_{i_n}\right)\\
&=\sum\pm\,\alpha_{i_1 1}\cdots\alpha_{i_n n}\\
&=\sum\mathrm{sign}\left(i_1,\ldots,i_n\right)\alpha_{i_1 1}\cdots\alpha_{i_n n},
\end{aligned}$$
where $\left(\alpha_{ij}\right)=\left[L\right]$ is the matrix representation of $L$ with respect to $e_1,\ldots,e_n$. This formula is often used as the definition of determinants. Note that it also shows that $\det\left(L\right)=\det\left(\left[L\right]\right)$ since
$$\begin{aligned}
\begin{bmatrix}L\left(e_1\right)&\cdots&L\left(e_n\right)\end{bmatrix}&=\begin{bmatrix}e_1&\cdots&e_n\end{bmatrix}\left[L\right]\\
&=\begin{bmatrix}e_1&\cdots&e_n\end{bmatrix}\begin{bmatrix}\alpha_{11}&\cdots&\alpha_{1n}\\ \vdots&\ddots&\vdots\\ \alpha_{n1}&\cdots&\alpha_{nn}\end{bmatrix}.
\end{aligned}$$
The next proposition contains the fundamental properties of determinants. Proposition 5.5.2. (Determinant Characterization of Invertibility) Let $V$ be an $n$-dimensional vector space. (1) If $L,K:V\rightarrow V$ are linear operators, then $\det\left(L\circ K\right)=\det\left(L\right)\det\left(K\right)$. (2) $\det\left(\alpha 1_V\right)=\alpha^n$. (3) If $L$ is invertible, then $\det L^{-1}=\frac{1}{\det L}$. (4) If $\det L\neq 0$, then $L$ is invertible. Proof.
For any $x_1,\ldots,x_n$, we have
$$\begin{aligned}
\det\left(L\circ K\right)\mathrm{vol}\left(x_1,\ldots,x_n\right)&=\mathrm{vol}\left(L\circ K\left(x_1\right),\ldots,L\circ K\left(x_n\right)\right)\\
&=\det\left(L\right)\mathrm{vol}\left(K\left(x_1\right),\ldots,K\left(x_n\right)\right)\\
&=\det\left(L\right)\det\left(K\right)\mathrm{vol}\left(x_1,\ldots,x_n\right).
\end{aligned}$$
The second property follows from
$$\mathrm{vol}\left(\alpha x_1,\ldots,\alpha x_n\right)=\alpha^n\,\mathrm{vol}\left(x_1,\ldots,x_n\right).$$
For the third, we simply use that $1_V=L\circ L^{-1}$, so
$$1=\det\left(L\right)\det\left(L^{-1}\right).$$
For the last property, select a basis $x_1,\ldots,x_n$ for $V$. Then,
$$\mathrm{vol}\left(L\left(x_1\right),\ldots,L\left(x_n\right)\right)=\det\left(L\right)\mathrm{vol}\left(x_1,\ldots,x_n\right)\neq 0.$$
Thus, $L\left(x_1\right),\ldots,L\left(x_n\right)$ is also a basis for $V$. This implies that $L$ is invertible. One can in fact show that any map $\Delta:\mathrm{Hom}\left(V,V\right)\rightarrow\mathbb{F}$ such that
$$\begin{aligned}
\Delta\left(K\circ L\right)&=\Delta\left(K\right)\Delta\left(L\right),\\
\Delta\left(1_V\right)&=1
\end{aligned}$$
depends only on the determinant of the operator (see also the exercises). We have some further useful and interesting results for determinants of matrices. Proposition 5.5.3.
If $A\in\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$ can be written in block form
$$A=\begin{bmatrix}A_{11}&A_{12}\\ 0&A_{22}\end{bmatrix},$$
where $A_{11}\in\mathrm{Mat}_{n_1\times n_1}\left(\mathbb{F}\right)$, $A_{12}\in\mathrm{Mat}_{n_1\times n_2}\left(\mathbb{F}\right)$, and $A_{22}\in\mathrm{Mat}_{n_2\times n_2}\left(\mathbb{F}\right)$, $n_1+n_2=n$, then
$$\det A=\det A_{11}\det A_{22}.$$
Proof. Write the canonical basis for $\mathbb{F}^n$ as $e_1,\ldots,e_{n_1}$, $f_1,\ldots,f_{n_2}$ according to the block decomposition.
Next, observe that $A$ can be written as a composition in the following way:
$$A=\begin{bmatrix}A_{11}&A_{12}\\ 0&A_{22}\end{bmatrix}=\begin{bmatrix}1&A_{12}\\ 0&A_{22}\end{bmatrix}\begin{bmatrix}A_{11}&0\\ 0&1\end{bmatrix}=BC.$$
Thus, it suffices to show that
$$\det\begin{bmatrix}1&A_{12}\\ 0&A_{22}\end{bmatrix}=\det B=\det\left(A_{22}\right)$$
and
$$\det\begin{bmatrix}A_{11}&0\\ 0&1\end{bmatrix}=\det C=\det\left(A_{11}\right).$$
To prove the last formula, note that for fixed $f_1,\ldots,f_{n_2}$ and $x_1,\ldots,x_{n_1}\in\mathrm{span}\left\{e_1,\ldots,e_{n_1}\right\}$, the volume form $\mathrm{vol}\left(x_1,\ldots,x_{n_1},f_1,\ldots,f_{n_2}\right)$ defines the usual volume form on $\mathrm{span}\left\{e_1,\ldots,e_{n_1}\right\}=\mathbb{F}^{n_1}$. Thus,
$$\begin{aligned}
\det C&=\mathrm{vol}\left(C\left(e_1\right),\ldots,C\left(e_{n_1}\right),C\left(f_1\right),\ldots,C\left(f_{n_2}\right)\right)\\
&=\mathrm{vol}\left(A_{11}\left(e_1\right),\ldots,A_{11}\left(e_{n_1}\right),f_1,\ldots,f_{n_2}\right)\\
&=\det A_{11}.
\end{aligned}$$
For the first equation, we observe
$$\begin{aligned}
\det B&=\mathrm{vol}\left(B\left(e_1\right),\ldots,B\left(e_{n_1}\right),B\left(f_1\right),\ldots,B\left(f_{n_2}\right)\right)\\
&=\mathrm{vol}\left(e_1,\ldots,e_{n_1},A_{12}\left(f_1\right)+A_{22}\left(f_1\right),\ldots,A_{12}\left(f_{n_2}\right)+A_{22}\left(f_{n_2}\right)\right)\\
&=\mathrm{vol}\left(e_1,\ldots,e_{n_1},A_{22}\left(f_1\right),\ldots,A_{22}\left(f_{n_2}\right)\right)
\end{aligned}$$
since $A_{12}\left(f_j\right)\in\mathrm{span}\left\{e_1,\ldots,e_{n_1}\right\}$. Then we get $\det B=\det A_{22}$ as before. Proposition 5.5.4. If $A\in\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$, then $\det A=\det A^t$. Proof. First note that the result is obvious if $A$ is upper triangular. Using row operations, we can always find an invertible $P$ such that $PA$ is upper triangular, where $P$ is a product of elementary matrices of the types $I_{ij}$ and $R_{ij}\left(\alpha\right)$. The row-interchange matrices $I_{ij}$ are symmetric, i.e., $I_{ij}^t=I_{ij}$, and have $\det I_{ij}=-1$, while $R_{ij}\left(\alpha\right)$ is upper or lower triangular with 1s on the diagonal, so $R_{ij}\left(\alpha\right)^t=R_{ji}\left(\alpha\right)$ and $\det R_{ij}\left(\alpha\right)=1$. In particular, it follows that $\det P=\det P^t=\pm 1$. Thus,
$$\det A=\frac{\det\left(PA\right)}{\det P}=\frac{\det\left(\left(PA\right)^t\right)}{\det\left(P^t\right)}=\frac{\det\left(A^tP^t\right)}{\det\left(P^t\right)}=\det\left(A^t\right).$$
Remark 5.5.5.
This last proposition tells us that the determinant map $A\mapsto\det A$ defined on $\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$ is linear and alternating in both columns and rows. This can be extremely useful when calculating determinants. It also tells us that one can do Laplace expansions along columns as well as rows.
### 5.5.1 Exercises
1. Find the determinant of
$$\begin{aligned}
L:\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)&\rightarrow\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)\\
L\left(X\right)&=X^t.
\end{aligned}$$
2. Find the determinant of $L:P_n\rightarrow P_n$ where (a) $L\left(p\right)\left(t\right)=p\left(-t\right)$, (b) $L\left(p\right)\left(t\right)=p\left(t\right)+p\left(-t\right)$, (c) $L\left(p\right)=Dp=p^{\prime}$. 3. Find the determinant of $L=p\left(D\right)$, for $p\in\mathbb{C}\left[t\right]$, when restricted to the spaces (a) $V=P_n$, (b) $V=\mathrm{span}\left\{\exp\left(\lambda_1 t\right),\ldots,\exp\left(\lambda_n t\right)\right\}$. 4. Let $L:V\rightarrow V$ be an operator on a finite-dimensional inner product space. Show that $\overline{\det\left(L\right)}=\det\left(L^{\ast}\right)$. 5. Let $V$ be an $n$-dimensional inner product space and $\mathrm{vol}$ a volume form so that $\mathrm{vol}\left(e_1,\ldots,e_n\right)=1$ for some orthonormal basis $e_1,\ldots,e_n$. (a) Show that if $L:V\rightarrow V$ is an isometry, then $\left\vert\det\left(L\right)\right\vert=1$. (b) Show that the set of isometries $L$ with $\det\left(L\right)=1$ is a group. 6. Show that $O\in O_n$ has type I if and only if $\det O=1$. Conclude that $SO_n$ is a group. 7. Given $A\in\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$, consider the two linear operators $L_A\left(X\right)=AX$ and $R_A\left(X\right)=XA$ on $\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$. Compute the determinant of these operators in terms of the determinant of $A$ (see Example 1.7.6). 8.
Show that if $L:V\rightarrow V$ is a linear operator and $\mathrm{vol}$ a volume form on $V$, then
$$\begin{aligned}
\mathrm{tr}\left(L\right)\mathrm{vol}\left(x_1,\ldots,x_n\right)&=\mathrm{vol}\left(L\left(x_1\right),\ldots,x_n\right)\\
&\quad+\mathrm{vol}\left(x_1,L\left(x_2\right),\ldots,x_n\right)\\
&\quad\ \ \vdots\\
&\quad+\mathrm{vol}\left(x_1,\ldots,L\left(x_n\right)\right).
\end{aligned}$$
9. Show that
$$p\left(t\right)=\det\begin{bmatrix}1&\cdots&1&1\\ \lambda_1&\cdots&\lambda_n&t\\ \vdots&&\vdots&\vdots\\ \lambda_1^n&\cdots&\lambda_n^n&t^n\end{bmatrix}$$
defines a polynomial of degree $n$ whose roots are $\lambda_1,\ldots,\lambda_n$. Compute $k$, where
$$p\left(t\right)=k\left(t-\lambda_1\right)\cdots\left(t-\lambda_n\right),$$
by doing a Laplace expansion along the last column. 10. Assume that the $n\times n$ matrix $A$ has a block decomposition
$$A=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix},$$
where $A_{11}$ is an invertible matrix. Show that
$$\det\left(A\right)=\det\left(A_{11}\right)\det\left(A_{22}-A_{21}A_{11}^{-1}A_{12}\right).$$
Hint: Select a suitable product decomposition of the form
$$\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix}=\begin{bmatrix}B_{11}&0\\ B_{21}&B_{22}\end{bmatrix}\begin{bmatrix}C_{11}&C_{12}\\ 0&C_{22}\end{bmatrix}.$$
11. (Jacobi's Theorem) Let $A$ be an invertible $n\times n$ matrix.
Assume that $A$ and $A^{-1}$ have block decompositions
$$A=\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix},\qquad
A^{-1}=\begin{bmatrix}A_{11}^{\prime}&A_{12}^{\prime}\\ A_{21}^{\prime}&A_{22}^{\prime}\end{bmatrix}.$$
Show
$$\det\left(A\right)\det\left(A_{22}^{\prime}\right)=\det\left(A_{11}\right).$$
Hint: Compute the matrix product
$$\begin{bmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{bmatrix}\begin{bmatrix}1&A_{12}^{\prime}\\ 0&A_{22}^{\prime}\end{bmatrix}.$$
12. Let $A\in\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)$. We say that $A$ has an LU decomposition if $A=LU$, where $L$ is lower triangular with 1s on the diagonal and $U$ is upper triangular. Show that $A$ has an LU decomposition if all the leading principal minors have nonzero determinants. The leading principal $k\times k$ minor is the $k\times k$ submatrix obtained from $A$ by eliminating the last $n-k$ rows and columns. (See also Exercise 4 in Sect. 1.13.) 13. (Sylvester's Criterion) Let $A$ be a real and symmetric $n\times n$ matrix. Show that $A$ has positive eigenvalues if and only if all leading principal minors have positive determinant. Hint: As with the $A=LU$ decomposition in the previous exercise, show by induction on $n$ that $A=U^{\ast}U$, where $U$ is upper triangular. Such a decomposition is also called a Cholesky factorization. 14.
(Characterization of Determinant Functions) Let $\Delta:\mathrm{Mat}_{n\times n}\left(\mathbb{F}\right)\rightarrow\mathbb{F}$ be a function such that
$$\begin{aligned}
\Delta\left(AB\right)&=\Delta\left(A\right)\Delta\left(B\right),\\
\Delta\left(1_{\mathbb{F}^n}\right)&=1.
\end{aligned}$$
(a) Show that there is a function $f:\mathbb{F}\rightarrow\mathbb{F}$ satisfying $f\left(\alpha\beta\right)=f\left(\alpha\right)f\left(\beta\right)$ such that $\Delta\left(A\right)=f\left(\det A\right)$. Hint: Use Exercise 8 in Sect. 1.13 to show that
$$\begin{aligned}
\Delta\left(I_{ij}\right)&=\pm 1,\\
\Delta\left(M_i\left(\alpha\right)\right)&=\Delta\left(M_1\left(\alpha\right)\right),\\
\Delta\left(R_{kl}\left(\alpha\right)\right)&=\Delta\left(R_{kl}\left(1\right)\right)=\Delta\left(R_{12}\left(1\right)\right),
\end{aligned}$$
and define $f\left(\alpha\right)=\Delta\left(M_1\left(\alpha\right)\right)$. (b) If $\mathbb{F}=\mathbb{R}$ and $n$ is even, show that $\Delta\left(A\right)=\left\vert\det A\right\vert$ defines a function such that
$$\begin{aligned}
\Delta\left(AB\right)&=\Delta\left(A\right)\Delta\left(B\right),\\
\Delta\left(\lambda 1_{\mathbb{R}^n}\right)&=\lambda^n.
\end{aligned}$$
(c) If $\mathbb{F}=\mathbb{C}$ and in addition $\Delta\left(\lambda 1_{\mathbb{C}^n}\right)=\lambda^n$, then show that $\Delta\left(A\right)=\det A$. (d) If $\mathbb{F}=\mathbb{R}$ and in addition $\Delta\left(\lambda 1_{\mathbb{R}^n}\right)=\lambda^n$, where $n$ is odd, then show that $\Delta\left(A\right)=\det A$.
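The two key matrix facts from this section, multiplicativity (Proposition 5.5.2) and transpose invariance (Proposition 5.5.4), are easy to spot-check numerically. A small self-contained Python sketch (the helper names and test matrices are our own, not from the text):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(r) for r in zip(*m)]

A = [[3, 0, 1], [1, -1, 2], [-1, 1, 0]]
B = [[2, 1, 0], [0, 1, 3], [1, 0, 1]]

print(det(matmul(A, B)) == det(A) * det(B))  # True: det(AB) = det(A)det(B)
print(det(A) == det(transpose(A)))           # True: det(A) = det(A^t)
```

Since the arithmetic is exact over the integers, both identities check without any floating-point tolerance.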
## 5.6 Linear Equations Cramer's rule is a formula for the solution to n linear equations in n variables when we know that only one solution exists. We will generalize this construction a bit so as to see that it can be interpreted as an inverse to the isomorphism ![ $$\\left \[\\begin{array}{ccc} {x}_{1} & \\cdots &{x}_{n} \\end{array} \\right \] : {\\mathbb{F}}^{n} \\rightarrow V$$ ](A81414_1_En_5_Chapter_Equcq.gif) when x 1,..., x n is a basis. Theorem 5.6.1. Let V be an n-dimensional vector space and vol a volume form. If x 1 ,...,x n is a basis for V and x = x 1 α 1 + ⋯ + x n α n is the expansion of x ∈ V with respect to that basis, then ![ $$\\begin{array}{rcl} {\\alpha }_{1}& =& \\frac{\\mathrm{vol}\\left \(x,{x}_{2},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}, \\\\ \\vdots& & \\vdots \\\\ {\\alpha }_{i}& =& \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},x,{x}_{i+1},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}, \\\\ \\vdots& & \\vdots \\\\ {\\alpha }_{n}& =& \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n-1},x\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}. \\end{array}$$ ](A81414_1_En_5_Chapter_Equ50.gif) Proof. First note that each ![ $$\\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},x,{x}_{i+1},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}$$ ](A81414_1_En_5_Chapter_Equcr.gif) is linear in x. 
Thus, ![ $$L\\left \(x\\right \) = \\left \[\\begin{array}{c} \\frac{\\mathrm{vol}\\left \(x,{x}_{2},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}\\\\ \\vdots \\\\ \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},x,{x}_{i+1},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}\\\\ \\vdots \\\\ \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n-1},x\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)} \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equcs.gif) is a linear map ![ $$V \\rightarrow{\\mathbb{F}}^{n}.$$ ](A81414_1_En_5_Chapter_IEq63.gif) This means that we only need to check what happens when x is one of the vectors in the basis. If x = x i , then ![ $$\\begin{array}{rcl} 0& =& \\frac{\\mathrm{vol}\\left \({x}_{i},{x}_{2},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}, \\\\ \\vdots& & \\vdots \\\\ 1& =& \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{i-1},{x}_{i},{x}_{i+1},\\ldots,{x}_{n}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}, \\\\ \\vdots& & \\vdots \\\\ 0& =& \\frac{\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n-1},{x}_{i}\\right \)} {\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ51.gif), showing that ![ $$L\\left \({x}_{i}\\right \) = {e}_{i} \\in{\\mathbb{F}}^{n}$$ ](A81414_1_En_5_Chapter_IEq64.gif). But this shows that L is the inverse to ![ $$\\begin{array}{rcl} \\left \[{x}_{1}\\cdots {x}_{n}\\right \] : {\\mathbb{F}}^{n} \\rightarrow V.& & \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ52.gif) Cramer's rule is not necessarily very practical when solving equations, but it is often a useful abstract tool. It also comes in handy, as we shall see in Sect. 5.8, when solving inhomogeneous linear differential equations. Cramer's rule can also be used to solve linear equations Lx = b, as long as L : V -> V is an isomorphism. 
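As a concrete numerical sketch of this last remark (Python/NumPy; the 2 × 2 system is an arbitrary example, with det on the standard basis playing the role of vol), Cramer's rule solves Ax = b one coordinate at a time:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: the i-th coordinate is the
    determinant of A with its i-th column replaced by b, divided by
    det(A).  Assumes A is invertible."""
    n = A.shape[0]
    d = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)

assert np.allclose(A @ x, b)                  # it solves the system
assert np.allclose(x, np.linalg.solve(A, b))  # agrees with Gauss elimination
```

Replacing b by the standard basis vectors e 1,..., e n recovers the columns of A − 1 one at a time.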
In particular, it can be used to compute the inverse of L as is done in one of the exercises. To see how we can solve Lx = b, we first select a basis x 1,..., x n for V and then consider the problem of solving ![ $$\\left \[\\begin{array}{ccc} L\\left \({x}_{1}\\right \)&\\cdots &L\\left \({x}_{n}\\right \) \\end{array} \\right \]\\left \[\\begin{array}{c} {\\alpha }_{1}\\\\ \\vdots \\\\ {\\alpha }_{n} \\end{array} \\right \] = b.$$ ](A81414_1_En_5_Chapter_Equct.gif) Since Lx 1,..., Lx n is also a basis, we know that this forces ![ $$\\begin{array}{rcl} {\\alpha }_{1}& =& \\frac{\\mathrm{vol}\\left \(b,L\\left \({x}_{2}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \)} {\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \)}, \\\\ & & \\vdots \\\\ {\\alpha }_{i}& =& \\frac{\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{i-1}\\right \),b,L\\left \({x}_{i+1}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \)} {\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \)}, \\\\ & & \\vdots \\\\ {\\alpha }_{n}& =& \\frac{\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{n-1}\\right \),b\\right \)} {\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \)} \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ53.gif) with x = x 1α1 + ⋯ + x n α n being the solution. If we use b = x 1,..., x n , then we get the matrix representation for L − 1 by finding the coordinates to the solutions of Lx = x i . Example 5.6.2. 
As an example, let us see how we can solve ![ $$\\left \[\\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 1 &0 &\\cdots&0 \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1} \\\\ {\\xi }_{2}\\\\ \\vdots \\\\ {\\xi }_{n}\\end{array} \\right \] = \\left \[\\begin{array}{c} {\\beta }_{1} \\\\ {\\beta }_{2}\\\\ \\vdots \\\\ {\\beta }_{n} \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equcu.gif) First, we see directly that ![ $$\\begin{array}{rcl}{ \\xi }_{2}& =& {\\beta }_{1}, \\\\ {\\xi }_{3}& =& {\\beta }_{2}, \\\\ & & \\vdots \\\\ {\\xi }_{1}& =& {\\beta }_{n}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ54.gif) From Cramer's rule, we get that ![ $${\\xi }_{1} = \\frac{\\left \\vert \\begin{array}{cccc} {\\beta }_{1} & 1&\\cdots &0 \\\\ {\\beta }_{2} & 0& \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ {\\beta }_{ n}&0&\\cdots &0 \\end{array} \\right \\vert } {\\left \\vert \\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 1 &0 &\\cdots&0 \\end{array} \\right \\vert }$$ ](A81414_1_En_5_Chapter_Equcv.gif) A Laplace expansion along the first column tells us that ![ $$\\begin{array}{rcl} \\left \\vert \\begin{array}{cccc} {\\beta }_{1} & 1&\\cdots &0 \\\\ {\\beta }_{2} & 0& \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ {\\beta }_{ n}&0&\\cdots &0\\end{array} \\right \\vert & =& {\\beta }_{1}\\left \\vert \\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 0 &0 &\\cdots&0 \\end{array} \\right \\vert - {\\beta }_{2}\\left \\vert \\begin{array}{cccc} 1&0&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 0 &0 &\\cdots&0 \\end{array} \\right \\vert \\\\ & & \\cdots+{ \\left \(-1\\right \)}^{n+1}{\\beta }_{ n}\\left \\vert \\begin{array}{cccc} 1&0&\\cdots &0\\\\ 0 &1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &0\\\\ 0 &0 &\\cdots&1 
\\end{array} \\right \\vert \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ55.gif) Here, all the determinants are upper triangular, and all but the last have zeros on the diagonal. Thus, ![ $$\\left \\vert \\begin{array}{cccc} {\\beta }_{1} & 1&\\cdots &0 \\\\ {\\beta }_{2} & 0& \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ {\\beta }_{ n}&0&\\cdots &0 \\end{array} \\right \\vert ={ \\left \(-1\\right \)}^{n+1}{\\beta }_{ n}$$ ](A81414_1_En_5_Chapter_Equcw.gif) Similarly, ![ $$\\left \\vert \\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 1 &0 &\\cdots&0 \\end{array} \\right \\vert ={ \\left \(-1\\right \)}^{n+1},$$ ](A81414_1_En_5_Chapter_Equcx.gif) so ![ $${\\xi }_{1} = {\\beta }_{n}.$$ ](A81414_1_En_5_Chapter_Equcy.gif) Similar calculations will confirm our answers for ξ2,..., ξ n . By using b = e 1,..., e n , we can also find the inverse ![ $${\\left \[\\begin{array}{cccc} 0&1&\\cdots &0\\\\ 0 &0 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots &1\\\\ 1 &0 &\\cdots&0 \\end{array} \\right \]}^{-1} = \\left \[\\begin{array}{cccc} 0& 0 &\\cdots &1\\\\ 1 & 0 & \\ddots & \\vdots \\\\ \\vdots & \\ddots & \\ddots &0\\\\ 0 &\\cdots& 1 &0 \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equcz.gif) ### 5.6.1 Exercises 1. Let ![ $${A}_{n} = \\left \[\\begin{array}{ccccc} 2 & - 1& 0 & \\cdots& 0\\\\ - 1 & 2 & - 1 & \\cdots& 0 \\\\ 0 & - 1& 2 & \\ddots & \\vdots\\\\ \\vdots & \\vdots & \\ddots & \\ddots & - 1 \\\\ 0 & 0 & \\cdots& - 1& 2\\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equda.gif) (a) Compute detA n for n = 1, 2, 3, 4. (b) Compute A n − 1 for n = 1, 2, 3, 4. (c) Find detA n and A n − 1 for general n. 2. 
Given a nontrivial volume form vol on an n-dimensional vector space V, a linear operator L : V -> V and a basis x 1,..., x n for V, define the classical adjoint adjL : V -> V by ![ $$\\begin{array}{rcl} \\mathrm{adj}\\left \(L\\right \)\\left \(x\\right \)& =& \\mathrm{vol}\\left \(x,L\\left \({x}_{2}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \){x}_{1} \\\\ & & +\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),x,L\\left \({x}_{3}\\right \),\\ldots,L\\left \({x}_{n}\\right \)\\right \){x}_{2} \\\\ & & \\vdots \\\\ & & +\\mathrm{vol}\\left \(L\\left \({x}_{1}\\right \),\\ldots,L\\left \({x}_{n-1}\\right \),x\\right \){x}_{n}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ56.gif) (a) Show that L ∘ adjL = adjL ∘ L = detL1 V . (b) Show that if A is an n ×n matrix, then adjA = cofA t , where cofA is the cofactor matrix whose ij entry is (−1) i+j detA ij , where A ij is the n − 1 ×n − 1 matrix obtained from A by deleting the ith row and jth column (see Definition 5.4.2). (c) Show that adjL does not depend on the choice of basis x 1,..., x n or volume form vol. 3. (Lagrange Interpolation) Use Cramer's rule and ![ $$p\\left \(t\\right \) =\\det \\left \[\\begin{array}{cccc} 1 &\\cdots & 1 & 1\\\\ {\\lambda }_{ 1} & \\cdots & {\\lambda }_{n} & t\\\\ \\vdots & & \\vdots & \\vdots\\\\ {\\lambda }_{ 1}^{n}&\\cdots &{\\lambda }_{ n}^{n}&{t}^{n} \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equdb.gif) to find p ∈ P n such that pt 0 = b 0 ,..., pt n = b n , where ![ $${t}_{0},\\ldots,{t}_{n} \\in\\mathbb{C}$$ ](A81414_1_En_5_Chapter_IEq65.gif) are distinct. 4. 
Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_5_Chapter_IEq66.gif) where ![ $$\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq67.gif) is ![ $$\\mathbb{R}$$ ](A81414_1_En_5_Chapter_IEq68.gif) or ![ $$\\mathbb{C}.$$ ](A81414_1_En_5_Chapter_IEq69.gif) Show that there is a constant C n depending only on n such that if A is invertible, then ![ $$\\left \\Vert {A}^{-1}\\right \\Vert \\leq{C}_{ n}\\frac{{\\left \\Vert A\\right \\Vert }^{n-1}} {\\left \\vert \\det \\left \(A\\right \)\\right \\vert }.$$ ](A81414_1_En_5_Chapter_Equdc.gif) 5. Let A be an n ×n matrix whose entries are integers. If A is invertible show that A − 1 has integer entries if and only if detA = ± 1. 6. Decide when the system ![ $$\\left \[\\begin{array}{cc} \\alpha & - \\beta \\\\ \\beta& \\alpha \\end{array} \\right \]\\left \[\\begin{array}{c} {\\xi }_{1} \\\\ {\\xi }_{2}\\end{array} \\right \] = \\left \[\\begin{array}{c} {\\beta }_{1} \\\\ {\\beta }_{2}\\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equdd.gif) can be solved for all β1, β2. Write down a formula for the solution. 7. For which α is the matrix invertible ![ $$\\left \[\\begin{array}{ccc} \\alpha &\\alpha &1\\\\ \\alpha& 1 &1 \\\\ 1 & 1 &1\\end{array} \\right \]?$$ ](A81414_1_En_5_Chapter_Equde.gif) 8. In this exercise, we will see how Cramer used his rule to study Leibniz's problem of when Ax = b can be solved assuming that ![ $$A \\in \\mathrm{{ Mat}}_{\\left \(n+1\\right \)\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq70.gif) and ![ $$b \\in{\\mathbb{F}}^{n+1}$$ ](A81414_1_En_5_Chapter_IEq71.gif) (see Exercise 1 in Sect. 5.3). Assume in addition that rankA = n. Then, delete one row from A | b so that the resulting system A ′ | b ′ has a unique solution. Use Cramer's rule to solve A ′ x = b ′ and then insert this solution in the equation that was deleted. Show that this equation is satisfied if and only if detA | b = 0. 
Hint: The last equation is equivalent to a Laplace expansion of detA | b = 0 along the deleted row. 9. For ![ $$a,b,c \\in\\mathbb{C}$$ ](A81414_1_En_5_Chapter_IEq72.gif) consider the real equation aξ + bυ = c, where ![ $$\\xi,\\upsilon\\in\\mathbb{R}.$$ ](A81414_1_En_5_Chapter_IEq73.gif) (a) Write this as a system of two real equations. (b) Show that this system has a unique solution when ![ $$\\mathrm{Im}\\left \(\\bar{a}b\\right \)\\neq 0.$$ ](A81414_1_En_5_Chapter_IEq74.gif) (c) Use Cramer's rule to find a formula for ξ and υ that depends on ![ $$\\mathrm{Im}\\left \(\\bar{a}b\\right \),$$ ](A81414_1_En_5_Chapter_IEq75.gif) ![ $$\\mathrm{Im}\\left \(\\bar{a}c\\right \),$$ ](A81414_1_En_5_Chapter_IEq76.gif) ![ $$\\mathrm{Im}\\left \(\\bar{b}c\\right \).$$ ](A81414_1_En_5_Chapter_IEq77.gif) ## 5.7 The Characteristic Polynomial Now that we know that the determinant of a linear operator characterizes whether or not it is invertible, it would seem perfectly natural to define the characteristic polynomial as follows. Definition 5.7.1. The characteristic polynomial of L : V -> V is ![ $${\\chi }_{L}\\left \(t\\right \) =\\det \\left \(t{1}_{V } - L\\right \).$$ ](A81414_1_En_5_Chapter_Equdf.gif) Clearly, a zero for the function χ L t corresponds to a value of t where t1 V − L is not invertible and thus kert1 V − L≠0, but this means that such a t is an eigenvalue. We now need to justify why this definition yields the same polynomial we constructed using Gauss elimination in Sect. 2.3. Theorem 5.7.2. 
Let ![ $$A \\in \\mathrm{{ Mat}}_{n\\times n}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq78.gif) , then ![ $${\\chi }_{A}\\left \(t\\right \) =\\det \\left \(t{1}_{{\\mathbb{F}}^{n}} - A\\right \)$$ ](A81414_1_En_5_Chapter_IEq79.gif) is a monic polynomial of degree n whose roots in ![ $$\\mathbb{F}$$ ](A81414_1_En_5_Chapter_IEq80.gif) are the eigenvalues for ![ $$A : {\\mathbb{F}}^{n} \\rightarrow{\\mathbb{F}}^{n}.$$ ](A81414_1_En_5_Chapter_IEq81.gif) Moreover, this definition for the characteristic polynomial agrees with the one given using Gauss elimination. Proof. First we show that if L : V -> V is a linear operator on an n-dimensional vector space, then χ L t = dett1 V − L defines a monic polynomial of degree n. To see this, consider ![ $$\\begin{array}{rcl} \\det \\left \(t{1}_{V } - L\\right \)& =& \\mathrm{vol}\\left \(\\left \(t{1}_{V } - L\\right \){e}_{1},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ57.gif) and use linearity of vol to separate each of the terms t1 V − Le k = te k − Le k . When doing this, we get to factor out t several times so it is easy to see that we get a polynomial in t. 
To check the degree, we group the terms involving powers of t lower than n into the remainder O(t n − 1 ): ![ $$\\begin{array}{rcl} \\det \\left \(t{1}_{V } - A\\right \)& =& \\mathrm{vol}\\left \(\\left \(t{1}_{V } - L\\right \){e}_{1},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) \\\\ & =& t\\mathrm{vol}\\left \({e}_{1},\\left \(t{1}_{V } - L\\right \){e}_{2},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) \\\\ & & -\\mathrm{vol}\\left \(L\\left \({e}_{1}\\right \),\\left \(t{1}_{V } - L\\right \){e}_{2},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) \\\\ & =& t\\mathrm{vol}\\left \({e}_{1},\\left \(t{1}_{V } - L\\right \){e}_{2},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) + O\\left \({t}^{n-1}\\right \) \\\\ & =& {t}^{2}\\mathrm{vol}\\left \({e}_{ 1},{e}_{2},\\ldots,\\left \(t{1}_{V } - L\\right \){e}_{n}\\right \) + O\\left \({t}^{n-1}\\right \) \\\\ & & \\vdots \\\\ & =& {t}^{n}\\mathrm{vol}\\left \({e}_{ 1},{e}_{2},\\ldots,{e}_{n}\\right \) + O\\left \({t}^{n-1}\\right \) \\\\ & =& {t}^{n} + O\\left \({t}^{n-1}\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ58.gif) In Theorem 2.3.6, we proved that ![ $$\\left \(t{1}_{{\\mathbb{F}}^{n}} - A\\right \) = PU,$$ ](A81414_1_En_5_Chapter_IEq82.gif) where ![ $$U = \\left \[\\begin{array}{llll} {r}_{1}\\left \(t\\right \)&{_\\ast} &\\cdots &{_\\ast} \\\\ 0 &{r}_{2}\\left \(t\\right \)&\\cdots &{_\\ast}\\\\ \\vdots &\\vdots &\\ddots &\\vdots \\\\ 0 &0 &\\cdots &{r}_{n}\\left \(t\\right \) \\end{array} \\right \]$$ ](A81414_1_En_5_Chapter_Equdg.gif) and P is the product of the elementary matrices: (1) I kl interchanging rows, (2) R kl rt which multiplies row l by a function rt and adds it to row k, and (3) M k α which simply multiplies row k by ![ $$\\alpha\\in\\mathbb{F} -\\left \\{0\\right \\}$$ ](A81414_1_En_5_Chapter_IEq83.gif). 
For each fixed t, we have ![ $$\\begin{array}{rcl} \\det \\left \({I}_{kl}\\right \)& =& -1, \\\\ \\det \\left \({R}_{kl}\\left \(r\\left \(t\\right \)\\right \)\\right \)& =& 1, \\\\ \\det \\left \({M}_{k}\\left \(\\alpha \\right \)\\right \)& =& \\alpha.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ59.gif) This means that ![ $$\\begin{array}{rcl} \\det \\left \(t{1}_{{\\mathbb{F}}^{n}} - A\\right \)& =& \\det \\left \(PU\\right \) \\\\ & =& \\det \\left \(P\\right \)\\det \\left \(U\\right \) \\\\ & =& \\det \\left \(P\\right \){r}_{1}\\left \(t\\right \)\\cdots {r}_{n}\\left \(t\\right \), \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ60.gif) where detP is a nonzero scalar that does not depend on t and r 1 t⋯r n t is the function that we used to define the characteristic polynomial in Sect. 2.3. This shows that the two definitions have to agree. Remark 5.7.3. Recall that the Frobenius canonical form also led us to a rigorous definition of the characteristic polynomial (see Sect. 2.7). Moreover, that definition definitely agrees with the definition from Sect. 2.3. It is also easy to see, using the above proof, that it agrees with the definition using determinants. With this new definition of the characteristic polynomial, we can establish some further interesting properties. Proposition 5.7.4. Assume that L : V -> V is a linear operator on an n-dimensional vector space with ![ $${\\chi }_{L}\\left \(t\\right \) = {t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0}.$$ ](A81414_1_En_5_Chapter_Equdh.gif) Then ![ $$\\begin{array}{rcl}{ \\alpha }_{n-1}& =& -\\mathrm{tr}L, \\\\ {\\alpha }_{0}& =&{ \\left \(-1\\right \)}^{n}\\det L.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ61.gif) Proof. 
To show the last property, just note that ![ $$\\begin{array}{rcl}{ \\alpha }_{0}& =& {\\chi }_{L}\\left \(0\\right \) \\\\ & =& \\det \\left \(-L\\right \) \\\\ & =&{ \\left \(-1\\right \)}^{n}\\det \\left \(L\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ62.gif) The first property takes a little more thinking. We use the calculation that led to the formula ![ $$\\begin{array}{rcl} \\det \\left \(t{1}_{V } - A\\right \)& =& \\mathrm{vol}\\left \(\\left \(t{1}_{V } - L\\right \){x}_{1},\\ldots,\\left \(t{1}_{V } - L\\right \){x}_{n}\\right \) \\\\ & =& {t}^{n} + O\\left \({t}^{n-1}\\right \) \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ63.gif) from the previous proof. Evidently, we have to calculate the coefficient in front of t n − 1. That term must look like ![ $${t}^{n-1}\\left \(\\mathrm{vol}\\left \(-L\\left \({e}_{ 1}\\right \),{e}_{2},\\ldots,{e}_{n}\\right \) + \\cdots+\\mathrm{ vol}\\left \({e}_{1},{e}_{2},\\ldots,-L\\left \({e}_{n}\\right \)\\right \)\\right \).$$ ](A81414_1_En_5_Chapter_Equdi.gif) Thus, we have to show ![ $$\\mathrm{tr}\\left \(L\\right \) =\\mathrm{ vol}\\left \(L\\left \({e}_{1}\\right \),{e}_{2},\\ldots,{e}_{n}\\right \) + \\cdots+\\mathrm{ vol}\\left \({e}_{1},{e}_{2},\\ldots,L\\left \({e}_{n}\\right \)\\right \).$$ ](A81414_1_En_5_Chapter_Equdj.gif) To see this, expand ![ $$L\\left \({e}_{i}\\right \) ={ \\sum\\nolimits }_{j=1}^{n}{e}_{ j}{\\alpha }_{ji}$$ ](A81414_1_En_5_Chapter_Equdk.gif) so that the α ji are the entries of the matrix representation [L] and trL = α11 + ⋯ + α nn . 
Next note that if we insert that expansion in, say, volLe 1, e 2,..., e n , then we have ![ $$\\begin{array}{rcl} \\mathrm{vol}\\left \(L\\left \({e}_{1}\\right \),{e}_{2}\\ldots,{e}_{n}\\right \)& =& \\mathrm{vol}\\left \({\\sum\\nolimits }_{j=1}^{n}{e}_{ j}{\\alpha }_{j1},{e}_{2},\\ldots,{e}_{n}\\right \) \\\\ & =& \\mathrm{vol}\\left \({e}_{1}{\\alpha }_{11},{e}_{2},\\ldots,{e}_{n}\\right \) \\\\ & =& {\\alpha }_{11}\\mathrm{vol}\\left \({e}_{1},{e}_{2},\\ldots,{e}_{n}\\right \) \\\\ & =& {\\alpha }_{11}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ64.gif) This implies that ![ $$\\begin{array}{rcl} \\mathrm{tr}\\left \(L\\right \)& =& {\\alpha }_{11} + \\cdots+ {\\alpha }_{nn} \\\\ & =& \\mathrm{vol}\\left \(L\\left \({e}_{1}\\right \),{e}_{2},\\ldots,{e}_{n}\\right \) + \\\\ & & \\cdots+\\mathrm{ vol}\\left \({e}_{1},{e}_{2},\\ldots,L\\left \({e}_{n}\\right \)\\right \). \\\\ & & \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ65.gif) Proposition 5.7.5. Assume that L : V -> V is a linear operator on a finite-dimensional vector space. If M ⊂ V is an L-invariant subspace, then ![ $${\\chi }_{L{\\vert }_{M}}\\left \(t\\right \)$$ ](A81414_1_En_5_Chapter_IEq84.gif) divides χ L t. Proof. Select a basis x 1,..., x n for V such that x 1,..., x k form a basis for M. 
Then, the matrix representation for L in this basis looks like ![ $$\\left \[L\\right \] = \\left \[\\begin{array}{cc} {A}_{11} & {A}_{12} \\\\ 0 &{A}_{22} \\end{array} \\right \],$$ ](A81414_1_En_5_Chapter_Equdl.gif) where ![ $${A}_{11} \\in \\mathrm{{ Mat}}_{k\\times k}\\left \(\\mathbb{F}\\right \),$$ ](A81414_1_En_5_Chapter_IEq85.gif) ![ $${A}_{12} \\in \\mathrm{{ Mat}}_{k\\times \\left \(n-k\\right \)}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq86.gif), and ![ $${A}_{22} \\in \\mathrm{{ Mat}}_{\\left \(n-k\\right \)\\times \\left \(n-k\\right \)}\\left \(\\mathbb{F}\\right \).$$ ](A81414_1_En_5_Chapter_IEq87.gif) This means that ![ $$t{1}_{{\\mathbb{F}}^{n}}-\\left \[L\\right \] = \\left \[\\begin{array}{cc} t{1}_{{\\mathbb{F}}^{k}} - {A}_{11} & {A}_{12} \\\\ 0 &t{1}_{{\\mathbb{F}}^{n-k}} - {A}_{22} \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equdm.gif) Thus, we have ![ $$\\begin{array}{rcl}{ \\chi }_{L}\\left \(t\\right \)& =& {\\chi }_{\\left \[L\\right \]}\\left \(t\\right \) \\\\ & =& \\det \\left \(t{1}_{{\\mathbb{F}}^{n}} -\\left \[L\\right \]\\right \) \\\\ & =& \\det \\left \(t{1}_{{\\mathbb{F}}^{k}} - {A}_{11}\\right \)\\det \\left \(t{1}_{{\\mathbb{F}}^{n-k}} - {A}_{22}\\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ66.gif) Now, A 11 is the matrix representation for L | M , so we have proven ![ $${\\chi }_{L}\\left \(t\\right \) = {\\chi }_{L{\\vert }_{M}}\\left \(t\\right \)p\\left \(t\\right \),$$ ](A81414_1_En_5_Chapter_Equdn.gif) where pt is some polynomial. ### 5.7.1 Exercises 1. Let K, L : V -> V be linear operators on a finite-dimensional vector space. (a) Show that detK − tL is a polynomial in t. (b) If K or L is invertible show that dettI − L ∘ K = dettI − K ∘ L. (c) Show part b in general. 2. Let V be a finite-dimensional real vector space and L : V -> V a linear operator. (a) Show that the number of complex roots of the characteristic polynomial is even. Hint: They come in conjugate pairs. 
(b) If ![ $${\\dim }_{\\mathbb{R}}V$$ ](A81414_1_En_5_Chapter_IEq88.gif) is odd, then L has a real eigenvalue whose sign is the same as that of detL. (c) If ![ $${\\dim }_{\\mathbb{R}}V$$ ](A81414_1_En_5_Chapter_IEq89.gif) is even and detL < 0, then L has two real eigenvalues, one negative and one positive. 3. Let ![ $$A = \\left \[\\begin{array}{cc} \\alpha &\\gamma\\\\ \\beta&\\delta \\end{array} \\right \].$$ ](A81414_1_En_5_Chapter_Equdo.gif) Show that ![ $$\\begin{array}{rcl}{ \\chi }_{A}\\left \(t\\right \)& =& {t}^{2} -\\left \(\\mathrm{tr}A\\right \)t +\\det A \\\\ & =& {t}^{2} -\\left \(\\alpha+ \\delta \\right \)t + \\left \(\\alpha \\delta- \\beta \\gamma \\right \).\\end{array}$$ ](A81414_1_En_5_Chapter_Equ67.gif) 4. Let ![ $$A \\in \\mathrm{{ Mat}}_{3\\times 3}\\left \(\\mathbb{F}\\right \)$$ ](A81414_1_En_5_Chapter_IEq90.gif) and A = α ij . Show that ![ $${\\chi }_{A}\\left \(t\\right \) = {t}^{3} -\\left \(\\mathrm{tr}A\\right \){t}^{2} + \\left \(\\left \\vert {A}_{ 11}\\right \\vert + \\left \\vert {A}_{22}\\right \\vert + \\left \\vert {A}_{33}\\right \\vert \\right \)t -\\det A,$$ ](A81414_1_En_5_Chapter_Equdp.gif) where A ii is the submatrix we get from eliminating the ith row and column in A. 5. Show that if L is invertible, then ![ $${\\chi }_{{L}^{-1}}\\left \(t\\right \) = \\frac{{\\left \(-t\\right \)}^{n}} {\\det L} {\\chi }_{L}\\left \({t}^{-1}\\right \).$$ ](A81414_1_En_5_Chapter_Equdq.gif) 6. Let L : V -> V be a linear operator on a finite-dimensional inner product space with ![ $${\\chi }_{L}\\left \(t\\right \) = {t}^{n} + {a}_{ n-1}{t}^{n-1} + \\cdots+ {a}_{ 1}t + {a}_{0}.$$ ](A81414_1_En_5_Chapter_Equdr.gif) Show that ![ $${\\chi }_{{L}^{{_\\ast}}}\\left \(t\\right \) = {t}^{n} +\\bar{ {a}}_{ n-1}{t}^{n-1} + \\cdots+\\bar{ {a}}_{ 1}t +\\bar{ {a}}_{0}.$$ ](A81414_1_En_5_Chapter_Equds.gif) 7. 
Let ![ $${\\chi }_{L}\\left \(t\\right \) = {t}^{n} + {a}_{ n-1}{t}^{n-1} + \\cdots+ {a}_{ 1}t + {a}_{0}$$ ](A81414_1_En_5_Chapter_Equdt.gif) be the characteristic polynomial for L : V -> V. If vol is a volume form on V, show that ![ $$\\begin{array}{rcl} &{ \\left \(-1\\right \)}^{k}{a}_{n-k}\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \) & \\\\ & \\quad ={ \\sum\\nolimits }_{{i}_{1}<{i}_{2}<\\cdots <{i}_{k}}\\mathrm{vol}\\left \(\\ldots,{x}_{{i}_{1}-1},L\\left \({x}_{{i}_{1}}\\right \),{x}_{{i}_{1}+1},\\ldots,{x}_{{i}_{k}-1},L\\left \({x}_{{i}_{k}}\\right \),{x}_{{i}_{k}+1},\\ldots \\right \),& \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ68.gif) i.e., we are summing over all possible choices of i 1 < i 2 < ⋯ < i k and in each summand replacing ![ $${x}_{{i}_{j}}$$ ](A81414_1_En_5_Chapter_IEq91.gif) by ![ $$L\\left \({x}_{{i}_{j}}\\right \).$$ ](A81414_1_En_5_Chapter_IEq92.gif) 8. Suppose we have a sequence ![ $${V }_{1}\\frac{{L}_{1}} {\\rightarrow }{V }_{2}\\frac{{L}_{2}} {\\rightarrow }{V }_{3}$$ ](A81414_1_En_5_Chapter_IEq93.gif) of linear maps, where L 1 is one-to-one, L 2 is onto, and imL 1 = kerL 2. Show that dimV 2 = dimV 1 + dimV 3. Assume furthermore that we have linear operators K i : V i -> V i such that the diagram commutes ![ $$\\begin{array}{ccccc} {V }_{1} & \\frac{{L}_{1}} {\\rightarrow } & {V }_{2} & \\frac{{L}_{2}} {\\rightarrow } & {V }_{3} \\\\ {K}_{1} \\uparrow & &{K}_{2} \\uparrow & &{K}_{3} \\uparrow\\\\ {V }_{1} & \\frac{{L}_{1}} {\\rightarrow } & {V }_{2} & \\frac{{L}_{2}} {\\rightarrow } & {V }_{3}\\end{array}$$ ](A81414_1_En_5_Chapter_Equdu.gif) Show that ![ $${\\chi }_{{K}_{2}}\\left \(t\\right \) = {\\chi }_{{K}_{1}}\\left \(t\\right \){\\chi }_{{K}_{3}}\\left \(t\\right \).$$ ](A81414_1_En_5_Chapter_Equdv.gif) 9. 
Using the definition ![ $$\\det A = \\sum\\nolimits \\mathrm{sign}\\left \({i}_{1},\\ldots,{i}_{n}\\right \){\\alpha }_{{i}_{1}1}\\cdots {\\alpha }_{{i}_{n}n}$$ ](A81414_1_En_5_Chapter_Equdw.gif) reprove the results from this section for matrices. 10. (The Newton Identities) In this exercise, we wish to generalize the formulae α n − 1 = − trL, α0 = − 1 n detL, for the characteristic polynomial ![ $${t}^{n} + {\\alpha }_{ n-1}{t}^{n-1} + \\cdots+ {\\alpha }_{ 1}t + {\\alpha }_{0} = \\left \(t - {\\lambda }_{1}\\right \)\\cdots \\left \(t - {\\lambda }_{n}\\right \)$$ ](A81414_1_En_5_Chapter_Equdx.gif) of L. (a) Prove that ![ $${\\alpha }_{k} ={ \\left \(-1\\right \)}^{n-k}{ \\sum\\nolimits }_{{i}_{1}<\\cdots <{i}_{n-k}}{\\lambda }_{{i}_{1}}\\cdots {\\lambda }_{{i}_{n-k}}.$$ ](A81414_1_En_5_Chapter_Equdy.gif) (b) Prove that ![ $$\\begin{array}{rcl}{ \\left \(\\mathrm{tr}L\\right \)}^{k}& =&{ \\left \({\\lambda }_{ 1} + \\cdots+ {\\lambda }_{n}\\right \)}^{k}, \\\\ \\mathrm{tr}\\left \({L}^{k}\\right \)& =& {\\lambda }_{ 1}^{k} + \\cdots+ {\\lambda }_{ n}^{k}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ69.gif) (c) Prove ![ $$\\begin{array}{rcl}{ \\left \(\\mathrm{tr}L\\right \)}^{2}& =& \\mathrm{tr}\\left \({L}^{2}\\right \) + 2{\\sum\\nolimits }_{i<j}{\\lambda }_{i}{\\lambda }_{j} \\\\ & =& \\mathrm{tr}\\left \({L}^{2}\\right \) + 2{\\alpha }_{ n-2}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ70.gif) (d) Prove more generally that ![ $$\\begin{array}{rcl}{ \\left \(\\mathrm{tr}L\\right \)}^{k}& =& k!{\\left \(-1\\right \)}^{k}{\\alpha }_{ n-k} \\\\ & & +\\left \({ k \\atop 2} \\right \){\\left \(\\mathrm{tr}L\\right \)}^{k-2}\\mathrm{tr}{L}^{2} \\\\ & & +\\left \(\\left \({ k \\atop 3} \\right \) -\\left \({ k \\atop 2} \\right \)\\right \){\\left \(\\mathrm{tr}L\\right \)}^{k-3}\\mathrm{tr}{L}^{3} \\\\ & & +\\left \(\\left \({ k \\atop 4} \\right \) -\\left \({ k \\atop 3} \\right \) + \\left \({ k \\atop 2} \\right \)\\right \){\\left \(\\mathrm{tr}L\\right 
\)}^{n-4}\\mathrm{tr}{L}^{4} \\\\ & & \\vdots \\\\ & & +\\left \(\\left \({ k \\atop k} \\right \) -\\left \({ k \\atop k - 1} \\right \) + \\cdots+{ \\left \(-1\\right \)}^{k}\\left \({ k \\atop 2} \\right \)\\right \)\\mathrm{tr}{L}^{k}.\\end{array}$$ ](A81414_1_En_5_Chapter_Equ71.gif) (e) If trL = 0, then ![ $$\\left \(\\left \({ n \\atop n} \\right \) -\\left \({ n \\atop n - 1} \\right \) + \\cdots+{ \\left \(-1\\right \)}^{n}\\left \({ n \\atop 2} \\right \)\\right \)\\mathrm{tr}{L}^{n} = n!\\det L.$$ ](A81414_1_En_5_Chapter_Equdz.gif) (f) If trL = trL 2 = ⋯ = trL n = 0, then χ L t = t n . ## 5.8 Differential Equations* We are now going to apply the theory of determinants to the study of linear differential equations. We start with the system ![ $$L\\left \(x\\right \) =\\dot{ x} - Ax = b,$$ ](A81414_1_En_5_Chapter_IEq94.gif) where ![ $$\\begin{array}{rcl} x\\left \(t\\right \)& \\in & {\\mathbb{C}}^{n}, \\\\ b& \\in & {\\mathbb{C}}^{n} \\\\ A& \\in & \\mathrm{{Mat}}_{n\\times n}\\left \(\\mathbb{C}\\right \) \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ72.gif) and xt is the vector-valued function we need to find. We know that the homogeneous problem Lx = 0 has n linearly independent solutions x 1,..., x n . More generally, we can show something quite interesting about collections of solutions. Lemma 5.8.1. 
Let x 1 ,...,x n be solutions to the homogeneous problem Lx = 0; then ![ $$\\frac{d} {\\mathit{dt}}\\left \(\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\right \) =\\mathrm{ tr}\\left \(A\\right \)\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \).$$ ](A81414_1_En_5_Chapter_Equea.gif) In particular, ![ $$\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\left \(t\\right \) =\\mathrm{ vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\left \({t}_{0}\\right \)\\exp \\left \(\\mathrm{tr}\\left \(A\\right \)\\left \(t - {t}_{0}\\right \)\\right \).$$ ](A81414_1_En_5_Chapter_Equeb.gif) Moreover, x 1 ,...,x n are linearly independent solutions if and only if x 1 t 0 ,... ![ $${x}_{n}\\left \({t}_{0}\\right \) \\in{\\mathbb{C}}^{n}$$ ](A81414_1_En_5_Chapter_IEq95.gif) are linearly independent. Each of these two conditions in turn implies that x 1 t,... ![ $${x}_{n}\\left \(t\\right \) \\in{\\mathbb{C}}^{n}$$ ](A81414_1_En_5_Chapter_IEq96.gif) are linearly independent for all t. Proof. To compute the derivative, we find the Taylor expansion for ![ $$\\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\left \(t + h\\right \)$$ ](A81414_1_En_5_Chapter_Equec.gif) in terms of h and then identify the term that is linear in h. This is done along the lines of our proof in the previous section that α n − 1 = − trA, where α n − 1 is the coefficient in front of t n − 1 in the characteristic polynomial. 
![ $$\\begin{array}{rcl} & \\mathrm{vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\left \(t + h\\right \) & \\\\ & \\quad =\\mathrm{ vol}\\left \({x}_{1}\\left \(t + h\\right \),\\ldots,{x}_{n}\\left \(t + h\\right \)\\right \) & \\\\ & \\quad =\\mathrm{ vol}\\left \({x}_{1}\\left \(t\\right \) + A{x}_{1}\\left \(t\\right \)h + o\\left \(h\\right \),\\ldots,{x}_{n}\\left \(t\\right \) + A{x}_{n}\\left \(t\\right \)h + o\\left \(h\\right \)\\right \)& \\\\ & \\quad =\\mathrm{ vol}\\left \({x}_{1}\\left \(t\\right \),\\ldots,{x}_{n}\\left \(t\\right \)\\right \) & \\\\ & \\quad + h\\mathrm{vol}\\left \(A{x}_{1}\\left \(t\\right \),\\ldots,{x}_{n}\\left \(t\\right \)\\right \) & \\\\ & \\quad \\vdots & \\\\ & \\quad + h\\mathrm{vol}\\left \({x}_{1}\\left \(t\\right \),\\ldots,A{x}_{n}\\left \(t\\right \)\\right \) & \\\\ & \\quad + o\\left \(h\\right \) & \\\\ & \\quad =\\mathrm{ vol}\\left \({x}_{1}\\left \(t\\right \),\\ldots,{x}_{n}\\left \(t\\right \)\\right \) + h\\mathrm{tr}\\left \(A\\right \)\\mathrm{vol}\\left \({x}_{1}\\left \(t\\right \),\\ldots,{x}_{n}\\left \(t\\right \)\\right \) + o\\left \(h\\right \).& \\\\ \\end{array}$$ ](A81414_1_En_5_Chapter_Equ73.gif) Thus, ![ $$v\\left \(t\\right \) =\\mathrm{ vol}\\left \({x}_{1},\\ldots,{x}_{n}\\right \)\\left \(t\\right \)$$ ](A81414_1_En_5_Chapter_Equed.gif) solves the differential equation ![ $$\\dot{v} =\\mathrm{ tr}\\left \(A\\right \)v$$ ](A81414_1_En_5_Chapter_Equee.gif) implying that ![ $$v\\left \(t\\right \) = v\\left \({t}_{0}\\right \)\\exp \\left \(\\mathrm{tr}\\left \(A\\right \)\\left \(t - {t}_{0}\\right \)\\right \).$$ ](A81414_1_En_5_Chapter_Equef.gif) In particular, we see that vt≠0 for all t provided vt 0≠0. It remains to prove that x 1,..., x n are linearly independent solutions if and only if x 1 t 0,..., ![ $${x}_{n}\\left \({t}_{0}\\right \) \\in{\\mathbb{C}}^{n}$$ ](A81414_1_En_5_Chapter_IEq97.gif) are linearly independent. 
It is obvious that $x_1,\ldots,x_n$ are linearly independent if $x_1(t_0),\ldots,x_n(t_0) \in \mathbb{C}^n$ are linearly independent. Conversely, if we assume that $x_1(t_0),\ldots,x_n(t_0) \in \mathbb{C}^n$ are linearly dependent, then we can find $\alpha_1,\ldots,\alpha_n \in \mathbb{C}$, not all zero, so that
$$\alpha_1 x_1(t_0) + \cdots + \alpha_n x_n(t_0) = 0.$$
Uniqueness of solutions to the initial value problem $Lx = 0$, $x(t_0) = 0$, then implies that
$$x(t) = \alpha_1 x_1(t) + \cdots + \alpha_n x_n(t) \equiv 0$$
for all $t$.

We now claim that the inhomogeneous problem can be solved provided we have found a linearly independent set of solutions $x_1,\ldots,x_n$ to the homogeneous equation. The formula comes from Cramer's rule but is known as the variation of constants method. We assume that the solution $x$ to
$$\begin{aligned}
L(x) &= \dot x - Ax = b,\\
x(t_0) &= 0
\end{aligned}$$
looks like
$$x(t) = c_1(t)x_1(t) + \cdots + c_n(t)x_n(t),$$
where $c_1(t),\ldots,c_n(t) \in C^{\infty}(\mathbb{R},\mathbb{C})$ are functions rather than constants.
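This ansatz turns the inhomogeneous problem into a pointwise linear system: at each fixed $t$, the derivatives $\dot c_1(t),\ldots,\dot c_n(t)$ get solved for by Cramer's rule with the columns $x_1(t),\ldots,x_n(t)$ as coefficients. That single step can be previewed numerically; in this NumPy sketch the $2\times 2$ column data and right-hand side are made-up values, not anything from the text:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # columns play the role of x_1(t), x_2(t) at a fixed t
b = np.array([3.0, 1.0])     # forcing vector b(t) at the same t

vol = np.linalg.det
# Cramer's rule: replace the k-th column by b and divide by vol(x_1, x_2).
c1_dot = vol(np.column_stack([b, X[:, 1]])) / vol(X)
c2_dot = vol(np.column_stack([X[:, 0], b])) / vol(X)

print(c1_dot, c2_dot)            # Cramer's-rule quotients
print(np.linalg.solve(X, b))     # the same numbers from a direct linear solve
```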
Then,
$$\begin{aligned}
\dot x &= c_1\dot x_1 + \cdots + c_n\dot x_n + \dot c_1 x_1 + \cdots + \dot c_n x_n\\
&= c_1 A x_1 + \cdots + c_n A x_n + \dot c_1 x_1 + \cdots + \dot c_n x_n\\
&= A(x) + \dot c_1 x_1 + \cdots + \dot c_n x_n.
\end{aligned}$$
In other words,
$$L(x) = \dot c_1 x_1 + \cdots + \dot c_n x_n.$$
This means that for each $t$, the values $\dot c_1(t),\ldots,\dot c_n(t)$ should solve the linear equation
$$\dot c_1 x_1 + \cdots + \dot c_n x_n = b.$$
Cramer's rule for solutions to linear systems (see Sect. 5.6) then tells us that
$$\begin{aligned}
\dot c_1(t) &= \frac{\mathrm{vol}\left(b,\ldots,x_n\right)(t)}{\mathrm{vol}\left(x_1,\ldots,x_n\right)(t)},\\
&\;\;\vdots\\
\dot c_n(t) &= \frac{\mathrm{vol}\left(x_1,\ldots,b\right)(t)}{\mathrm{vol}\left(x_1,\ldots,x_n\right)(t)},
\end{aligned}$$
implying that
$$\begin{aligned}
c_1(t) &= \int_{t_0}^{t} \frac{\mathrm{vol}\left(b,\ldots,x_n\right)(s)}{\mathrm{vol}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s,\\
&\;\;\vdots\\
c_n(t) &= \int_{t_0}^{t} \frac{\mathrm{vol}\left(x_1,\ldots,b\right)(s)}{\mathrm{vol}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s.
\end{aligned}$$
In practice, there are more efficient methods that can be used when we know something about $b$. These methods also use linear algebra. Having dealt with systems, we next turn to higher order equations: $Lx = p(D)x = f$, where
$$p(D) = D^n + \alpha_{n-1}D^{n-1} + \cdots + \alpha_1 D + \alpha_0$$
is a polynomial with complex or real coefficients and $f(t) \in C^{\infty}(\mathbb{R},\mathbb{C})$. This can be translated into a system $\dot z - Az = b$, or
$$\dot z - \begin{bmatrix}
0 & 1 & \cdots & 0\\
\vdots & \ddots & \vdots & \vdots\\
0 & \cdots & 0 & 1\\
-\alpha_0 & \cdots & -\alpha_{n-2} & -\alpha_{n-1}
\end{bmatrix} z = \begin{bmatrix} 0\\ \vdots\\ 0\\ f \end{bmatrix},$$
by using
$$z = \begin{bmatrix} x\\ Dx\\ \vdots\\ D^{n-1}x \end{bmatrix}.$$
If we have $n$ functions $x_1,\ldots,x_n \in C^{\infty}(\mathbb{R},\mathbb{C})$, then the Wronskian is defined as
$$\begin{aligned}
\mathrm{W}\left(x_1,\ldots,x_n\right)(t) &= \mathrm{vol}\left(z_1,\ldots,z_n\right)(t)\\
&= \det\begin{bmatrix}
x_1(t) & \cdots & x_n(t)\\
(Dx_1)(t) & \cdots & (Dx_n)(t)\\
\vdots & \ddots & \vdots\\
(D^{n-1}x_1)(t) & \cdots & (D^{n-1}x_n)(t)
\end{bmatrix}.
\end{aligned}$$
In the
case where $x_1,\ldots,x_n$ solve $Lx = p(D)x = 0$, this tells us that
$$\mathrm{W}\left(x_1,\ldots,x_n\right)(t) = \mathrm{W}\left(x_1,\ldots,x_n\right)(t_0)\exp\left(-\alpha_{n-1}(t-t_0)\right).$$
Finally, we can again try the variation of constants method to solve the inhomogeneous equation. It is slightly tricky to do this directly by assuming that
$$x(t) = c_1(t)x_1(t) + \cdots + c_n(t)x_n(t).$$
Instead, we use the system $\dot z - Az = b$ and guess that
$$z = c_1(t)z_1(t) + \cdots + c_n(t)z_n(t).$$
This certainly implies that
$$x(t) = c_1(t)x_1(t) + \cdots + c_n(t)x_n(t),$$
but the converse is not true.
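The matrix in the reduction from $p(D)x = f$ to a first-order system is the companion matrix of $p$, and its characteristic polynomial is $p$ itself, so its eigenvalues are precisely the roots of $p$. The following NumPy sketch builds it with the same coefficient layout as the block matrix above; the sample polynomial $D^2 - 2D + 1$ is just a convenient test case:

```python
import numpy as np

def companion(alphas):
    """Companion matrix of p(D) = D^n + a_{n-1} D^{n-1} + ... + a_1 D + a_0,
    where alphas = [a_0, a_1, ..., a_{n-1}]: a shifted identity block above
    a last row of negated coefficients, as in the block matrix in the text."""
    n = len(alphas)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)
    C[-1, :] = -np.asarray(alphas)
    return C

# p(D) = D^2 - 2D + 1 = (D - 1)^2, i.e. a_0 = 1, a_1 = -2.
C = companion([1.0, -2.0])
coeffs = np.poly(C)   # characteristic polynomial of C, highest power first
print(coeffs)         # recovers the coefficients 1, -2, 1 of p
```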
As above, we get
$$\begin{aligned}
c_1(t) &= \int_{t_0}^{t} \frac{\mathrm{vol}\left(b,\ldots,z_n\right)(s)}{\mathrm{vol}\left(z_1,\ldots,z_n\right)(s)}\,\mathrm{d}s,\\
&\;\;\vdots\\
c_n(t) &= \int_{t_0}^{t} \frac{\mathrm{vol}\left(z_1,\ldots,b\right)(s)}{\mathrm{vol}\left(z_1,\ldots,z_n\right)(s)}\,\mathrm{d}s.
\end{aligned}$$
Here $\mathrm{vol}\left(z_1,\ldots,z_n\right) = \mathrm{W}\left(x_1,\ldots,x_n\right)$. The numerator can also be simplified by using a Laplace expansion along the column vector $b$. This gives us
$$\begin{aligned}
\mathrm{vol}\left(b,z_2,\ldots,z_n\right) &= \begin{vmatrix}
0 & x_2 & \cdots & x_n\\
\vdots & \vdots & & \vdots\\
0 & D^{n-2}x_2 & \cdots & D^{n-2}x_n\\
b & D^{n-1}x_2 & \cdots & D^{n-1}x_n
\end{vmatrix}\\
&= (-1)^{n+1}\,b\begin{vmatrix}
x_2 & \cdots & x_n\\
\vdots & & \vdots\\
D^{n-2}x_2 & \cdots & D^{n-2}x_n
\end{vmatrix}\\
&= (-1)^{n+1}\,b\,\mathrm{W}\left(x_2,\ldots,x_n\right).
\end{aligned}$$
Thus,
$$\begin{aligned}
c_1(t) &= (-1)^{n+1}\int_{t_0}^{t} \frac{b(s)\,\mathrm{W}\left(x_2,\ldots,x_n\right)(s)}{\mathrm{W}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s,\\
&\;\;\vdots\\
c_n(t) &= (-1)^{n+n}\int_{t_0}^{t} \frac{b(s)\,\mathrm{W}\left(x_1,\ldots,x_{n-1}\right)(s)}{\mathrm{W}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s,
\end{aligned}$$
and therefore, a solution to the inhomogeneous equation is given by
$$\begin{aligned}
x(t) &= \left((-1)^{n+1}\int_{t_0}^{t} \frac{b(s)\,\mathrm{W}\left(x_2,\ldots,x_n\right)(s)}{\mathrm{W}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s\right)x_1(t) + \cdots\\
&\quad + \left((-1)^{n+n}\int_{t_0}^{t} \frac{b(s)\,\mathrm{W}\left(x_1,\ldots,x_{n-1}\right)(s)}{\mathrm{W}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s\right)x_n(t)\\
&= \sum_{k=1}^{n}(-1)^{n+k}x_k(t)\int_{t_0}^{t} \frac{b(s)\,\mathrm{W}\left(x_1,\ldots,\hat{x}_k,\ldots,x_n\right)(s)}{\mathrm{W}\left(x_1,\ldots,x_n\right)(s)}\,\mathrm{d}s.
\end{aligned}$$
Let us try to solve a concrete problem using these methods.

Example 5.8.2. Find the complete set of solutions to $\ddot x - 2\dot x + x = \exp(t)$. We see that $\ddot x - 2\dot x + x = (D-1)^2 x$; thus the characteristic equation is $(\lambda - 1)^2 = 0$. This means that we only get one solution $x_1 = \exp(t)$ from the eigenvalue $\lambda = 1$. The other solution is then given by $x_2(t) = t\exp(t)$.
We now compute the Wronskian to check that they are linearly independent:
$$\begin{aligned}
\mathrm{W}\left(x_1,x_2\right) &= \begin{vmatrix}
\exp(t) & t\exp(t)\\
\exp(t) & (1+t)\exp(t)
\end{vmatrix}\\
&= \exp(2t)\begin{vmatrix}
1 & t\\
1 & 1+t
\end{vmatrix}\\
&= \left((1+t) - t\right)\exp(2t)\\
&= \exp(2t).
\end{aligned}$$
Note, we could also have found $x_2$ from our knowledge that
$$\mathrm{W}\left(x_1,x_2\right)(t) = \mathrm{W}\left(x_1,x_2\right)(t_0)\exp\left(2(t-t_0)\right).$$
Assuming that $t_0 = 0$ and we want $\mathrm{W}(x_1,x_2)(t_0) = 1$, we simply need to solve
$$\mathrm{W}\left(x_1,x_2\right)(t) = x_1\dot x_2 - \dot x_1 x_2 = \exp(2t).$$
Since $x_1 = \exp(t)$, this implies that
$$\dot x_2 - x_2 = \exp(t).$$
Hence,
$$x_2(t) = \exp(t)\int_0^t \exp(-s)\exp(s)\,\mathrm{d}s = t\exp(t)$$
as expected.
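Both computations of the Wronskian can be double-checked numerically by evaluating $x_1\dot x_2 - \dot x_1 x_2$ on a grid of sample points (plain NumPy, using the derivatives worked out above):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 50)
x1, dx1 = np.exp(t), np.exp(t)                  # x_1 = exp(t)
x2, dx2 = t * np.exp(t), (1 + t) * np.exp(t)    # x_2 = t exp(t)

W = x1 * dx2 - dx1 * x2                    # 2x2 Wronskian determinant
print(np.max(np.abs(W - np.exp(2 * t))))   # ~0: W(t) = exp(2t) everywhere
```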
The variation of constants formula now tells us to compute
$$\begin{aligned}
c_1(t) &= (-1)^{2+1}\int_0^t \frac{f(s)x_2(s)}{\mathrm{W}\left(x_1,x_2\right)(s)}\,\mathrm{d}s\\
&= -\int_0^t \frac{\exp(s)\left(s\exp(s)\right)}{\exp(2s)}\,\mathrm{d}s\\
&= -\int_0^t s\,\mathrm{d}s\\
&= -\frac{1}{2}t^2
\end{aligned}$$
and
$$\begin{aligned}
c_2(t) &= (-1)^{2+2}\int_0^t \frac{f(s)x_1(s)}{\mathrm{W}\left(x_1,x_2\right)(s)}\,\mathrm{d}s\\
&= \int_0^t 1\,\mathrm{d}s\\
&= t.
\end{aligned}$$
Thus,
$$\begin{aligned}
x &= -\frac{1}{2}t^2 x_1(t) + t\,x_2(t)\\
&= -\frac{1}{2}t^2\exp(t) + t\left(t\exp(t)\right)\\
&= \frac{1}{2}t^2\exp(t)
\end{aligned}$$
solves the inhomogeneous problem and
$$x = \alpha_1\exp(t) + \alpha_2 t\exp(t) + \frac{1}{2}t^2\exp(t)$$
represents the complete set of solutions.
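As a final check, the particular solution can be verified directly: substituting $x(t) = \tfrac{1}{2}t^2\exp(t)$ into $\ddot x - 2\dot x + x$ should return $\exp(t)$. Here the derivatives are approximated by central differences rather than computed symbolically:

```python
import numpy as np

x = lambda t: 0.5 * t**2 * np.exp(t)   # candidate particular solution

t = np.linspace(-1.0, 1.0, 9)
h = 1e-5
dx  = (x(t + h) - x(t - h)) / (2 * h)          # x'(t), central difference
d2x = (x(t + h) - 2 * x(t) + x(t - h)) / h**2  # x''(t), central difference

residual = d2x - 2 * dx + x(t) - np.exp(t)
print(np.max(np.abs(residual)))   # ~0 up to finite-difference error
```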
### 5.8.1 Exercises

(1) Let $p_0(t),\ldots,p_n(t) \in \mathbb{C}[t]$ and assume that $t \in \mathbb{R}$. If
$$p_i(t) = \alpha_{ni}t^n + \cdots + \alpha_{1i}t + \alpha_{0i},$$
show that
$$\begin{aligned}
\mathrm{W}\left(p_0,\ldots,p_n\right) &= \det\begin{bmatrix}
p_0(t) & \cdots & p_n(t)\\
(Dp_0)(t) & \cdots & (Dp_n)(t)\\
\vdots & & \vdots\\
(D^n p_0)(t) & \cdots & (D^n p_n)(t)
\end{bmatrix}\\
&= \det\begin{bmatrix}
\alpha_{00} & \cdots & \alpha_{0n}\\
\alpha_{10} & \cdots & \alpha_{1n}\\
2\alpha_{20} & \cdots & 2\alpha_{2n}\\
\vdots & & \vdots\\
n!\,\alpha_{n0} & \cdots & n!\,\alpha_{nn}
\end{bmatrix}\\
&= n!\,(n-1)!\cdots 2\cdot 1\,\det\begin{bmatrix}
\alpha_{00} & \cdots & \alpha_{0n}\\
\alpha_{10} & \cdots & \alpha_{1n}\\
\alpha_{20} & \cdots & \alpha_{2n}\\
\vdots & & \vdots\\
\alpha_{n0} & \cdots & \alpha_{nn}
\end{bmatrix}.
\end{aligned}$$

(2) Let $x_1,\ldots,x_n$ be linearly independent solutions to
$$p(D)(x) = \left(D^n + \alpha_{n-1}D^{n-1} + \cdots + \alpha_0\right)(x) = 0.$$
Attempt the following questions without using what we know about existence and uniqueness of solutions to differential equations.
(a) Show that
$$p(D)(x) = \frac{\mathrm{W}\left(x_1,\ldots,x_n,x\right)}{\mathrm{W}\left(x_1,\ldots,x_n\right)}.$$
(b) Conclude that $p(D)(x) = 0$ if and only if $\mathrm{W}\left(x,x_1,\ldots,x_n\right) = 0$.
(c) If $\mathrm{W}\left(x,x_1,\ldots,x_n\right) = 0$, then $x$ is a linear combination of $x_1,\ldots,x_n$.
(d) If $x, y$ are solutions with the same initial values: $x(0) = y(0)$, $(Dx)(0) = (Dy)(0)$, ..., $(D^{n-1}x)(0) = (D^{n-1}y)(0)$, then $x = y$.

(3) Assume two monic polynomials $p,q \in \mathbb{C}[t]$ have the property that $p(D)(x) = 0$ and $q(D)(x) = 0$ have the same solutions. Is it true that $p = q$? Hint: If $p(D)(x) = 0 = q(D)(x)$, then $\left(\gcd(p,q)\right)(D)(x) = 0$.

(4) Assume that $x$ is a solution to $p(D)(x) = 0$, where $p(D) = D^n + \cdots + \alpha_1 D + \alpha_0$.
(a) Show that the phase shifts $x_{\omega}(t) = x(t+\omega)$ are also solutions.
(b) If the vectors
$$\begin{bmatrix} x(\omega_1)\\ (Dx)(\omega_1)\\ \vdots\\ (D^{n-1}x)(\omega_1) \end{bmatrix},\ldots,\begin{bmatrix} x(\omega_n)\\ (Dx)(\omega_n)\\ \vdots\\ (D^{n-1}x)(\omega_n) \end{bmatrix}$$
form a basis for $\mathbb{C}^n$ for some choice of $\omega_1,\ldots,\omega_n \in \mathbb{R}$, then all solutions to $p(D)(x) = 0$ are linear combinations of the phase-shifted solutions $x_{\omega_1},\ldots,x_{\omega_n}$.
(c) If the vectors
$$\begin{bmatrix} x(\omega_1)\\ (Dx)(\omega_1)\\ \vdots\\ (D^{n-1}x)(\omega_1) \end{bmatrix},\ldots,\begin{bmatrix} x(\omega_n)\\ (Dx)(\omega_n)\\ \vdots\\ (D^{n-1}x)(\omega_n) \end{bmatrix}$$
never form a basis for $\mathbb{C}^n$ for any choice of $\omega_1,\ldots,\omega_n \in \mathbb{R}$, then $x$ is a solution to a $k$th-order equation for $k < n$. Hint: If $x$ is not a solution to a lower order equation, then $x, Dx,\ldots,D^{n-1}x$ is a (cyclic) basis for the solution space.

(5) Find a formula for the real solutions to the system
$$\begin{bmatrix} \dot x_1\\ \dot x_2 \end{bmatrix} - \begin{bmatrix} a & -b\\ b & a \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} b_1\\ b_2 \end{bmatrix},$$
where $a,b \in \mathbb{R}$ and $b_1,b_2 \in C^{\infty}(\mathbb{R},\mathbb{R})$.

(6) Find a formula for the real solutions to the equation
$$\ddot x + a\dot x + bx = f,$$
where $a,b \in \mathbb{R}$ and $f \in C^{\infty}(\mathbb{R},\mathbb{R})$.

Peter Petersen, Linear Algebra, Undergraduate Texts in Mathematics, Springer Science+Business Media, New York (2012). DOI 10.1007/978-1-4614-3612-6.
8 Jun 2018 // Market Trends

US-Canada market sees 4.0% capacity boost in 2017, Air Canada is leading carrier in S18, Toronto Pearson top origin airport

Air Canada is the dominant airline in the US-Canada market, accounting for 46% of available weekly seats in S18. On 17 May the Canadian carrier launched services from Montreal to Baltimore/Washington. Seen taking part in the ribbon cutting are: Frédéric Tremblay, Director of the Quebec Government Office in Washington; Frank Stendardo, Transport Counsellor, Embassy of Canada to the US; Vincent Gauthier-Dore, Director of US Business Development, Air Canada; Ricky Smith, Executive Director, Baltimore/Washington Airport; and Calvin Peacock, Maryland Aviation Commission.

The US and Canada share a large border and one of the most mature country-pair markets for international air services. From 2008 to 2017 the two-way capacity available between these North American neighbours grew from 31.68 million seats to 37.15 million, representing an increase of 17.3%. The market witnessed the almost obligatory decline in capacity at the height of the global economic downturn in 2009, with seat numbers dropping by 4.8% compared to 2008 levels, but since then it has enjoyed eight consecutive years of steady growth. From 2009 to 2017 capacity grew by 23.2%, with an average annual growth rate of 2.7% during this period. The largest single annual increases were experienced in 2012 and 2017, when seat numbers grew by 4.7% and 4.0% respectively.

Source: OAG Schedules Analyser – non-stop flights only.

The average capacity on trans-border services between the US and Canada has increased from 78 seats per departure in 2008 to 96 seats per departure in 2017. The proportion of overall capacity provided by narrow-body aircraft over this period remained unchanged at 44%. However, a recent interesting trend has seen the proportion of seats flown by wide-body types increase from just 2.1% in 2013 to 10.1% last year.
The main driver of this has been an increase in wide-body operations by Air Canada, in large part due to the introduction of 767-300-operated services by its low-cost subsidiary Air Canada Rouge. In total, the capacity available on Air Canada wide-body services (including its LCC subsidiary) between the US and Canada increased from 375,000 seats in 2013 to more than 3.26 million in 2017. Air Canada Rouge 767-300s have been deployed on a number of routes including Montreal-Fort Lauderdale, Toronto Pearson-Orlando and Pearson-Las Vegas, and operated 2.44 million seats in the US-Canada market last year.

Air Canada is a capacity class apart

To summarise the market in more detail, anna.aero analysed schedule data for a typical peak week in S18, in this case the week commencing 31 July. According to OAG, 13 airlines will offer at least one service between the US and Canada during the week in question. The top 12, based on weekly seats this summer, include five Canadian carriers, five US-based operators and two non-native airlines. Air Canada is by far the largest operator in the market, offering more seats than the US Big Three (USB3) carriers plus Alaska Airlines combined. The Canadian flag carrier (including its regional and LCC operations) will account for 46% of the weekly seats available in this market alone. The largest US operator in the trans-border market is Delta Air Lines. Cathay Pacific Airways and Philippine Airlines both feature in the top 12, with each carrier offering seats between New York JFK and Vancouver under fifth freedom rights, as part of itineraries that start and finish at their respective home hubs of Hong Kong and Manila.

Five of the top-ranked airlines have increased their weekly capacity in the US-Canada market for S18, compared to the equivalent week in 2017. The strongest growth has come from Air Canada and Delta, with their respective weekly seat numbers up 6.0% and 5.5%.
Cathay Pacific, Philippine Airlines and Sunwing Airlines have maintained the same capacity as last year, while another three of the top 12 carriers (highlighted in red) have cut seat numbers this summer, with Air Transat seeing the biggest decline at 5.9%. Frontier Airlines is a new addition to the market for S18, with the ULCC having introduced a three times weekly service between Denver and Calgary on 31 May.

Toronto Pearson tops airport rankings

The top 12 airports for departing weekly seats in the US-Canada market this summer include five in Canada and seven in the US. The top four departure points are all in Canada. Pearson is the largest, accounting for 21% of the weekly departing seats in this market alone in peak S18. Seven airlines will offer a combined 1,730 departures from Pearson to 52 US airports. Toronto and New York are each represented by two different airports in the top 12. The largest US origin point for Canadian capacity is Los Angeles. There are a number of significant US hubs missing from the top 12, including Atlanta, the world's busiest airport, Houston Intercontinental, JFK and Dallas/Fort Worth.

Most of the busiest airports for US-Canada capacity witnessed an increase in their weekly trans-border seats for S18. The largest growth has been experienced at New York LaGuardia (9.9% up versus S17), San Francisco (7.8%) and Seattle-Tacoma (7.5%). Three facilities (highlighted in red) have seen marginal reductions. Toronto City experienced the largest reduction in weekly seats (3.3% down) due to a marginal cut in flights made by Porter Airlines. The top 12 is occupied by the same airports as in S17, with the only change in ranking position being Seattle overtaking New York Newark for ninth place.

Toronto and Vancouver are dominant destinations

All but one of the top 12 routes from the US to Canada in peak S18 serve the cities of Toronto or Vancouver.
Toronto is the destination city for seven of the top-ranked routes, including the three largest airport pairs. Six of these routes operate to Pearson, with one arriving at City. Between them, the seven Toronto connections account for 62% of the weekly one-way seats available on the top 12 routes. The largest airport pair in the market this summer is the connection between LaGuardia and Pearson, which will be served by three airlines offering 163 weekly flights. The only connection that doesn't operate to Toronto or Vancouver is the Newark service to Montreal. LaGuardia, Chicago O'Hare, Los Angeles and San Francisco all feature twice as origin airports among the top 12 routes this summer.

Eight of the top-ranked routes (highlighted in green) witnessed an increase in capacity for this summer, compared to the equivalent week in S17. The largest growth was experienced on services from Atlanta to Pearson (a 20.5% increase in weekly seats) and from O'Hare to Vancouver (a 20% increase). Four of the top 12 airport pairs (highlighted in red) have seen a decline in capacity, with the largest cuts coming between Los Angeles and Vancouver (10.3% down).

The positive growth of recent years looks set to continue. According to anna.aero's New Routes Database, airlines have already launched 13 new routes between the US and Canada in 2018, including seven with Air Canada or Air Canada Rouge. Another is scheduled to start in W18/19 when Air Canada Rouge launches flights between Edmonton and Las Vegas. The US-Canada market has enjoyed eight consecutive years of steady growth, with 37.15 million two-way seats available between the North American neighbours in 2017. Toronto is the largest origin airport for these trans-border links, but the route map shows that facilities as far afield as Alaska, Hawaii and St John's are also connected.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>ikube</groupId>
  <artifactId>ikube-web</artifactId>
  <version>5.3.1</version>
  <packaging>war</packaging>
  <scm>
    <url>https://github.com/michaelcouck/ikube</url>
    <connection>scm:git:ssh://github.com/michaelcouck/ikube.git</connection>
    <developerConnection>scm:git:ssh://git@github.com/michaelcouck/ikube.git</developerConnection>
    <tag>HEAD</tag>
  </scm>
  <distributionManagement>
    <site>
      <id>ikube-releases</id>
      <name>ikube-releases</name>
      <url>file:/home/michael/.m2/repository</url>
    </site>
    <repository>
      <id>libs-release-local</id>
      <name>libs-release-local</name>
      <url>http://ikube.be/artifactory/libs-release-local</url>
    </repository>
    <snapshotRepository>
      <id>libs-snapshot-local</id>
      <name>libs-snapshot-local</name>
      <url>http://ikube.be/artifactory/libs-snapshot-local</url>
    </snapshotRepository>
  </distributionManagement>
  <properties>
    <webapps>webapps</webapps>
    <final.name>${project.build.finalName}.${project.packaging}</final.name>
    <maven.build.timestamp.format>dd-MM-yyyy HH:mm:ss</maven.build.timestamp.format>
    <!-- Jetty version, referenced as ${jetty-version} by the plugin and dependencies below. -->
    <jetty-version>9.4.12.v20180830</jetty-version>
  </properties>
  <build>
    <finalName>ikube-web</finalName>
    <resources>
      <resource>
        <directory>src/main/java</directory>
      </resource>
      <resource>
        <directory>src/main/resources</directory>
      </resource>
    </resources>
    <plugins>
      <!-- The default plugin for creating the war; we specify a version explicitly. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.2</version>
      </plugin>
      <!-- This plugin starts a Jetty server before the integration tests and stops it after the
           integration tests. It can also be used to start a jetty server from the command line
           for testing. -->
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>jetty-maven-plugin</artifactId>
        <version>${jetty-version}</version>
        <configuration>
          <stopKey>stop</stopKey>
          <stopPort>9180</stopPort>
          <scanIntervalSeconds>10000</scanIntervalSeconds>
          <webApp>
            <contextPath>/ikube</contextPath>
          </webApp>
          <connectors>
            <connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">
              <port>9090</port>
              <maxIdleTime>60000</maxIdleTime>
            </connector>
          </connectors>
          <jvmArgs>-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx2048m -XX:+CMSClassUnloadingEnabled</jvmArgs>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-server</artifactId>
      <version>${jetty-version}</version>
      <type>jar</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-util</artifactId>
      <version>${jetty-version}</version>
      <type>jar</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>example-jetty-embedded</artifactId>
      <version>${jetty-version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.34</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>
</project>
\section{Introduction} For an open interval $(a, b)$, we say that $f : (a, b) \to \mathbb{R}$ is matrix monotone (increasing) of order $n$ (or $n$-monotone) if for any $n \times n$ Hermitian matrices $A, B$ with spectra in $(a, b)$ and $A \leq B$ we have $f(A) \leq f(B)$.\footnote{As usual, the space of Hermitian matrices is equipped with the Loewner order, i.e. the partial order induced by the convex cone of positive semi-definite matrices.} Analogously, $f : (a, b) \to \mathbb{R}$ is matrix convex of order $n$ (or $n$-convex) if for any $n \times n$ Hermitian matrices $A, B$ with spectra in $(a, b)$ and $\lambda \in [0, 1]$ we have $f(\lambda A + (1 - \lambda) B) \leq \lambda f(A) + (1 - \lambda) f(B)$. Ever since Charles Loewner (then known as Karl Löwner) introduced matrix monotone functions in 1934 \cite{Low}, this class has been characterized in various ways. See for example \cite{Chan, Han} for a survey and recent progress. The famous theorem established in Loewner's paper states that a function that is matrix monotone of all orders on an interval extends to the upper half-plane as a Pick-Nevanlinna function: an analytic function with non-negative imaginary part. Loewner's proof of this jewel is based on an important characterization in terms of divided differences, here denoted by $[\cdot, \cdot, \ldots, \cdot]_{f}$. Recall that divided differences are defined recursively by $[\lambda]_{f} = f(\lambda)$ and, for distinct $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n} \in (a, b)$, \begin{align*} [\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}]_{f} = \frac{[\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n - 1}]_{f} - [\lambda_{2}, \lambda_{3}, \ldots, \lambda_{n}]_{f}}{\lambda_{1} - \lambda_{n}}. \end{align*} If $f \in C^{n - 1}(a, b)$, the divided difference has a continuous extension to all tuples of $n$ not necessarily distinct numbers on the interval \cite{Boo}.
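The recursion above is easy to check on a small example. The following Python sketch is our own illustration (not part of the paper; the helper name is ours): it implements the recursion for distinct nodes and verifies that for $f(x) = x^{2}$ the first divided difference equals $\lambda_{1} + \lambda_{2}$ and the second equals the leading coefficient $1$.

```python
# Divided differences over distinct nodes, following the recursion
# [l1,...,ln]_f = ([l1,...,l_{n-1}]_f - [l2,...,ln]_f) / (l1 - ln).
def divided_difference(f, nodes):
    if len(nodes) == 1:
        return f(nodes[0])
    left = divided_difference(f, nodes[:-1])
    right = divided_difference(f, nodes[1:])
    return (left - right) / (nodes[0] - nodes[-1])

f = lambda x: x * x
print(divided_difference(f, [1.0, 3.0]))       # 4.0 (= l1 + l2)
print(divided_difference(f, [1.0, 2.0, 3.0]))  # 1.0 (leading coefficient)
```

For tuples with repeated nodes this recursion divides by zero; there one falls back on the derivative formula (\ref{mean_value}) discussed below.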
\begin{lause}[Loewner]\label{basic_mon} A function $f : (a, b) \to \mathbb{R}$ is $n$-monotone (for $n \geq 2$) if and only if $f \in C^{1}(a, b)$ and the Loewner matrix \begin{align}\label{Ldef} L = ([\lambda_{i}, \lambda_{j}]_{f})_{1 \leq i, j \leq n} \end{align} is positive\footnote{Here and in the following, positivity of a matrix means that it is positive semi-definite.} for any tuple of numbers $(\lambda_{i})_{i = 1}^{n}$ on the same interval. \end{lause} Similarly, Kraus, a student of Loewner, introduced matrix convexity in \cite{Kra} and established a similar characterization: \begin{lause}[Kraus]\label{basic_con} A function $f : (a, b) \to \mathbb{R}$ is $n$-convex (for $n \geq 2$) if and only if $f \in C^{2}(a, b)$ and the Kraus matrix \begin{align}\label{Krdef} Kr = ([\lambda_{i}, \lambda_{j}, \lambda_{0}]_{f})_{1 \leq i, j \leq n} \end{align} is positive for any tuple of numbers $(\lambda_{i})_{i = 1}^{n} \in (a, b)^{n}$ and $\lambda_{0} \in (\lambda_{i})_{i = 1}^{n}$. \end{lause} A different, local characterization for monotonicity was given by another student of Loewner, Dobsch, in \cite{Dob}: \begin{lause}[Dobsch, Donoghue]\label{hankel_mon} A $C^{2 n - 1}$ function $f : (a, b) \to \mathbb{R}$ is $n$-monotone if and only if the Hankel matrix \begin{align}\label{Mdef} M(t) = \left(\frac{f^{(i + j - 1)}(t)}{(i + j - 1)!}\right)_{1 \leq i, j \leq n} \end{align} is positive for any $t \in (a, b)$. \end{lause} By employing standard regularization techniques, one could further extend this to merely $C^{2 n - 3}$ functions with a convex derivative of order $(2 n - 3)$, a class of functions for which the property makes sense for almost every $t$, to obtain the complete local characterization of matrix monotonicity of fixed order. The result has a striking consequence: $n$-monotonicity is a local property, meaning that if a function has it on two overlapping intervals, then it has it on their union.
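Both criteria lend themselves to quick numerical sanity checks. The sketch below is our illustration, not part of the original argument: $f(x) = \sqrt{x}$ is a classical example of an operator monotone function on $(0, \infty)$, so its Loewner matrices should be positive semi-definite; and for $f(x) = -1/x$ one computes $f^{(k)}(t)/k! = (-1)^{k + 1} t^{-(k + 1)}$, which makes the Hankel matrix $M(t)$ a rank-one Gram matrix, hence positive for every $t > 0$.

```python
import numpy as np

def loewner_matrix(f, df, lams):
    # L_ij = [l_i, l_j]_f, with the diagonal given by the derivative f'(l_i).
    n = len(lams)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                L[i, j] = df(lams[i])
            else:
                L[i, j] = (f(lams[i]) - f(lams[j])) / (lams[i] - lams[j])
    return L

# f(x) = sqrt(x) is operator monotone on (0, inf): L should be PSD.
lams = [0.5, 1.0, 2.0, 5.0]
L = loewner_matrix(np.sqrt, lambda x: 0.5 / np.sqrt(x), lams)
assert np.linalg.eigvalsh(L).min() > -1e-12

# Hankel matrix M(t) of f(x) = -1/x: the entry f^(i+j-1)(t)/(i+j-1)!
# equals (-1)^(i+j) t^(-(i+j)), a rank-one Gram matrix, hence PSD.
def hankel_M(t, n):
    return np.array([[(-1.0) ** (i + j) * t ** (-(i + j))
                      for j in range(1, n + 1)]
                     for i in range(1, n + 1)])

assert np.linalg.eigvalsh(hankel_M(2.0, 4)).min() > -1e-12
print("both positivity checks pass")
```

Conversely, for $x \mapsto x^{3}$ at the nodes $\{1, 2\}$ the Loewner matrix is $\begin{pmatrix} 3 & 7 \\ 7 & 12 \end{pmatrix}$, whose determinant is negative, in line with the fact that $x^{3}$ is not $2$-monotone on $(0, \infty)$.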
This property is actually used in the proof, and although it was noted by Loewner to be easy (\cite[p. 212, Theorem 5.6]{Low}), no rigorous proof was given until 40 years later in the monograph of Donoghue \cite{Don}, and the proof is rather long when $n > 2$. The main results of this paper establish novel integral representations connecting Hankel matrices to the Loewner and Kraus matrices. These identities give rise to a new simple proof for Theorem \ref{hankel_mon}, and more importantly, settle the conjecture in \cite{Tom} (see also \cite{Tom2}) by establishing similar local characterization for the matrix convex functions. \begin{lause}\label{hankel_con} A $C^{2 n}$ function $f : (a, b) \to \mathbb{R}$ is $n$-convex if and only if the Hankel matrix \begin{align}\label{Kdef} K(t) = \left(\frac{f^{(i + j)}(t)}{(i + j)!}\right)_{1 \leq i, j \leq n} \end{align} is positive for any $t \in (a, b)$. \end{lause} Again, with regularizations we may extend this to give a complete local description of matrix convexity of fixed order, which as an immediate corollary gives the expected local property theorem for convexity. \begin{kor}\label{con_loc} For any positive integer $n$, $n$-convexity is a local property. \end{kor} As another byproduct, we obtain a slight improvement to Theorem \ref{basic_con}, where $\lambda_{0}$ may now vary freely. This also implies through divided differences a rather direct connection between matrix monotonicity and convexity. \section{Matrix monotone functions} \subsection{Integral representation} In this section we construct the integral representations for the Loewner matrices alluded to in the introduction. Let $n \geq 2$, $(a, b)$ be an interval, and $\Lambda = (\lambda_{i})_{i = 1}^{n} \in (a, b)^{n}$ be an arbitrary sequence of distinct points in $(a, b)$. 
In the following, the Loewner and the respective Hankel matrices, introduced in (\ref{Ldef}) and (\ref{Mdef}), for sufficiently smooth $f : (a, b) \to \mathbb{R}$ and $\lambda_{0} \in (a, b)$ are denoted by $L(\Lambda, f)$ and $M_{n}(t, f)$, respectively. Recall that, as one easily verifies with Cauchy's integral formula and induction, the divided differences can be written as \begin{align}\label{cauchy_divided} [\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}]_{f} = \frac{1}{2 \pi i} \int_{\gamma} \frac{f(z)}{(z - \lambda_{1})\cdots (z - \lambda_{n})}dz, \end{align} for analytic $f$ and a suitable closed curve $\gamma$.\footnote{For our purposes, it is enough to consider $f$ analytic in an open half-plane and $\gamma$ a circle in this half-plane enclosing the points $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}$.} Divided differences also admit a natural generalization of the mean value theorem \cite{Boo}. Namely, for an open interval $(a, b)$, $f \in C^{n - 1}(a, b)$ and any tuple of (not necessarily distinct) real numbers $\Lambda = (\lambda_{i})_{i = 1}^{n} \in (a, b)^{n}$ we have \begin{align}\label{mean_value} [\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}]_{f} = \frac{f^{(n - 1)}(\xi)}{(n - 1)!} \end{align} for some $\xi \in [\min(\Lambda), \max(\Lambda)]$. We shall also need some very basic properties of regularizations. Namely, for an even, non-negative, smooth function $\phi$ supported on $[-1, 1]$ with integral $1$, and an integrable $f : (a, b) \to \mathbb{R}$, the regularization (or $\varepsilon$-regularization, to be precise) of $f$, denoted by $f_{\varepsilon} : (a + \varepsilon, b - \varepsilon) \to \mathbb{R}$, is the convolution \begin{align*} f_{\varepsilon}(x) = \int_{-\infty}^{\infty} f(x - \varepsilon y) \phi(y) d y. \end{align*} This is a smooth function, and for any continuity point $x \in (a, b)$ of $f$ we clearly have $\lim_{\varepsilon \to 0} f_{\varepsilon}(x) = f(x)$.
Note that regularizations of matrix monotone (convex) functions are obviously matrix monotone (convex) functions on a slightly smaller interval. Define the functions $g_{j}$ for $1 \leq j \leq n$ by \begin{align}\label{gdef} g_{j, \Lambda}(t, y) = \prod_{k \neq j}(1 + y (t - \lambda_{k})). \end{align} Define also the matrix $C(t) := C(t, \Lambda)$ by setting $C_{i, j}$ to be the coefficient of $y^{i - 1}$ in the polynomial $g_{j}(t, y)$, i.e. we have \begin{align}\label{Cdef} g_{j}(t, y) = C_{1, j}(t) + C_{2, j}(t) y + \ldots + C_{n, j}(t) y^{n - 1}. \end{align} Define polynomial $p_{\Lambda}$ with $p_{\Lambda}(t) := \prod_{i = 1}^{n}(t - \lambda_{i})$. Also for any $z \in \mathbb{C}$ define function $h_{z}$ by setting $h_{z}(x) = (z-x)^{-1}$. \begin{lem}\label{inverse_d} For $\Lambda = (\lambda_{i})_{i = 1}^{n}$ as before, $t \in \mathbb{R}$, and $z \in \mathbb{C}$ distinct from $t$, we have \begin{align*} C^{T}\left(t, \Lambda\right)M_{n}\left(t, h_{z} \right)C\left(t, \Lambda \right) = L(\Lambda, h_{z}) \frac{p_{\Lambda}(z)^2}{(z - t)^{2 n}}. \end{align*} \end{lem} \begin{proof} Write $D = C^{T}(t, \Lambda)M_{n}(t, h_{z})C(t, \Lambda)$. Note that as we have $h_{z}^{(k)}(t)/k! = (z - t)^{-k - 1}$, we may write $M_{n}(t, h_{z}) = \frac{1}{(z - t)^{2}}v v^{T}$ with $v = (1, \frac{1}{z - t}, \frac{1}{(z - t)^{2}}, \ldots, \frac{1}{(z - t)^{n - 1}})^{T}$. Thus \begin{align*} D = \frac{1}{(z - t)^{2}} (C(t,\Lambda)^{T} v) (C(t,\Lambda)^{T} v)^{T}. \end{align*} One also easily sees that $(C(t,\Lambda)^{T} v)_{i} = g_{i}(t, \frac{1}{z - t})$ so that finally \begin{align*} D_{i, j} = \frac{g_{i}(t, \frac{1}{z - t})g_{j}(t, \frac{1}{z - t}) }{(z - t)^2} = \frac{1}{(z - t)^{2}}\prod_{k \neq i}\left(1 + \frac{t - \lambda_{k}}{z - t}\right)\prod_{k \neq j}\left(1 + \frac{t - \lambda_{k}}{z - t}\right) = [\lambda_{i}, \lambda_{j}]_{h_{z}} \frac{p_{\Lambda}(z)^2}{(z - t)^{2 n}} . 
\end{align*} \end{proof} Consider now the function \begin{align*} S(z, t) := S_{\Lambda}(z, t) := -\frac{(z - t)^{2 n - 2}}{p_{\Lambda}(z)^{2}}. \end{align*} As $S(z, t)$ decays as $z^{-2}$, with the residue theorem we see that for a suitable closed curve $\gamma$ we have \begin{align*} 0 = \frac{1}{2 \pi i}\int_{\gamma} S(z, t) d z = \sum_{i = 1}^{n} \Res_{z = \lambda_{i}} S(z, t). \end{align*} Defining now the weight functions $I_{i} := I_{i, \Lambda}$ for $1 \leq i \leq n$ by \begin{align*} I_{i}(t) = \Res_{z = \lambda_{i}} S(z, t), \end{align*} and \begin{align*} I(t) := I_{\Lambda}(t) := \sum_{\atop{1 \leq i \leq n}{\lambda_{i} < t}}I_{i}(t), \end{align*} we see by a simple computation that the $I_{i}$'s are polynomials such that $I_{i}(\lambda_{i}) = 0$, and hence $I$ is a piecewise polynomial, continuous function supported on $[\min(\Lambda), \max(\Lambda)]$. Note that with Cauchy's integral formula we can also write $I$ in the form \begin{align*} I(t) = \frac{1}{2 \pi i}\int_{t - i \infty}^{t + i \infty} S(z, t) d z, \end{align*} whenever $t \notin \Lambda$. \begin{huom} The weight function $I$ and the analogous weight $J$ to be introduced in the convex setting are examples of weights called Peano kernels or B-splines. The properties of these kernels are discussed for example in \cite{Boo2}. To stay self-contained, we give proofs of the crucial properties used in our discussion. \end{huom} \begin{lem}\label{crazy_lemma} For $\Lambda = (\lambda_{i})_{i = 1}^{n}$ as before and $z \in \mathbb{C}$ outside the interval $[\min(\Lambda), \max(\Lambda)]$, we have \begin{align*} (2 n - 1)\int_{-\infty}^{\infty} \frac{I(t)}{(z - t)^{2 n}} d t = \frac{1}{p_{\Lambda}(z)^2}.
\end{align*} \end{lem} \begin{proof} We simply compute that \begin{eqnarray*} (2 n - 1)\int_{-\infty}^{\infty} \frac{I(t)}{(z - t)^{2 n}} d t &=& (2 n - 1)\sum_{i = 1}^{n} \int_{\lambda_{i}}^{\infty}\frac{I_{i}(t)}{(z - t)^{2 n}} d t \\ &=& -(2 n - 1)\sum_{i = 1}^{n}\Res_{w = \lambda_{i}} \int_{\lambda_{i}}^{\infty} \frac{(w - t)^{2 n - 2}}{p_{\Lambda}(w)^2 (z - t)^{2 n}} d t \\ &=& \sum_{i = 1}^{n}\Res_{w = \lambda_{i}} \frac{(1 - \frac{z - w}{z - \lambda_{i}})^{2 n - 1} - 1}{(w - z) p_{\Lambda}(w)^2} \\ &=& - \sum_{i = 1}^{n}\Res_{w = \lambda_{i}} \frac{1}{(w - z) p_{\Lambda}(w)^2} \\ &=& \Res_{w = z} \frac{1}{(w - z) p_{\Lambda}(w)^2} - \frac{1}{2 \pi i}\int_{\gamma} \frac{d w}{(w - z) p_{\Lambda}(w)^2} \\ &=& \frac{1}{p_{\Lambda}(z)^2}, \end{eqnarray*} where we used the residue theorem for the function $(w \mapsto (w - z)^{-1}p_{\Lambda}(w)^{-2})$. \end{proof} We are then ready to formulate and prove the integral representation of the Loewner matrix. \begin{lause}\label{crazy_formula} For $f \in C^{2 n - 1}(a, b)$ and $\Lambda$ as before, we have \begin{align*} L(\Lambda, f) = (2 n - 1)\int_{-\infty}^{\infty} C^{T}(t, \Lambda)M_{n}(t, f)C(t, \Lambda)I_{\Lambda}(t)d t. \end{align*} \end{lause} \begin{proof} For entire $f$, by Lemmas \ref{inverse_d}, \ref{crazy_lemma}, Fubini and (\ref{cauchy_divided}) we have \begin{eqnarray*} (2 n - 1)\int_{-\infty}^{\infty} C^{T}(t)M_{n}(t, f)C(t)I(t)d t &=& \frac{1}{2 \pi i} \int_{\gamma} \left((2 n - 1)\int_{-\infty}^{\infty} C^{T}(t)M_{n}(t, h_{z})C(t)I(t)d t \right) f(z) d z \\ &=& \frac{1}{2 \pi i} \int_{\gamma} L(\Lambda, h_{z}) \left((2 n - 1)\int_{-\infty}^{\infty} \frac{p_{\Lambda}(z)^2}{(z - t)^{2 n}}I(t)d t \right) f(z) d z \\ &=& \frac{1}{2 \pi i} \int_{\gamma} L(\Lambda, h_{z}) f(z) d z \\ &=& L(\Lambda, f). 
\end{eqnarray*} The general case now follows by uniformly approximating $f$ and its derivatives up to order $(2 n - 1)$ by entire functions on $[\min(\Lambda), \max(\Lambda)]$, say, by polynomials, with a suitable application of the Weierstrass approximation theorem. \end{proof} \subsection{Positivity of the weight} In this section we prove the non-negativity of the weight function $I$ introduced in the previous section. We begin with a simple lemma. \begin{lem}\label{simple_lemma} Let $n$ be a positive integer and let the numbers $Z = (\zeta_{i})_{i = 1}^{n}$ be non-negative. Now if $f(t) = \prod_{i = 1}^{n}(\zeta_{i} - t)^{-1}$, then for any non-negative integer $k$ and $t < 0$ we have \begin{align*} f^{(k)}(t) \geq 0. \end{align*} \end{lem} \begin{proof} The case $n = 1$ is trivial; the general case now follows immediately from the product rule. \end{proof} \begin{lem}\label{mon_pos} For $\Lambda$ as before, $I_{\Lambda}$ is non-negative. \end{lem} \begin{proof} We may clearly assume that $\Lambda$ is strictly increasing. When checking the non-negativity at a point $t$, we may without loss of generality assume that $t = 0 \in [\lambda_{1}, \lambda_{n}]$. Also, by continuity we may further assume that all the $\lambda_{i}$'s are non-zero. We are left to investigate \begin{align*} \frac{1}{2 \pi i}\int_{-i \infty}^{i \infty} S(z, 0) d z = -\frac{1}{2 \pi i}\int_{-i \infty}^{i \infty} \frac{z^{2 n - 2} d z}{p_{\Lambda}(z)^{2}}. \end{align*} Making the change of variable $w = \frac{1}{z}$, we are to check that \begin{align*} \frac{1}{2 \pi i}\int_{-i \infty}^{i \infty} \frac{d w}{p_{Z}(w)^{2}} \geq 0, \end{align*} where $Z = \frac{1}{\Lambda}$, that is, $\zeta_{i} = \frac{1}{\lambda_{i}}$. Let $k$ $(< n)$ be the number of negative $\zeta_{i}$'s and denote $Z_{-} = (\zeta_{i})_{i = 1}^{k}$.
Note that if we further write $f(t) = \left(\prod_{i > k}(t - \zeta_{i})\right)^{-2}$, we have by suitable variant of (\ref{cauchy_divided}) \begin{align*} \frac{1}{2 \pi i}\int_{-i \infty}^{i \infty} \frac{d w}{p_{Z}(w)^{2}} = \frac{1}{2 \pi i}\int_{-i \infty}^{i \infty}\frac{f(w) d w}{p_{Z_{-}}(w)^{2}} = [\zeta_{1}, \zeta_{1}, \zeta_{2}, \zeta_{2}, \ldots, \zeta_{k}, \zeta_{k}]_{f}, \end{align*} which is positive in the view of (\ref{mean_value}) and Lemma \ref{simple_lemma}. \end{proof} \subsection{Characterizations for the matrix monotonicity} \begin{proof}[Proof of Theorem \ref{hankel_mon}] The necessity of the condition can be found in \cite{Dob}. For sufficiency note that by Theorem \ref{crazy_formula} we can write \begin{align*} L(\Lambda) = (2 n - 1)\int_{-\infty}^{\infty} C^{T}(t)M(t)C(t)I(t)d t \end{align*} Now if $M(t) \geq 0$ for any $t \in (a, b)$, also $C^{T}(t)M(t)C(t) \geq 0$ for any $t \in (a, b)$. It follows from Lemma \ref{mon_pos} that the integrand is a positive matrix, so indeed, $L$ is positive as an integral of positive matrices. But now $f$ is $n$-monotone by Theorem \ref{basic_mon}. \end{proof} Putting everything together we obtain complete characterizations of the class of $n$-monotone functions. \begin{lause}[Loewner, Dobsch, Donoghue]\label{loewner-dobsch-donoghue} Let $n \geq 2$, and $(a, b)$ be an open interval. Now for $f : (a, b) \to \mathbb{R}$ the following are equivalent \begin{enumerate}[(i)] \item $f$ is $n$-monotone. \item $f \in C^{1}(a, b)$ and the Loewner matrix $L(\Lambda, f)$ is positive for any tuple $\Lambda \in (a, b)^{n}$. \item $f \in C^{2 n - 3}(a, b)$, $f^{(2 n - 3)}$ is convex, and the Hankel matrix $M_{n}(t, f)$, which makes sense almost everywhere, is positive for almost every $t \in (a, b)$. \end{enumerate} \end{lause} \begin{proof} As noted before, $(i) \Leftrightarrow (ii)$ was proven in the original paper of Loewner \cite{Low}. 
For $C^{2 n - 1}$ functions, $(i) \Leftrightarrow (iii)$ is Theorem \ref{hankel_mon}, and for merely $C^{2 n - 3}$ functions the claim follows from a standard regularization procedure, details of which can be found in \cite{Don}. For an alternate approach to the latter equivalence, see again \cite{Don}. \end{proof} \begin{kor}\label{mon_loc} For any positive integer $n$, $n$-monotonicity is a local property. \end{kor} \section{Matrix convex functions} \subsection{Integral representation} In this section we construct the integral representations for the Kraus matrices alluded to in the introduction. Again, let $n \geq 2$, $(a, b)$ be an interval, and $\Lambda = (\lambda_{i})_{i = 1}^{n} \in (a, b)^{n}$ be an arbitrary sequence of distinct points in $(a, b)$. In the following, the Kraus and the respective Hankel matrices, introduced in the introduction, for sufficiently smooth $f : (a, b) \to \mathbb{R}$ and $\lambda_{0} \in (a, b)$ are denoted by $Kr(\lambda_{0}, \Lambda, f)$ and $K_{n}(t, f)$, respectively. The integral representation for the Kraus matrix is similar to that of the Loewner matrix. For fixed $\lambda_{0} \in (a, b)$ the weights $J_{i, \lambda_{0}} := J_{i, \lambda_{0}, \Lambda}$, now for $0 \leq i \leq n$, are defined analogously as the residues at the $\lambda_{i}$'s of \begin{align*} T_{\lambda_{0}}(z, t) := T_{\lambda_{0}, \Lambda}(z, t) := -\frac{(z - t)^{2 n - 1}}{(z - \lambda_{0}) p_{\Lambda}(z)^{2}} \end{align*} and \begin{align*} J_{\lambda_{0}}(t) := J_{\lambda_{0}, \Lambda}(t) := \sum_{\atop{0 \leq i \leq n}{\lambda_{i} < t}}J_{i, \lambda_{0}}(t).
\end{align*} \begin{lem}\label{crazy_convex_lemma} For $\Lambda = (\lambda_{i})_{i = 1}^{n}$ as before, $\lambda_{0} \in (a, b)$ and $z \in \mathbb{C}$ outside the interval $[\min(\Lambda), \max(\Lambda)]$, we have \begin{align*} 2 n\int_{-\infty}^{\infty} \frac{J_{\lambda_{0}}(t)}{(z - t)^{2 n + 1}} d t = \frac{1}{(z - \lambda_{0})p_{\Lambda}(z)^2}. \end{align*} \end{lem} \begin{proof} The proof is almost identical to that of Lemma \ref{crazy_lemma}; we just perform the residue trick with the map $(w \mapsto (w - z)^{-1}(w - \lambda_{0})^{-1}p_{\Lambda}(w)^{-2})$ instead. \end{proof} \begin{lause}\label{convex_crazy_formula} For $f \in C^{2 n}(a, b)$, $\Lambda$ as before, and $\lambda_{0} \in (a, b)$, we have \begin{align*} Kr(\lambda_{0}, \Lambda, f) = 2 n \int_{-\infty}^{\infty} C^{T}(t, \Lambda)K_{n}(t, f)C(t, \Lambda)J_{\lambda_{0}, \Lambda}(t) d t. \end{align*} \end{lause} \begin{proof} After noting that $K_{n}(t, h_{z}) = \frac{1}{z - t} M_{n}(t, h_{z})$, the calculation is carried out as in the proof of Theorem \ref{crazy_formula}, using Lemma \ref{crazy_convex_lemma} instead of Lemma \ref{crazy_lemma}. \end{proof} \subsection{Positivity of the weight} \begin{lem}\label{con_pos} For $\Lambda = (\lambda_{i})_{i = 1}^{n}$ as before and $\lambda_{0} \in (a, b)$, $J_{\lambda_{0}, \Lambda}$ is non-negative. \end{lem} \begin{proof} As in the proof of Lemma \ref{mon_pos}, we can assume that $t = 0$ is our point of inspection and that $\Lambda$ is strictly increasing. We also make the same change of variables $Z = \frac{1}{\Lambda}$. Note that we may well assume that $\zeta_{0} > 0$, since the other case would follow by reflecting the variables, that is, considering the sequence $-Z$ and $-\lambda_{0}$ instead. Now the inequality is reduced to the equivalent form \begin{align*} \frac{1}{2 \pi i}\int_{-i \infty}^{i \infty} \frac{d w}{(\zeta_{0} - w)p_{Z}(w)^{2}} \geq 0.
\end{align*} But as in the proof of Lemma \ref{mon_pos}, the left hand side can again be written as \begin{align*} [\zeta_{1}, \zeta_{1}, \zeta_{2}, \zeta_{2}, \ldots, \zeta_{k}, \zeta_{k}]_{f} \end{align*} where $f(t) = (\zeta_{0} - t)^{-1}\left(\prod_{i > k}(t - \zeta_{i})\right)^{-2}$ and $k$ is the number of negative $\zeta_{i}$'s. \end{proof} \subsection{Characterizations for the matrix convexity} \begin{proof}[Proof of Theorem \ref{hankel_con}] The necessity of the condition was proven in \cite{Tom}. For the other direction, by Theorem \ref{convex_crazy_formula} we can write \begin{align*} Kr(\lambda_{0}, \Lambda) = 2 n\int_{-\infty}^{\infty} C^{T}(t)K(t)C(t)J_{\lambda_{0}}(t)d t. \end{align*} But as in the proof of Theorem \ref{hankel_mon}, we now see that the Kraus matrix is an integral of positive matrices, hence positive, and Theorem \ref{basic_con} finishes the claim. \end{proof} The next theorem finally completes the characterization of $n$-convex functions. The original characterization of Kraus is also improved. \begin{lause}\label{convex_complete} Let $n \geq 2$, and $(a, b)$ be an open interval. Now for $f : (a, b) \to \mathbb{R}$ the following are equivalent \begin{enumerate}[(i)] \item $f$ is $n$-convex. \item $f \in C^{2}(a, b)$ and the Kraus matrix $Kr(\lambda_{0}, \Lambda, f)$ is positive for any tuple $\Lambda \in (a, b)^{n}$ and $\lambda_{0} \in \Lambda$. \item $f \in C^{2}(a, b)$ and the Kraus matrix $Kr(\lambda_{0}, \Lambda, f)$ is positive for any tuple $\Lambda \in (a, b)^{n}$ and $\lambda_{0} \in (a, b)$. \item $f \in C^{2 n - 2}(a, b)$, $f^{(2 n - 2)}$ is convex, and the Hankel matrix $K_{n}(t, f)$, which makes sense almost everywhere, is positive for almost every $t \in (a, b)$. \end{enumerate} \end{lause} \begin{proof} $(i) \Leftrightarrow (ii)$ was proven in \cite{Kra}. For $C^{2 n}$ functions $(i) \Leftrightarrow (iv)$ is Theorem \ref{hankel_con}; the proof of Theorem \ref{hankel_con} also gives $(iv) \Rightarrow (iii)$ in this case.
For merely $C^{2 n - 2}$ functions these claims follow from regularization techniques as in the monotone case. $(iii) \Rightarrow (ii)$ is trivial. \end{proof} We also get an interesting corollary connecting monotonicity to convexity, extending a result in \cite{Ben}. \begin{kor}\label{divided_connection} Let $n \geq 2$, and $(a, b)$ be an open interval. If $f : (a, b) \to \mathbb{R}$ is $n$-convex, then for any $\lambda_{0} \in (a, b)$ the function $g = (x \mapsto [x, \lambda_{0}]_{f})$ is $n$-monotone. \end{kor} \begin{proof} Simply note that $L(\Lambda, g) = Kr(\lambda_{0}, \Lambda, f)$. \end{proof} \begin{huom} The ideas introduced in the paper can be generalized to characterize a more general class of functions called matrix $k$-tone functions, introduced in \cite{Franz}. A paper discussing related questions in this more general setting is in preparation. \end{huom} \section{Acknowledgements} We thank the open-source mathematical software Sage \cite{Sag} for invaluable support in discovering the main identities of this paper. We are also truly grateful to O. Hirviniemi, J. Junnila and E. Saksman, and anonymous reviewers for their helpful comments on the earlier versions of the manuscript.
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link type="text/css" href="http://www.nomad.so/ddoc/css/theme.css" rel="stylesheet" media="all" />
<script type="text/javascript" src="http://www.nomad.so/ddoc/javascript/jquery-2.0.3.min.js"></script>
<script type="text/javascript" src="http://www.nomad.so/ddoc/javascript/jquery.scrollTo.min.js"></script>
<script type="text/javascript" src="http://www.nomad.so/ddoc/javascript/index.js"></script>
<title>tkd.widget.keyboardfocus</title>
</head>
<body>
<h1>tkd.widget.keyboardfocus</h1>
<!-- Generated by Ddoc from tkd/widget/keyboardfocus.d -->
<div class="sections"><p>Widget module.</p>
<h3>License</h3><p>MIT. See LICENSE for full details.</p>
</div>
<div class="module-members"><h2><a name="KeyboardFocus"></a>enum <span class="symbol">KeyboardFocus</span>: string; </h2>
<div class="declaration-description"><div class="sections"><p>Keyboard focus values of widgets.</p>
</div>
<div class="enum-members"><h2><a name="KeyboardFocus.normal"></a><span class="symbol">normal</span></h2>
<div class="declaration-description"><div class="sections"><p>Default focus setting.</p>
</div>
</div>
<h2><a name="KeyboardFocus.enable"></a><span class="symbol">enable</span></h2>
<div class="declaration-description"><div class="sections"><p>Enable widget focus.</p>
</div>
</div>
<h2><a name="KeyboardFocus.disable"></a><span class="symbol">disable</span></h2>
<div class="declaration-description"><div class="sections"><p>Disable widget focus.</p>
</div>
</div>
</div>
</div>
</div>
</body>
</html>
\section{Introduction} \begin{defn} For a subset~$S$ of a finite group~$G$, the \emph{Cayley digraph} $\mathop{\dir{\mathrm{Cay}}}(G;S)$\refnote{S=ab} is the directed graph whose vertices are the elements of~$G$, and with a directed edge $g \to gs$ for every $g \in G$ and $s \in S$. The corresponding \emph{Cayley graph} is the underlying undirected graph that is obtained by removing the orientations from all the directed edges. \end{defn} It has been conjectured that every (nontrivial) connected Cayley graph has a hamiltonian cycle. (See the bibliography of \cite{KutnarEtAl-SmallOrder} for some of the literature on this problem.) This conjecture does not extend to the directed case, because there are many examples of connected Cayley digraphs that do not have hamiltonian cycles. In fact, infinitely many Cayley digraphs do not even have a hamiltonian path: \begin{prop}[{attributed to J.\,Milnor \cite[p.~201]{Nathanson-PartProds}}] \label{23NoHP} Assume the finite group~$G$ is generated by two elements $a$ and~$b$, such that $a^2 = b^3 = e$.\refnote{ProveMilnor} If\/ $|G| \ge 9 |ab^2|$, then the Cayley digraph\/ $\mathop{\dir{\mathrm{Cay}}}(G; a,b)$ does not have a hamiltonian path. \end{prop} The examples in the above \lcnamecref{23NoHP} are very constrained, because the order of one generator must be exactly~$2$, and the order of the other generator must be exactly~$3$. In this note, we provide an infinite family of examples in which the orders of the generators are not restricted in this way. In fact, $a$ and~$b$ can both be of arbitrarily large order: \begin{thm} \label{NewNonTraceable} For any $n \in \natural$, there is a connected Cayley digraph\/ $\mathop{\dir{\mathrm{Cay}}}(G;a,b)$, such that \noprelistbreak \begin{enumerate} \item $\mathop{\dir{\mathrm{Cay}}}(G;a,b)$ does not have a hamiltonian path, and \item $a$ and~$b$ both have order greater than~$n$. 
\end{enumerate} Furthermore, if $p$ is any prime number such that $p > 3$ and $p \equiv 3 \pmod{4}$, then we may construct the example so that the commutator subgroup of~$G$ has order~$p$. More precisely, $G = \mathbb{Z}_m \ltimes \mathbb{Z}_p$ is a semidirect product of two cyclic groups, so $G$ is metacyclic. \end{thm} \begin{rems} \ \noprelistbreak \begin{enumerate} \itemsep=\smallskipamount \item The above results show that connected Cayley digraphs on solvable groups do not always have hamiltonian paths. On the other hand, it is an open question whether connected Cayley digraphs on \emph{nilpotent} groups always have hamiltonian paths. (See \cite{Morris-2gen} for recent results on the nilpotent case.) \item The above results always produce a digraph with an even number of vertices. Do there exist infinitely many connected Cayley digraphs of odd order that do not have hamiltonian paths? \item We conjecture that the assumption ``$p \equiv 3 \pmod{4}$'' can be eliminated from the statement of \cref{NewNonTraceable}. On the other hand, it is necessary to require that $p > 3$ \csee{GGle3}. \item If $G$ is abelian, then it is easy to show that every connected Cayley digraph on~$G$ has a hamiltonian path. However, some abelian Cayley digraphs do not have a hamiltonian cycle. See \cref{NonhamAbelian} for more discussion of this. \item The proof of \cref{NewNonTraceable} appears in \cref{MainThmPfSect}, after some preliminaries in \cref{PrelimSect}. \end{enumerate} \end{rems} \section{Preliminaries} \label{PrelimSect} We recall some standard notation, terminology, and basic facts. \begin{notation} Let $G$ be a group, and let $H$ be a subgroup of~$G$. (All groups in this paper are assumed to be finite.) \noprelistbreak \begin{itemize} \item $e$ is the identity element of~$G$. \item $x^g = g^{-1} x g$, for $x,g \in G$. \item We write $H \trianglelefteq G$ to say that $H$ is a \emph{normal} subgroup of~$G$. 
\item $H^G = \langle\, h^g \mid h \in H, \, g \in G \,\rangle$ is the \emph{normal closure} of~$H$ in~$G$, so $H^G \trianglelefteq G$. \end{itemize} \end{notation} \begin{defn} Let $S$ be a subset of the group~$G$. \noprelistbreak \begin{itemize} \item $H = \langle S S^{-1} \rangle$ is the \emph{arc-forcing subgroup}, where $S S^{-1} = \{\, s t^{-1} \mid s,t \in S \,\}$. \item For any $a \in S$, $a^{-1} H$ is called the \emph{terminal coset}. (This is independent of the choice of~$a$.)\refnote{TermIndep} \item Any left coset of~$H$ that is not the terminal coset is called a \emph{regular coset}. \item For $g \in G$ and $s_1,\ldots,s_n \in S$, we use $[g](s_i)_{i=1}^n$ to denote the walk in $\mathop{\dir{\mathrm{Cay}}}(G;S)$ that visits (in order) the vertices $$ g, \ g s_1, \ g s_1 s_2, \ \ldots, \ g s_1 s_2 \cdots s_n .$$ We usually omit the prefix $[g]$ when $g = e$. Also, we often abuse notation when sequences are to be concatenated. For example, $$ \bigl( a^4, (s_i)_{i=1}^3, t_j \bigr)_{j=1}^2 = (a, a, a, a, s_1, s_2, s_3, t_1, a, a, a, a, s_1, s_2, s_3, t_2 ) .$$ \end{itemize} \end{defn} \begin{rems} \label{ArcForcingRems} \ \noprelistbreak \begin{enumerate} \item \label{ArcForcingRems-Sg} It is important to note that $\langle S S^{-1} \rangle \subseteq \langle Sg \rangle$, for every $g \in G$. Furthermore, we have $\langle S S^{-1} \rangle = \langle Sa^{-1} \rangle$, for every $a \in S$.\refnote{Sa} \item \label{ArcForcingRems-conj} It is sometimes more convenient to define the arc-forcing subgroup to be $\langle S^{-1} S \rangle$, instead of $\langle S S^{-1} \rangle$. (For example, this is the convention used in \cite[p.~42]{Morris-2gen}.) 
The difference is minor, because the two subgroups are conjugate: for any $a \in S$, we have\refnotelower0{aS} $$ \langle S^{-1} S \rangle = \langle a^{-1} S \rangle = \langle S a^{-1} \rangle^ a = \langle S S^{-1} \rangle^a.$$ \end{enumerate} \end{rems} \begin{defn} Suppose $L$~is a hamiltonian path in a Cayley digraph $\mathop{\dir{\mathrm{Cay}}}(G; S)$, and $s \in S$. \noprelistbreak \begin{itemize} \item A vertex $g \in G$ \emph{travels by~$s$} if $L$ contains the directed edge $g \to gs$. \item A subset $X$ of~$G$ \emph{travels by~$s$} if every element of~$X$ travels by~$s$. \end{itemize} \end{defn} \begin{lem}[{}{Housman \cite[p.~42]{Housman-enumeration}}] \label{HousmanThm} Suppose $L$ is a hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;a,b)$, with initial vertex~$e$, and let $H = \langle a b^{-1} \rangle$ be the arc-forcing subgroup.\refnote{HousmanHAid} Then: \noprelistbreak \begin{enumerate} \item \label{HousmanThm-terminal} The terminal vertex of~$L$ belongs to the terminal coset $a^{-1} H$.\refnotelower{-10}{HousmanAid} \item \label{HousmanThm-regular} Each regular coset either travels by~$a$ or travels by~$b$. 
\end{enumerate} \end{lem} \section{\texorpdfstring{Proof of \cref{NewNonTraceable}}{Proof of the Main Theorem}} \label{MainThmPfSect} Let \noprelistbreak \begin{itemize} \itemsep=\smallskipamount \item $\alpha$ be an even number that is relatively prime to $(p-1)/2$, with $\alpha > n$, \item $\beta$ be a multiple of $(p-1)/2$ that is relatively prime to~$\alpha$, with $\beta > n$, \item $\overline{a}$ be a generator of~$\mathbb{Z}_\alpha$, \item $\overline{b}$~be a generator of~$\mathbb{Z}_\beta$, \item $z$~be a generator of~$\mathbb{Z}_p$, \item $r$ be a primitive root modulo~$p$, \item $G = (\mathbb{Z}_\alpha \times \mathbb{Z}_\beta) \ltimes \mathbb{Z}_p$, where $z^{\overline{a}} = z^{-1}$ and $z^{\overline{b}} = z^{r^2}$, \item $a = \overline{a} z$, so $|a| = \alpha$, and $a$ inverts~$\mathbb{Z}_p$, \item $b = \overline{b} z$, so $|b| = \beta$, and $b$ acts on $\mathbb{Z}_p$ via an automorphism of order $(p-1)/2$, and \item $H = \bigl\langle a b^{-1} \bigr\rangle = \bigl\langle \overline{a} \, \overline{b}^{-1} \bigr\rangle = \mathbb{Z}_\alpha \times \mathbb{Z}_\beta$.\refnote{SetupH} \end{itemize} Suppose $L$ is a hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;a,b)$. This will lead to a contradiction. It is well known (and easy to see) that Cayley digraphs are vertex-transitive,\refnote{verttrans} so there is no harm in assuming that the initial vertex of~$L$ is~$e$. Note that: \noprelistbreak \begin{itemize} \item the terminal coset is $a^{-1} H = z^{-1} H$,\refnote{TermCosetz/MinOne} and \item since $p \equiv 3 \pmod{4}$, we have $ \mathbb{Z}_p^\times = \langle -1 , r^2 \rangle$. \end{itemize} \setcounter{case}{0} \begin{case} Assume at most one regular coset travels by~$a$ in~$L$. \end{case} Choose $z' \in \mathbb{Z}_p$, such that $z' H$\refnote{AllzH} is a regular coset, and assume it is the coset that travels by~$a$, if such exists. 
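As a quick sanity check on the setup above (separate from the proof itself), the construction can be modelled by machine for one illustrative choice of parameters. The sketch below takes $p = 7$, $r = 3$, $\alpha = 4$, and $\beta = 3$ (values chosen purely for illustration, with $n \le 2$), encodes an element $\overline{a}^x \, \overline{b}^y z^k$ of $G$ as the triple $(x,y,k)$, and verifies that $|a| = \alpha$, $|b| = \beta$, that $a$ inverts $\mathbb{Z}_p$, that $ab^{-1} = \overline{a}\,\overline{b}^{-1}$ generates $\mathbb{Z}_\alpha \times \mathbb{Z}_\beta$, and that $b^{(p-1)/2}$ lies in $H$:

```python
# Illustrative parameters: p = 7 (p > 3 and p = 3 mod 4), r = 3 (a primitive
# root mod 7, so r^2 = 2 has order (p-1)/2 = 3), alpha = 4 (even, coprime to
# (p-1)/2), beta = 3 (a multiple of (p-1)/2, coprime to alpha).
p, r, alpha, beta = 7, 3, 4, 3

def act(x, y):
    # Exponent by which abar^x bbar^y acts on z, from z^abar = z^(-1)
    # and z^bbar = z^(r^2).  (Recall z^g means g^(-1) z g.)
    return (pow(-1, x, p) * pow(r * r, y, p)) % p

def mul(g, h):
    (x1, y1, k1), (x2, y2, k2) = g, h
    # (abar^x1 bbar^y1 z^k1)(abar^x2 bbar^y2 z^k2)
    #   = abar^(x1+x2) bbar^(y1+y2) (z^k1)^(abar^x2 bbar^y2) z^k2
    return ((x1 + x2) % alpha, (y1 + y2) % beta, (k1 * act(x2, y2) + k2) % p)

def inv(g):
    x, y, k = g
    x2, y2 = (-x) % alpha, (-y) % beta
    return (x2, y2, (-k * act(x2, y2)) % p)

def order(g):
    n, h = 1, g
    while h != (0, 0, 0):
        n, h = n + 1, mul(h, g)
    return n

a, b, z = (1, 0, 1), (0, 1, 1), (0, 0, 1)   # a = abar z, b = bbar z

assert order(a) == alpha and order(b) == beta
assert mul(mul(inv(a), z), a) == (0, 0, p - 1)      # z^a = z^(-1)
assert mul(mul(inv(b), z), b) == (0, 0, r * r % p)  # z^b = z^(r^2)
assert mul(a, inv(b)) == (1, beta - 1, 0)           # a b^(-1) = abar bbar^(-1)
assert order(mul(a, inv(b))) == alpha * beta        # <a b^(-1)> = Z_alpha x Z_beta

bp = (0, 0, 0)
for _ in range((p - 1) // 2):
    bp = mul(bp, b)
assert bp[2] == 0                                   # b^((p-1)/2) lies in H
```

The same multiplication rule works unchanged for any admissible choice of $p$, $r$, $\alpha$, $\beta$.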
For $g \in G$, let $$ \mathcal{B}_g = \{\, g b^k H \mid k \in \mathbb{Z} \,\} .$$ Letting $p' = (p-1)/2$, we have $$ (r^2)^{p'-1} + (r^2)^{p'-2} + \cdots + (r^2)^{1} + 1 = \frac{(r^2)^{p'}-1}{r^2-1} = \frac{r^{p-1} -1}{r^2-1} \equiv 0 \pmod{p} ,$$ so $$b^{(p-1)/2} = (\overline{b} z)^{p'} = \overline{b}^{p'} z^{(r^2)^{p'-1} + (r^2)^{p'-2} + \cdots + (r^2)^{1} + 1} = \overline{b}^{p'} \in \mathbb{Z}_\alpha \times \mathbb{Z}_\beta = H .$$ Therefore $\# \mathcal{B}_e \le (p-1)/2 \le p-2$, so we can choose two cosets $z^i H$ and $z^j H$ that do not belong to~$\mathcal{B}_e$. Recall that, by definition, $z' H$ is not the terminal coset $z^{-1} H$, so $z' z$ is a nontrivial element of~$\mathbb{Z}_p$. Then, since $\mathbb{Z}_p^\times = \langle -1 , r^2 \rangle$, we can choose some $h \in \langle \overline{a} , \overline{b} \rangle = H$, such that $(z^{j-i})^h = z'z$. Now, since $$ z^i H, z^j H \notin \mathcal{B}_e ,$$ and $$z^{-1} h^{-1} z^{j-i} \in z^{-1} (z^{j-i})^h H = z^{-1} (z'z) H = z' H ,$$ we may multiply on the left by $g = z^{-1}h^{-1} z^{-i}$ to see that $$ z^{-1} H, z' H \notin \mathcal{B}_g .$$ Therefore, no element of~$\mathcal{B}_g$ is either the terminal coset or the regular coset that travels by~$a$. This means that every coset in $\mathcal{B}_g$ travels by~$b$, so $L$ contains the cycle $ [g](b^\beta)$, which contradicts the fact that $L$ is a (hamiltonian) path. \begin{case} \label{TwoCosetsCase} Assume at least two regular cosets travel by~$a$ in~$L$. \end{case} Let $z^i H$ and $z^j H$ be two regular cosets that both travel by~$a$. Since $\mathbb{Z}_p^\times = \langle -1 , r^2 \rangle$, we can choose some $h \in \langle \overline{a} , \overline{b} \rangle = H$, such that $(z^{-1})^h = z^{j-i}$. 
Note that $z^i h^{-1} a^k$ travels by~$a$, for every $k \in \mathbb{Z}$: \noprelistbreak \begin{itemize} \itemsep=\smallskipamount \item If $k = 2 \ell$ is even, then $$ a^k = (\overline{a} z)^{2\ell} = \bigl( \overline{a} z \overline{a} z \bigr)^\ell = \bigl( \overline{a}^2 z^{\overline{a}} z \bigr)^\ell = \bigl( \overline{a}^2 z^{-1} z \bigr)^\ell = \overline{a}^{2\ell} \in H ,$$ so $z^i h^{-1} a^k \in z^i H$ travels by~$a$. \item If $k = 2 \ell + 1$ is odd, then $$ a^k = (\overline{a} z)^{2\ell+1} = (\overline{a} z)^{2\ell} (\overline{a} z) = \overline{a}^{2\ell} (\overline{a} z) = \overline{a}^k z ,$$ so $$z^i h^{-1} a^k = z^i h^{-1} (\overline{a}^k z) = z^i h^{-1} z^{-1} \overline{a}^k = z^i (z^{-1})^h h^{-1} \overline{a}^k \in z^i (z^{j-i}) H = z^j H $$ travels by~$a$. \end{itemize} Therefore $L$ contains the cycle $[z^i h^{-1}](a^\alpha)$, which contradicts the fact that $L$ is a (hamiltonian) path. \qed \section{Cyclic commutator subgroups of very small order} \label{VerySmallSect} It is known that if $|[G,G]| = 2$, then every connected Cayley digraph on~$G$ has a hamiltonian path. (Namely, we have $[G,G] \subseteq Z(G)$, so $G$ is nilpotent,\refnote{CentralIsNilp} and the conclusion therefore follows from \fullcref{2gen}{prime} below.) In this \lcnamecref{VerySmallSect}, we prove the same conclusion when $|[G,G]| = 3$. We also provide counterexamples to show that the conclusion is not always true when $|[G,G]| = 4$ or $|[G,G]| = 5$. We begin with several lemmas. The first three each provide a way to convert a hamiltonian path in a Cayley digraph on an appropriate subgroup of~$G$ to a hamiltonian path in a Cayley digraph on all of~$G$. \begin{lem} \label{MayAssumeS=2} Assume \noprelistbreak \begin{itemize} \item $G$ is a finite group, such that\/ $[G,G] \cong \mathbb{Z}_{p^k}$, where $p$ is prime and $k \in \natural$, \item $S$ is a generating set for~$G$, \item $a,b \in S$, such that $\langle [a,b] \rangle = [G,G]$, and \item $N = \langle a,b \rangle$. 
\end{itemize} If\/ $\mathop{\dir{\mathrm{Cay}}}(N;a,b)$ has a hamiltonian path, then\/ $\mathop{\dir{\mathrm{Cay}}}(G;S)$ has a hamiltonian path. \end{lem} \begin{proof} Since $[G,G] \subseteq N$, we know that $G/N$ is an abelian group, so there is a hamiltonian path $(s_i)_{i=1}^m$ in $\mathop{\dir{\mathrm{Cay}}}(G/N;S)$ \cseebelow{abelianpath}. Also, by assumption, there is a hamiltonian path $(t_j)_{j=1}^n$ in $\mathop{\dir{\mathrm{Cay}}}(N;a,b)$. Then $$ \Bigl( \bigl( (t_j)_{j=1}^n, s_i \bigr)_{i=1}^m , (t_j)_{j=1}^n \Bigr)$$ is a hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;S)$.\refnote{MayAssumeS=2Aid} \end{proof} \begin{defn} If $K$ is a subgroup of~$G$, then $K \backslash {\mathop{\dir{\mathrm{Cay}}}(G;S)}$ denotes the digraph whose vertices are the right cosets of~$K$ in~$G$, and with a directed edge $Kg \to Kgs$ for each $g \in G$ and $s \in S$. Note that $K \backslash {\mathop{\dir{\mathrm{Cay}}}(G;S)} = \mathop{\dir{\mathrm{Cay}}}(G/K; S)$ if $K \trianglelefteq G$. \end{defn} \begin{lem}[``Skewed-Generators Argument,'' {cf.\ \cite[Lem.~2.6]{Morris-2gen}, \cite[Lem.~5.1]{Witte-pgrp}}] \label{SkewedGens} Assume: \noprelistbreak \begin{itemize} \item $S$ is a generating set for the group~$G$, \item $K$ is a subgroup of~$G$, such that every connected Cayley digraph on~$K$ has a hamiltonian path, \item $(s_i)_{i=1}^n$ is a hamiltonian cycle in $K \backslash {\mathop{\dir{\mathrm{Cay}}}(G;S)}$, and \item $\langle S s_2 s_3 \cdots s_n \rangle = K$. \end{itemize} Then\/ $\mathop{\dir{\mathrm{Cay}}}(G;S)$ has a hamiltonian path. \end{lem} \begin{proof} Since $\langle S s_2 s_3 \cdots s_n \rangle = K$, we know that $\mathop{\dir{\mathrm{Cay}}}(K; S s_2 s_3 \cdots s_n)$ is connected, so, by assumption, it has a hamiltonian path $(t_j s_2 s_3 \cdots s_n)_{j=1}^m$. 
Then $$ \Bigl( \bigl( t_j , (s_i)_{i=2}^n \bigr)_{j=1}^{m-1} , t_m, (s_i)_{i=2}^{n-1} \Bigr) $$ is a hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;S)$.\refnote{SkewedGensAid} \end{proof} \begin{lem} \label{GGPrimeHG} Assume: \noprelistbreak \begin{itemize} \item $S$ is a generating set of~$G$, with arc-forcing subgroup $H = \langle S S^{-1} \rangle$, \item there is a hamiltonian path in every connected Cayley digraph on $H^G$, and \item either $H = H^G$, or $H$~is contained in a \underline{unique} maximal subgroup of~$H^G$. \end{itemize} Then\/ $\mathop{\dir{\mathrm{Cay}}}(G;S)$ has a hamiltonian path. \end{lem} \begin{proof} It suffices to show \begin{align} \tag{$*$} \label{Ssss=HG} \begin{matrix} \text{there exists a hamiltonian cycle $(s_i)_{i=1}^n$ in $\mathop{\dir{\mathrm{Cay}}}(G/H^G; S)$,} \\[3pt] \text{such that $H^G = \langle Ss_2 \cdots s_n \rangle$} , \end{matrix} \end{align} for then \cref{SkewedGens} provides the desired hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;S)$. If $H^G = H$, then every hamiltonian cycle in $\mathop{\dir{\mathrm{Cay}}}(G/H^G; S)$ satisfies \pref{Ssss=HG} \fullcsee{ArcForcingRems}{Sg}. Thus, we may assume $H^G \neq H$, so, by assumption, $H$ is contained in a unique maximal subgroup~$M$ of~$H^G$. Since $H^G$ is generated by conjugates of~$S^{-1}S$ \fullcsee{ArcForcingRems}{conj}, there exist $a,b,c \in S$, such that $(a^{-1}b)^c \notin M$. We may also assume $H^G \neq G$ (since, by assumption, every Cayley digraph on~$H^G$ has a hamiltonian path), so, letting $n = |G:H^G| \ge 2$, we have the two hamiltonian cycles $(a^{n-1},c)$ and $(a^{n-2},b,c)$ in $\mathop{\dir{\mathrm{Cay}}}(G/H^G; S)$.\refnote{GGPrimeHGan} Since $$(a^{n-1} c)^{-1} (a^{n-2}bc) = (a^{-1}b)^c \notin M ,$$ the two products $a^{n-1} c$ and $a^{n-2}bc$ cannot both belong to~$M$. Hence, either $(a^{n-1},c)$ or $(a^{n-2},b,c)$ is a hamiltonian cycle $(s_i)_{i=1}^n$ in $\mathop{\dir{\mathrm{Cay}}}(G/H^G; S)$, such that $s_1s_2 \cdots s_n \notin M$. 
Since $M$ is the unique maximal subgroup of~$H^G$ that contains~$H$, this implies\refnotelower0{GGPrimeHGM} $$ H^G = \langle H, s_1s_2 \cdots s_n \rangle = \langle S s_2 s_3 \cdots s_n \rangle , $$ as desired. \end{proof} The final hypothesis of the preceding \lcnamecref{GGPrimeHG} is automatically satisfied when $[G,G]$ is cyclic of prime-power order: \begin{lem} \label{UniqueMax} If\/ $[G,G]$ is cyclic of order~$p^k$, where $p$~is prime, and $H$ is any subgroup of~$G$, then either $H = H^G$, or $H$~is contained in a unique maximal subgroup of~$H^G$. \end{lem} \begin{proof} Note that the normal closure $H^G$ is the (unique) smallest normal subgroup of~$G$ that contains~$H$.\refnote{UniqueMaxClosure} Therefore $H^G \subseteq H \, [G,G]$ (since $H \, [G,G]$ is normal in~$G$). This implies that if $M$ is any proper subgroup of~$H^G$ that contains~$H$, then\refnotelower0{UniqueMaxHK} $$M = H \cdot \bigl( M \cap [G,G] \bigr) \subseteq H \cdot \bigl( H^G \cap [G,G] \bigr)^p .$$ Therefore, $H \cdot \bigl( H^G \cap [G,G] \bigr)^p$ is the unique maximal subgroup of~$H^G$ that contains~$H$. \end{proof} The following known result handles the case where $G$ is nilpotent: \begin{thm}[Morris \cite{Morris-2gen}] \label{2gen} Assume $G$ is nilpotent, and $S$ generates~$G$. If either \noprelistbreak \begin{enumerate} \item \label{2gen-2} $\#S \le 2$, or \item\label{2gen-prime} $|[G,G]| = p^k$, where $p$~is prime and $k \in \natural$, \end{enumerate} then $\mathop{\dir{\mathrm{Cay}}}(G;S)$ has a hamiltonian path. \end{thm} We now state the main result of this \lcnamecref{VerySmallSect}: \begin{thm} \label{OnlyInvertHP} Suppose \noprelistbreak \begin{itemize} \item $[G,G]$ is cyclic of prime-power order, and \item every element of~$G$ either centralizes\/ $[G,G]$ or inverts it. \end{itemize} Then every connected Cayley digraph on~$G$ has a hamiltonian path. \end{thm} \begin{proof} Let $S$ be a generating set for~$G$. Write $[G,G] = \mathbb{Z}_{p^k}$ for some $p$ and~$k$.
Since every minimal generating set of $\mathbb{Z}_{p^k}$ has only one element, there exist $a,b \in S$, such that $\langle [a,b] \rangle = [G,G]$.\refnote{OnlyInvertHPGG} Then $\langle a,b \rangle$ is normal in~$G$ (since it contains $[G,G]$), so, by \cref{MayAssumeS=2}, we may assume $S = \{a,b\}$. Let $H = \langle ba^{-1} \rangle$ be the arc-forcing subgroup. We may assume $H^G = G$, for otherwise we could assume, by induction on~$|G|$, that every connected Cayley digraph on $H^G$ has a hamiltonian path, and then \cref{GGPrimeHG} would apply (since \cref{UniqueMax} verifies the remaining hypothesis). So $$H \, \mathbb{Z}_{p^k} = H \, [G,G] \supset H^G = G .$$ If $a$ and~$b$ both invert~$\mathbb{Z}_{p^k}$, then $H = \langle b a^{-1} \rangle$ centralizes $\mathbb{Z}_{p^k} = [G,G]$, so $G$ is nilpotent. Then \cref{2gen} applies. Therefore, we may now assume that $a$ does not invert~$\mathbb{Z}_{p^k}$. Then, by assumption, $a$ centralizes~$\mathbb{Z}_{p^k}$. Let $n = |G:H|$, and write $a = \overline{a} z$, where $\overline{a} \in H$ and $z \in \mathbb{Z}_{p^k}$. Then $a = \overline{a} z \in H z$ and $b = (ba^{-1}) (\overline{a} z) \in H z$. Since $\langle a,b \rangle = G$, this implies $H \langle z \rangle = G$.\refnote{OnlyInvertHPHz} Therefore $$[H](a^n) = [H, Hz, Hz^2,\ldots,Hz^{n-1}, H]$$ is a hamiltonian cycle in $H \backslash {\mathop{\dir{\mathrm{Cay}}}(G;S)}$, so \cref{SkewedGens} applies. \end{proof} \begin{cor} \label{GGle3} If\/ $|[G,G]| \le 3$, or\/ $[G,G] \cong \mathbb{Z}_4$, then every connected Cayley digraph on~$G$ has a hamiltonian path. \end{cor} \begin{proof} \Cref{OnlyInvertHP} applies, because inversion is the only nontrivial automorphism of $\{e\}$, $\mathbb{Z}_2$, $\mathbb{Z}_3$, or~$\mathbb{Z}_4$. \end{proof} \begin{rem}[{}{\cite[p.~266]{HolsztynskiStrube}}] In the statement of \cref{GGle3}, the assumption that $[G,G] \cong \mathbb{Z}_4$ cannot be replaced with the weaker assumption that $|[G,G]| = 4$. 
For a counterexample, let $G = A_4 \times \mathbb{Z}_2$. Then $|[G,G]| = 4$, but it can be shown without much difficulty that $\mathop{\dir{\mathrm{Cay}}}(G; a,b)$ does not have a hamiltonian path when $a = \bigl( (1\, 2)(3 \, 4), 1 \bigr)$ and $b = \bigl( (1\, 2 \, 3), 0 \bigr)$.\refnote{A4xZ2NoPath} \end{rem} Here is a counterexample when $|[G,G]| = 5$: \begin{eg} \label{G'=5NoHP} Let $G = \mathbb{Z}_{12} \ltimes \mathbb{Z}_5 = \langle h \rangle \ltimes \langle z \rangle$, where $z^h = z^3$. Then $|[G,G]| = 5$, and the Cayley digraph $\mathop{\dir{\mathrm{Cay}}}(G; h^2z, h^3z)$ is connected, but does not have a hamiltonian path. \end{eg} \begin{proof} A computer search can confirm the nonexistence of a hamiltonian path very quickly, but, for completeness, we provide a human-readable proof. Let $a = h^2z = z^4 h^2$ and $b = h^3z = z^3 h^3$. The argument in \cref{TwoCosetsCase} of the proof of \cref{NewNonTraceable} shows that no more than one regular coset travels by~$a$ in any hamiltonian path. On the other hand, since a hamiltonian path cannot contain any cycle of the form $[g](b^4)$, we know that at least $\bigl\lfloor \bigl(|G| - 1 \bigr)/4 \bigr\rfloor = 14$ vertices must travel by~$a$. Since $|ab^{-1}| = 12 < 14$, this implies that some regular coset travels by~$a$. So exactly one regular coset travels by~$a$ in any hamiltonian path. For $0 \le i \le 3$ and $0\le m \le 11$, let $L_{i,m}$ be the spanning subdigraph of $\mathop{\dir{\mathrm{Cay}}}(G; a,b)$ in which: \noprelistbreak \begin{itemize} \item all vertices have outvalence~$1$, except $b^{-1}(a b^{-1})^m = z^4 h^{9-m}$, which has outvalence~$0$, \item the vertices in the regular coset $z^i H$ travel by~$a$, \item a vertex $b^{-1}h^{-j} = z^4 h^{9-j}$ in the terminal coset travels by~$a$ if $0 \le j < m$, and \item all other vertices travel by~$b$. 
\end{itemize} An observation of D.\,Housman \cite[Lem.~6.4(b)]{CurranWitte} tells us that if $L$ is a hamiltonian path from $e$ to $b^{-1}(a b^{-1})^m$, in which $z^i H$ is the regular coset that travels by~$a$, then $L = L_{i,m}$.\refnote{TermCosetTravels} Thus, from the conclusion of the preceding paragraph, we see that every hamiltonian path (with initial vertex~$e$) must be equal to $L_{i,m}$, for some~$i$ and~$m$. However, $L_{i,m}$ is not a (hamiltonian) path. More precisely, for each possible value of~$i$ and~$m$, the following list displays a cycle that is contained in~$L_{i,m}$: \noprelistbreak \begin{itemize} \item if $i = 0$ and $0 \le m \le 8$: $$z^2 h^3 \stackrel{\textstyle b}{\to} z h^6 \stackrel{\textstyle b}{\to} z^3 h^9 \stackrel{\textstyle b}{\to} z^4 \stackrel{\textstyle b}{\to} z^2 h^3 ;$$ \item if $i = 0$ and $9 \le m \le 11$: $$h^2 \stackrel{\textstyle a}{\to} z h^4 \stackrel{\textstyle b}{\to} z^4 h^7 \stackrel{\textstyle a}{\to} z h^9 \stackrel{\textstyle b}{\to} z^2 \stackrel{\textstyle b}{\to} h^3 \stackrel{\textstyle a}{\to} z^2 h^5 \stackrel{\textstyle b}{\to} z^3 h^8 \stackrel{\textstyle b}{\to} z h^{11} \stackrel{\textstyle b}{\to} h^2 ;$$ \item if $i = 1$ and $0 \le m \le 7$: $$h^4 \stackrel{\textstyle b}{\to} z^3 h^7 \stackrel{\textstyle b}{\to} z^2 h^{10} \stackrel{\textstyle b}{\to} z^4 h \stackrel{\textstyle b}{\to} h^4 ;$$ \item if $i = 1$ and $8 \le m \le 11$: $$h \stackrel{\textstyle b}{\to} z h^4 \stackrel{\textstyle a}{\to} h^6 \stackrel{\textstyle b}{\to} z^2 h^9 \stackrel{\textstyle b}{\to} z^3 \stackrel{\textstyle b}{\to} z h^3 \stackrel{\textstyle a}{\to} z^3 h^5 \stackrel{\textstyle b}{\to} z^4 h^8 \stackrel{\textstyle a}{\to} z^3 h^{10} \stackrel{\textstyle b}{\to} h ;$$ \item if $i = 2$ and $0 \le m \le 9$: $$h^5 \stackrel{\textstyle b}{\to} z h^8 \stackrel{\textstyle b}{\to} z^4 h^{11} \stackrel{\textstyle b}{\to} z^3 h^2 \stackrel{\textstyle b}{\to} h^5 ;$$ \item if $i = 2$ and $10 \le m \le 11$: $$z^2 h^3 
\stackrel{\textstyle a}{\to} z^4 h^5 \stackrel{\textstyle a}{\to} z^2 h^7 \stackrel{\textstyle a}{\to} z^4 h^9 \stackrel{\textstyle a}{\to} z^2 h^{11} \stackrel{\textstyle a}{\to} z^4 h \stackrel{\textstyle a}{\to} z^2 h^3 ; $$ \item if $i = 3$ and $0 \le m \le 10$: $$h^7 \stackrel{\textstyle b}{\to} z^4 h^{10} \stackrel{\textstyle b}{\to} z h \stackrel{\textstyle b}{\to} z^2 h^4 \stackrel{\textstyle b}{\to} h^7 ;$$ \item if $i = 3$ and $m = 11$: $$z^3 h^2 \stackrel{\textstyle a}{\to} z^4 h^4 \stackrel{\textstyle a}{\to} z^3 h^6 \stackrel{\textstyle a}{\to} z^4 h^8 \stackrel{\textstyle a}{\to} z^3 h^{10} \stackrel{\textstyle a}{\to} z^4 \stackrel{\textstyle a}{\to} z^3 h^2 .$$ \end{itemize} Since $L_{i,m}$ is never a hamiltonian path, we conclude that $\mathop{\dir{\mathrm{Cay}}}(G;a,b)$ does not have a hamiltonian path. \end{proof} \section{Nonhamiltonian Cayley digraphs on abelian groups} \label{NonhamAbelian} When $G$ is abelian, it is easy to find a hamiltonian path in $\mathop{\dir{\mathrm{Cay}}}(G;S)$: \begin{prop}[{}{\cite[Thm.~3.1]{HolsztynskiStrube}}] \label{abelianpath} Every connected Cayley digraph on any abelian group has a hamiltonian path.\refnote{abelianpathaid} \end{prop} On the other hand, it follows from \fullcref{HousmanThm}{regular} that sometimes there is no hamiltonian cycle: \begin{prop}[Rankin {\cite[Thm.~4]{Rankin-Camp}}] \label{Rankin2gen} Assume $G = \langle a, b \rangle$ is abelian. Then there is a hamiltonian cycle in $\mathop{\dir{\mathrm{Cay}}}(G; a,b)$ if and only if\refnote{RankinAid} there exist $k,\ell \ge 0$, such that $\langle a^k b^\ell \rangle = \langle a b^{-1} \rangle$, and $k + \ell = |G : \langle a b^{-1} \rangle|$. \end{prop} \begin{eg} If $\gcd(a, n) > 1$ and $\gcd(a+1, n) > 1$, then $\mathop{\dir{\mathrm{Cay}}} \bigl( \mathbb{Z}_n ; a, a+1 \bigr)$ does not have a hamiltonian cycle.\refnote{EasyNoHam} \end{eg} The non-hamiltonian Cayley digraphs provided by \cref{Rankin2gen} are $2$-generated. 
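The example $\mathop{\dir{\mathrm{Cay}}}(\mathbb{Z}_n; a, a+1)$ above is small enough to check exhaustively for small~$n$. The following sketch (illustrative only) takes $n = 6$ and $a = 2$, so that $\gcd(a,n) = 2 > 1$ and $\gcd(a+1,n) = 3 > 1$, searches for a hamiltonian cycle by brute force, and includes $\mathop{\dir{\mathrm{Cay}}}(\mathbb{Z}_6; 1, 2)$ as a positive control:

```python
def has_ham_cycle(n, steps):
    # Exhaustive depth-first search for a hamiltonian cycle in Cay(Z_n; steps).
    # Since Cayley digraphs are vertex-transitive, it suffices to look for
    # cycles through the vertex 0.
    def dfs(v, visited):
        if len(visited) == n:
            # every vertex visited; can the cycle be closed back to 0?
            return any((v + s) % n == 0 for s in steps)
        return any(dfs((v + s) % n, visited | {(v + s) % n})
                   for s in steps if (v + s) % n not in visited)
    return dfs(0, {0})

# gcd(2, 6) > 1 and gcd(3, 6) > 1, so no hamiltonian cycle is predicted:
assert not has_ham_cycle(6, (2, 3))
# positive control: six 1-steps form a hamiltonian cycle in Cay(Z_6; 1, 2)
assert has_ham_cycle(6, (1, 2))
```

(Indeed, a closed walk of six steps from $\{2,3\}$ has length $\equiv 0 \pmod 6$ only when all six steps are equal, and neither $\langle 2 \rangle$ nor $\langle 3 \rangle$ is all of $\mathbb{Z}_6$.)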
A few $3$-generated examples are also known. Specifically, the following result lists (up to isomorphism) the only known examples of connected, non-hamiltonian Cayley digraphs $\mathop{\dir{\mathrm{Cay}}}(G; S)$, such that $\#S > 2$ (and $e \notin S$): \begin{thm}[Locke-Witte {\cite{LockeWitte}}] \label{LockeWitte} The following Cayley digraphs do not have hamiltonian cycles: \begin{enumerate} \item \label{LockeWitte-12k} $\mathop{\dir{\mathrm{Cay}}}(\mathbb{Z}_{12k}; 6k, 6k+2, 6k+3)$, for any $k \in \mathbb{Z}^+$, and \item \label{LockeWitte-2k} $\mathop{\dir{\mathrm{Cay}}}(\mathbb{Z}_{2k} ; a, b, b+k)$, for $a,b,k \in \mathbb{Z}^+$, such that certain technical conditions \pref{2kConds} are satisfied. \end{enumerate} \end{thm} \begin{rem} \label{2kConds} The precise conditions in \pref{LockeWitte-2k} are: (i)~either $a$ or~$k$ is odd, (ii)~either $a$~is even or $b$ and~$k$ are both even, (iii)~$\gcd(a-b,k) = 1$, (iv)~$\gcd(a,2k) \neq 1$, and (v)~$\gcd(b,k) \neq 1$. \end{rem} It is interesting to note that, in the examples provided by \cref{LockeWitte}, the group~$G$ is cyclic (either $\mathbb{Z}_{12k}$ or ~$\mathbb{Z}_{2k}$), and either \begin{enumerate} \item[\pref{LockeWitte-12k}] one of the generators has order~$2$, or \item[\pref{LockeWitte-2k}] two of the generators differ by an element of order~$2$. \end{enumerate} S.\,J.\,Curran (personal communication) asked whether the constructions could be generalized by allowing $G$ to be an abelian group that is not cyclic. We provide a negative answer for case \pref{LockeWitte-2k}: \begin{prop} \label{3genAbelMustBeCyclic} Let $G$ be an abelian group\/ {\upshape(}\!written additively{\upshape)}, and let $a,b,k \in G$, such that $k$ is an element of order\/~$2$. {\upshape(}Also assume $\{a,b,b+k\}$ consists of three distinct, nontrivial elements of~$G$.{\upshape)} If the Cayley digraph $\mathop{\dir{\mathrm{Cay}}}(G;a,b,b+k)$ is connected, but does not have a hamiltonian cycle, then $G$ is cyclic. 
\end{prop} \begin{proof} We prove the contrapositive: assume $G$ is not cyclic, and we will show that the Cayley digraph has a hamiltonian cycle (if it is connected). The argument is a modification of the proof of \cite[Thm.~4.1($\Leftarrow$)]{LockeWitte}.\refnote{3genAbelMustBeCyclicAid} Construct a subdigraph~$H_0$ of~$G$ as in \cite[Defn.~4.2]{LockeWitte}, but with $G$ in the place of~$\mathbb{Z}_{2k}$, with $|G|$ in the place of~$2k$, and with $|a|$ in the place of~$d$. (Case~1 is when $k \notin \langle a \rangle$; Case~2 is when $k \in \langle a \rangle$.) Every vertex of~$H_0$ has both invalence~$1$ and outvalence~$1$. The argument in Case~3 of the proof of \cite[Thm.~4.1($\Leftarrow$)]{LockeWitte} shows that $\mathop{\dir{\mathrm{Cay}}}(G; a,b,b+k)$ has a hamiltonian cycle if $\langle a - b, k \rangle \neq G$. Therefore, we may assume $\langle a - b, k \rangle = G$. On the other hand, we know $\langle a-b \rangle \neq G$ (because $G$ is not cyclic). Since $|k| = 2$, this implies $G = \langle a-b \rangle \oplus \langle k \rangle$. Since $G$ is not cyclic, this implies that $a-b$ has even order. Also, we may write $a = a' + k'$ and $b = b' + k''$ for some (unique) $a',b' \in \langle a - b \rangle$ and $k', k'' \in \langle k \rangle$. (Since $a' - b' \in \langle a - b \rangle$, it is easy to see that $k' = k''$, but we do not need this fact.) \begin{claim} $H_0$ has an odd number of connected components. \end{claim} Arguing as in the proof of \cite[Lem.~4.1]{LockeWitte} (except that, as before, Case~1 is when $k \notin \langle a \rangle$, and Case~2 is when $k \in \langle a \rangle$), we see that the number of connected components in~$H_0$ is $$ \begin{cases} |G : \langle a, k \rangle| + |G : \langle b, k \rangle| & \text{if $k \notin \langle a \rangle$} , \\ |G : \langle b, k \rangle| & \text{if $k \in \langle a \rangle$} . 
\end{cases}$$ Since $\langle a' - b' \rangle = \langle a - b \rangle$, we know that one of~$a'$ and~$b'$ is an even multiple of $a - b$, and the other is an odd multiple. (Otherwise, the difference would be an even multiple of $a - b$, so it would not generate $\langle a - b \rangle$.) Thus, one of $|G : \langle a, k \rangle|$ and $|G : \langle b, k \rangle|$ is even, and the other is odd. So $|G : \langle a, k \rangle| + |G : \langle b, k \rangle|$ is odd. This establishes the claim if $k \notin \langle a \rangle$. We may now assume $k \in \langle a \rangle$. This implies that the element $a'$ has odd order (and $k'$ must be nontrivial, but we do not need this fact). This means that $a'$ is an even multiple of $a-b$, so $b'$~must be an odd multiple of $a-b$ (since $\langle a' - b' \rangle = \langle a - b \rangle$). Therefore $|\langle a - b \rangle : \langle b' \rangle|$ is odd, which means $|G : \langle b,k \rangle|$ is odd. This completes the proof of the claim. \medbreak Now, if $|G : \langle b, k \rangle|$ is odd, we can apply a very slight modification of the argument in Case~4 of the proof of \cite[Thm.~4.1($\Leftarrow$)]{LockeWitte}. (Subcase~4.1 is when $k \notin \langle a \rangle$ and Subcase~4.2 is when $k \in \langle a \rangle$.) We conclude that $\mathop{\dir{\mathrm{Cay}}}(G;a,b,b+k)$ has a hamiltonian cycle, as desired. Finally, if $|G : \langle b, k \rangle|$ is even, then more substantial modifications to the argument in \cite{LockeWitte} are required. For convenience, let $m = |G : \langle a, k \rangle|$. Note that, since $|G : \langle b, k \rangle|$ is even, the proof of the claim shows that $m$ is odd and $k \notin \langle a \rangle$. Define $H_0'$ as in Subcase~4.1 of \cite[Thm.~4.1($\Leftarrow$)]{LockeWitte} (with $G$ in the place of~$\mathbb{Z}_{2k}$, and replacing $\gcd(b,k)$ with $|G : \langle b, k \rangle|$). 
Let $H_1 = H_0'$, and inductively construct, for $1 \le i \le (m+1)/2$, an element $H_i$ of $\mathcal{E}$, such that $$ \{\, v \mid \text{$z_v = 0$ and $0 \le y_v \le 2i-2$} \,\} \cup \{\, v \mid \text{$z_v = 1$ and $x_v = 0$ or~$1$ (mod $|G : \langle b, k \rangle|$)} \,\} $$ is a component of~$H_i$, and all other components are components of~$H_0$. The construction of~$H_i$ from~$H_{i-1}$ is the same as in Subcase~4.1, but with $2i$ replaced by~$2i-1$. We now let $K_1 = H_{(m+1)/2}$, and inductively construct, for $1 \le i \le |G : \langle b, k \rangle|/2$, an element~$K_i$ of~$\mathcal{E}$, such that $$ \{\, v \mid z_v = 0 \,\} \cup \{\, v \mid \text{$z_v = 1$ and $x_v \equiv 0,1,\ldots,$ or $2i-1$ (mod $|G : \langle b, k \rangle|$)} \,\} $$ is a single component of~$K_i$. Namely, \cite[Lem.~4.2]{LockeWitte} implies there is an element $K_i = K_{i-1}'$, such that $(2i-2)a$, $(2i-2)a + k$, and $(2i-1)a + k$ are all in the same component of~$K_i$. Then, for $i = |G : \langle b, k \rangle|/2$, we see that $K_i$ is a hamiltonian cycle. \end{proof} \begin{ack} I thank Stephen J.~Curran for asking the question that inspired \cref{3genAbelMustBeCyclic}. The other results in this paper were obtained during a visit to the School of Mathematics and Statistics at the University of Western Australia (partially supported by funds from Australian Research Council Federation Fellowship FF0770915). I am grateful to my colleagues there for making my visit so productive and enjoyable. \end{ack}
Q: How to import theme in Grafana when building a new panel plugin?

I am currently trying to build a new panel plugin as a ReactJS component for my Grafana application. I am following the official guide and trying to display a circle that changes color according to the option set by the user. The guide proposes the following code:

import React from 'react';
import { PanelProps } from '@grafana/data';
import { SimpleOptions } from 'types';

interface Props extends PanelProps<SimpleOptions> {}

export const SimplePanel: React.FC<Props> = ({ options, data, width, height }) => {
  let color: string;
  switch (options.color) {
    case 'red':
      color = theme.palette.redBase; // <--- Cannot find name 'theme'
      break;
    case 'green':
      color = theme.palette.greenBase; // <--- Cannot find name 'theme'
      break;
    case 'blue':
      color = theme.palette.blue95; // <--- Cannot find name 'theme'
      break;
  }

  return (
    <g>
      <circle style={{ fill: color }} r={100} />
    </g>
  );
};

However, I get a compile error (Cannot find name 'theme') because there is no "theme" imported, and the guide does not specify where to find it. How can I import theme?

A: The theme can be imported from the @grafana/ui package via the useTheme hook. Note that the switch should also end in a default branch (folded into the 'blue' case below), so that under strict TypeScript settings `color` is definitely assigned before it is used:

import React from 'react';
import { PanelProps } from '@grafana/data';
import { SimpleOptions } from 'types';
import { useTheme } from '@grafana/ui'; // import the useTheme hook

interface Props extends PanelProps<SimpleOptions> {}

export const SimplePanel: React.FC<Props> = ({ options, data, width, height }) => {
  const theme = useTheme(); // invoke the hook to get the current theme
  let color: string;
  switch (options.color) {
    case 'red':
      color = theme.palette.redBase;
      break;
    case 'green':
      color = theme.palette.greenBase;
      break;
    default: // 'blue'
      color = theme.palette.blue95;
      break;
  }

  return (
    <g>
      <circle style={{ fill: color }} r={100} />
    </g>
  );
};
class TestPyxnatSession:
    def __init__(self, project, subject, session, scans, assessors):
        self.scans_ = scans
        self.assessors_ = assessors
        self.project = project
        self.subject = subject
        self.session = session

    def scans(self):
        return self.scans_

    def assessors(self):
        return self.assessors_


class TestAttrs:
    def __init__(self, properties):
        pass


class TestPyxnatScan:
    def __init__(self, project, subject, session, scanjson):
        self.scanjson = scanjson
        self.project = project
        self.subject = subject
        self.session = session
        uristr = '/data/project/{}/subjects/{}/experiments/{}/scans/{}'
        self._uri = uristr.format(project, subject, session, self.scanjson['label'])

    def id(self):
        return self.scanjson['id']

    def label(self):
        return self.scanjson['label']


class TestPyxnatAssessor:
    def __init__(self, project, subject, session, asrjson):
        self.asrjson = asrjson
        self.project = project
        self.subject = subject
        self.session = session
        uristr = '/data/project/{}/subjects/{}/experiments/{}/assessors/{}'
        self._uri = uristr.format(project, subject, session, self.asrjson['label'])

    def id(self):
        return self.asrjson['id']

    def label(self):
        return self.asrjson['label']

    def inputs(self):
        return self.asrjson['xsitype'] + '/' + self.asrjson['inputs']
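A minimal usage sketch of these stubs follows. The two classes it exercises are restated so the snippet runs on its own, and the project, subject, session, and scan values are made up purely for illustration:

```python
# Restated from the stubs above so this sketch is self-contained.
class TestPyxnatScan:
    def __init__(self, project, subject, session, scanjson):
        self.scanjson = scanjson
        self.project = project
        self.subject = subject
        self.session = session
        uristr = '/data/project/{}/subjects/{}/experiments/{}/scans/{}'
        self._uri = uristr.format(project, subject, session, self.scanjson['label'])

    def id(self):
        return self.scanjson['id']

    def label(self):
        return self.scanjson['label']


class TestPyxnatSession:
    def __init__(self, project, subject, session, scans, assessors):
        self.scans_ = scans
        self.assessors_ = assessors
        self.project = project
        self.subject = subject
        self.session = session

    def scans(self):
        return self.scans_

    def assessors(self):
        return self.assessors_


# Hypothetical sample data, for illustration only.
scan = TestPyxnatScan('Proj1', 'Subj1', 'Sess1', {'id': '1', 'label': 'T1w'})
session = TestPyxnatSession('Proj1', 'Subj1', 'Sess1', [scan], [])

assert scan.id() == '1'
assert scan._uri == '/data/project/Proj1/subjects/Subj1/experiments/Sess1/scans/T1w'
assert [s.label() for s in session.scans()] == ['T1w']
assert session.assessors() == []
```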
New Phase 3 Trial Data Back P2B001 as Therapy for Early Parkinson's

P2B001 may offer effective symptom control with fewer side effects

by Andrea Lobo | September 19, 2022

New trial data suggest that the combination therapy P2B001 for early Parkinson's disease may offer effective symptom control with significantly fewer side effects — notably, less daytime sleepiness — than available treatments. The therapy, being developed by the Israeli-based Pharma Two B, could potentially be a first-line, once-daily treatment for people with early Parkinson's.

"The data show P2B001's potential to offer patients with early [Parkinson's] a treatment option that can significantly improve motor symptoms and daily function, while reducing common side effects associated with dopamine agonists, such as daytime sleepiness, orthostatic hypotension [a type of low blood pressure], and hallucinations," Sheila Oren, MD, CEO of Pharma Two B, said in a company press release.

Orthostatic hypotension is low blood pressure that occurs when standing after sitting or lying down, and causes dizziness, lightheadedness, or fainting.

These results were based on data from a recently completed Phase 3 clinical trial (NCT033295508). A poster/presentation titled "P2B001 in the management of untreated PD. Results from a randomized, double-blind, double-dummy controlled trial" was presented at the Movement Disorder Society (MDS) International Congress of Parkinson's Disease and Movement Disorders, which took place in Madrid, Spain, Sept. 15–18.

Testing P2B001 in patients

P2B001 is an extended-release (ER) formulation, meaning that the medication is formulated so that it is released slowly over time. It contains low doses of pramipexole (0.6 mg) and rasagiline (0.75 mg). Both compounds are approved therapies for Parkinson's — marketed as Mirapex (pramipexole) and Azilect (rasagiline).
The low doses used in P2B001 are not currently available on the market.

Mirapex is a dopamine agonist, which acts as a substitute for dopamine and binds dopamine receptors in the brains of Parkinson's patients. It relieves some of the major motor symptoms caused by low dopamine levels.

Azilect is a monoamine oxidase-B (MAO-B) inhibitor that increases dopamine levels in the brain. MAO-B normally breaks down dopamine to prevent its levels from becoming too high. In Parkinson's, inhibiting MAO-B allows the body to maintain dopamine for longer periods, restoring some dopamine communication in the brain and reducing motor symptoms.

According to the company, the combination of both compounds can potentiate (increase the effectiveness of) the rise in dopamine levels.

The Phase 3 clinical trial was a multicenter study involving 544 early Parkinson's patients that aimed to assess the safety and efficacy of P2B001. The participants, ages 35–80, were randomly assigned to receive P2B001 given once a day (150 patients), or its compounds separately (pramipexole 0.6 mg, 148 patients; rasagiline 0.75 mg, 147 patients), for 12 weeks, or about three months.

The researchers compared the results of treatment in these patients with those of a group of 74 people with early Parkinson's receiving Mirapex-ER, also for 12 weeks.

Previous results had suggested that P2B001 had superior efficacy to either of its two components. Now, the new data confirmed that P2B001 indeed had superior efficacy in controlling Parkinson's symptoms compared with either pramipexole or rasagiline individually. The results were based on total Unified Parkinson's Disease Rating Scale (UPDRS) scores and on individual motor and activities-of-daily-living UPDRS measures. UPDRS is the most widely applied rating scale for assessing the severity and progression of Parkinson's.
P2B001 also showed comparable efficacy to optimal doses of marketed Mirapex-ER, with significantly less daytime somnolence, as measured by the Epworth Sleepiness Scale (ESS), which measures patients' general level of sleepiness during the day. Treatment with P2B001 also led to fewer dopaminergic side effects, such as sleepiness and orthostatic hypotension.

Warren Olanow, MD, professor emeritus in the departments of neurology and neuroscience at the Icahn School of Medicine at Mount Sinai in New York, further noted that P2B001 is given once daily and does not need any adjustment period, known as titration, to find the lowest dose capable of achieving therapeutic effects with minimal side effects.

Pharma Two B is now working on a regulatory market approval filing with the U.S. Food and Drug Administration for P2B001.

"If approved, once daily P2B001 may offer a new approach for initiating [Parkinson's disease] therapy," Oren concluded.

Andrea Lobo

Andrea Lobo is a science writer at BioNews. She holds a biology degree and a PhD in cell biology/neurosciences from the University of Coimbra, Portugal, where she studied stroke biology. She was a postdoctoral and senior researcher at the Institute for Research and Innovation in Health in Porto, studying neuronal plasticity induced by amphetamines in the context of drug addiction. As a research scientist for 19 years, Andrea participated in academic projects in multiple research fields, from stroke and gene regulation to cancer and rare diseases. She authored multiple research papers in peer-reviewed journals. She shifted toward a career in science writing and communication in 2022.
Q: String Approximation (Fetching the nearest matched String from Dictionary)

Is there any string-matching code or algorithm which gives us the approximately matched string from a dictionary (a pre-defined set of strings)? For example: if there are 10 strings in the dictionary and the user inputs some string, the algorithm should return the nearest matching string from the dictionary. It would be great if I could also get the match value (or percentage).

A: I think it's better to use the Lucene library. It has a package called org.apache.lucene.search.spell that you can use easily. It provides 3 algorithms: NGramDistance, LevensteinDistance, JaroWinklerDistance. Try this.

A: You can calculate a Levenshtein distance between your String and the strings in your dictionary to find the closest matches. This may not be the best for spell checking, as it gives no favour to letters being swapped around or to words which are phonetically similar. E.g. question is closer to resting than kwizchum. For more examples, you can read up on http://en.wikipedia.org/wiki/Approximate_string_matching

A: I just wanted to add that StringUtils also has a convenient Levenshtein distance method since version 3.0:

public static int getLevenshteinDistance(CharSequence s, CharSequence t)

After that it's just as simple as iterating through the collection and remembering the closest match:

public static Object findClosestMatch(Collection<?> collection, Object target) {
    int distance = Integer.MAX_VALUE;
    Object closest = null;
    for (Object compareObject : collection) {
        int currentDistance = StringUtils.getLevenshteinDistance(compareObject.toString(), target.toString());
        if (currentDistance < distance) {
            distance = currentDistance;
            closest = compareObject;
        }
    }
    return closest;
}

Note that the method above requires the collection to be null-safe and toString() to be sensibly implemented.

A: You can try the Levenshtein distance technique.
The simple idea: you have four basic operations:

*Insertion (hell -> hello)
*Replacement (nice -> rice)
*Deletion (bowling -> bowlin)
*Swapping (brohter -> brother)

Your algorithm should calculate the distance between your word and every word in the dictionary. The smallest distance means that word matches the given input most closely.
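The four operations above are those of the Damerau-Levenshtein (optimal string alignment) distance. As a rough, library-free sketch of how they combine in the standard dynamic-programming table (shown in Python rather than Java for brevity; the names osa_distance and closest_match are invented for this example and not part of any library mentioned in this thread):

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment distance: insertion, deletion,
    replacement, and adjacent-character swap each cost 1."""
    # d[i][j] = distance between the first i chars of a and first j chars of b
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i  # delete all i characters
    for j in range(len(b) + 1):
        d[0][j] = j  # insert all j characters
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # replacement (or match)
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # swapping
    return d[len(a)][len(b)]


def closest_match(dictionary, word):
    """Return (best_entry, similarity_percent) over the dictionary."""
    best = min(dictionary, key=lambda entry: osa_distance(entry, word))
    dist = osa_distance(best, word)
    return best, round(100 * (1 - dist / max(len(best), len(word), 1)))
```

For example, osa_distance("brohter", "brother") is 1 (a single swap), where plain Levenshtein would report 2; the similarity percentage simply normalizes the distance by the longer string's length.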
What Is Vitiligo? Symptoms, Causes, Diagnosis, Treatment, and Prevention

By Moira Lawler | Medically Reviewed by Ross Radusky, MD | Reviewed: August 12, 2022

Vitiligo is the result of the skin's melanocytes (the cells responsible for giving the skin color) being destroyed. It can occur in anyone, but it tends to be most noticeable in people with darker skin because the contrast is more pronounced. (Image: Isabella Dias/Getty Images)

Vitiligo is a condition that causes the skin to lose its color, according to the American Academy of Dermatology (AAD). This skin disorder can occur in people of any race. It's most noticeable, though, among people with darker skin, because the contrast between normal skin tone and the white patches affected by vitiligo is more pronounced, notes the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS).

People with vitiligo experience skin color loss in various areas of the body. Often it's symmetrical, affecting both sides, such as the left and right hands or both knees. Some experience discoloration in the mouth, on the scalp, or of their hair, eyelashes, or eyebrows.

Common Questions & Answers

When and how does vitiligo start? People can develop vitiligo at any age, but approximately one-half of cases are diagnosed either in childhood or before someone turns 20. The first signs of vitiligo are white patches on the skin, which can develop anywhere on the body, including on the face, arms, hands, genitals, and buttocks.

What are the symptoms of vitiligo? The most common symptom of vitiligo is light or depigmented spots on the skin. Other vitiligo symptoms include: hair turning gray or white prematurely, eyelashes or eyebrows losing color, change of color in the retina of the eye, and color loss in the nose and mouth.

Can vitiligo be cured? No, there is currently not a cure for vitiligo.
There are some medicines that can help restore some skin color in some cases, but their effectiveness depends on the individual case and how severe pigmentation loss is. In a small number of cases, light and laser therapies have been effective in returning skin color in people with vitiligo. Can you die from vitiligo? Vitiligo does not pose a serious threat to one's health, but it can result in physical complications, such as eye issues, hearing problems, and sunburn. People with vitiligo also tend to be more likely to have another autoimmune disease (like thyroid disorders and some types of anemia). The condition can also lead to emotional distress. Who gets vitiligo? Vitiligo affects people of all races and genders equally, though it's usually most noticeable in people with darker skin because of the contrast between the depigmented skin and the unaffected skin. There is evidence that vitiligo is hereditary, with 30 specific gene variations having been found to be linked to vitiligo. Signs and Symptoms of Vitiligo The biggest sign that someone may have vitiligo is the appearance of light or "depigmented" spots on the skin, says Suzanne Friedler, MD, a dermatologist in private practice in New York City. The pale patches are areas with little or no melanin, the skin's natural pigment. These spots can show up anywhere on the body, though they may first appear in areas that receive a lot of sun exposure, such as on the face, arms, feet, and hands. It's also not uncommon for white areas to appear in the groin, armpits, and around the belly button, says Mayo Clinic. 
Other signs of vitiligo can include:

Hair turning prematurely gray or white
Eyelashes or eyebrows losing color and turning white
Change of color in the retina of the eye, noted a study published in the November–December 2019 issue of the Indian Dermatology Online Journal
Color loss in the nose and mouth
Inflammation of the ears or eyes, leading to hearing loss and vision problems, according to NIAMS

Where skin spots appear, how widespread the condition becomes, and how much it will progress vary from person to person. There are two major types of vitiligo:

Nonsegmental Vitiligo

The most common type of vitiligo, with pale skin patches usually appearing on both sides of the body. The first signs may show up on hands, fingertips, wrists, around the eyes or mouth, or on the feet. Nonsegmental vitiligo is also called bilateral or generalized vitiligo or vitiligo vulgaris, according to an article published in September 2016 in the journal F1000 Research.

Nonsegmental vitiligo is divided into subtypes based on the way the condition shows up. These include acrofacial vitiligo, which appears on the face, hands, and feet; mucosal vitiligo, which affects the mucus membranes of the mouth, nose, and genitals; localized or focal vitiligo, which occurs on just a few areas of the body; and universal vitiligo, which may involve 80 to 90 percent of an affected person's skin, according to a review in the journal Dermatology.

Segmental Vitiligo

For this type, white patches often appear on just one side of the body, such as one arm or one leg instead of both. Loss of hair color is common. Segmental vitiligo can begin early in life. It may spread rapidly for six months to two years, then stop progressing. In rare cases, this form of vitiligo may become active again years later. About 5 to 16 percent of vitiligo cases are segmental vitiligo, notes the Dermatology review.

You may also have mixed vitiligo, a combination of nonsegmental and segmental vitiligo.
Importantly, vitiligo can cause significant psychological distress. Many people with vitiligo struggle with self-esteem, confidence, and social anxiety, especially if the vitiligo affects areas of the skin that are tough to hide under clothes or minimize with cosmetics, notes the NHS. "Vitiligo can have a significant effect on patients," says Adrienne Haughton, MD, clinical assistant professor of dermatology and director of medical and cosmetic dermatology at Stony Brook Medicine in Commack, New York. "Patients can be very self-conscious and even experience depression."

Learn More About the Signs and Symptoms of Vitiligo

Causes and Risk Factors for Vitiligo

Researchers now understand that vitiligo is an autoimmune disorder, in which the body destroys parts of itself. "It happens when a part of the immune system starts to attack and kill the pigment cells — melanocytes — in the skin, resulting in the formation of white patches," says Michelle Rodrigues, MBBS, a dermatologist in private practice in Melbourne, Australia. Melanocytes are cells that produce melanin, the pigment that gives skin, hair, and eyes their color, per MedlinePlus.

So why might the body's immune cells attack healthy skin cells in the first place? That question is still not entirely settled among researchers, according to the American Academy of Dermatology. But it seems likely that genetics and environmental triggers both play a role. These factors are known to increase risk for vitiligo:

Family History and Genes

About 20 percent of people with vitiligo have at least one close relative affected by this skin disorder, and researchers have found that having a certain genetic profile makes people more susceptible to developing vitiligo.
Variations in over 30 genes have been identified that are associated with vitiligo, including two called NLRP1 and PTPN22, per MedlinePlus. These and other genes now linked with vitiligo are known to be involved with immune-system regulation and inflammation. Environmental Triggers Vitiligo seems to be the result of both a preexisting genetic makeup and something in the environment setting off an autoimmune response that destroys melanocytes. Potential triggers include sunburn, exposure to certain chemicals, and trauma or injury to the skin, according to the article in F1000 Research. These triggers can also prompt vitiligo to spread in people who already have the condition. An Existing Autoimmune Disease People with an autoimmune disease, such as psoriasis, systemic lupus erythematosus, Hashimoto's disease, or alopecia areata, are at an increased risk of developing vitiligo, notes the American Academy of Dermatology. Several genes associated with vitiligo are also linked to other autoimmune conditions, such as rheumatoid arthritis, type 1 diabetes, and thyroid disease, according to NIAMS. Fifteen to 25 percent of people with vitiligo have another autoimmune disease, notes MedlinePlus. Learn More About What Causes Some People to Get Vitiligo How Is Vitiligo Diagnosed? If you suspect you may have vitiligo, visit your primary care doctor or a dermatologist. At your appointment, your doctor will likely ask about risk factors such as: Whether a close relative has been diagnosed with vitiligo Whether you have been diagnosed with an autoimmune disorder If you've experienced recent stress (such as a major life change) or other potentially triggering events (such as a severe sunburn), per Mayo Clinic Most of the time, doctors diagnose vitiligo by visually examining white patches on the skin and considering your medical history, according to NYU Langone. Your physician may use a Wood's lamp, which uses ultraviolet light to identify pigment loss. 
This lamp is especially useful for people with fairer skin where the difference in color is subtler. Some dermatologists will want to do more testing beyond a skin exam. Your doctor may order a skin biopsy, which will show whether melanocytes are present in the skin. A lack of melanocytes is an indication of vitiligo. Your doctor may also ask for a blood test to see if you have another autoimmune disease. Additionally, they may perform an eye exam for uveitis, a form of eye inflammation that can be associated with vitiligo.

Your doctor will also rule out other skin conditions that can look similar to vitiligo, such as skin damage from exposure to industrial chemicals, called chemical leukoderma; tinea versicolor, a yeast infection that can lighten or darken areas of skin; and albinism, a genetic condition marked by low levels of melanin in skin, hair, and eyes, notes Cleveland Clinic.

Prognosis for Vitiligo

While this skin condition cannot be cured, treatments can slow or stop its spread, spur some regrowth of melanocytes, and improve the appearance of patchy skin by returning some color to white areas, notes NIAMS. Cosmetics can reduce the appearance of vitiligo patches, too. And cognitive behavioral therapy can help you overcome the depression and social anxiety that this skin condition so often causes, suggests the review in Dermatology.

Duration of Vitiligo

Once vitiligo develops, it is usually a lifelong condition. You may have a 10 to 20 percent chance that your skin's natural color will be restored, especially if you are young, if your vitiligo developed in less than six months, and if the white patches are mostly on your face, according to the Cleveland Clinic.
Treatment and Medication Options for Vitiligo There is not currently a cure for vitiligo, says Michele Green, MD, a dermatologist in private practice in New York City. But a growing variety of treatment options can minimize the appearance of white skin spots. Nondrug and nonsurgical therapies include: Makeup and self-tanners, which can cover up white patches and hair dye to bring color back to graying or white hair Light therapy, specifically narrowband UVB, according to Dr. Haughton Medication and Surgery Options for Vitiligo In 2022 the U.S. Food and Drug Administration approved ruxolitinib (Opzelura), the first medication that can restore pigment in patients with nonsegmental vitiligo. This is a topical JAK inhibitor, which works to reduce the activity of the immune system. In a clinical trial, 50 percent of patients experienced significant improvements after one year of using the medication. Other medications may help minimize the appearance of vitiligo. These include: Corticosteroid creams, prescribed for the short-term, per the AAD Ointments containing immunomodulators tacrolimus or pimecrolimus, which can be used longer-term Topical vitamin D analogs (which are synthetic versions of the vitamin) Combination therapy with UVA light and the oral medication psoralen, which may be especially effective if you have large areas of skin affected by vitiligo (This form of light therapy is effective but more difficult to administer than UVB, notes Mayo Clinic.) Pigment removal from unaffected skin using monobenzone cream Some of these treatment options come with negative side effects, such as scarring, dry and itchy skin, and skin with a streaky appearance. Alternative and Complementary Therapies for Vitiligo There have also been a few research studies on alternative medicine options, such as treating the area with certain herbs and vitamins. 
But so far the studies have been too small to draw sweeping conclusions, says Hal Weitzbuch, MD, a dermatologist in private practice in Calabasas, California, and an adjunct professor of medicine at the University of California, Los Angeles. Don't rely on unproven natural remedies instead of getting the medical care you need for vitiligo, he says.

It's also important to note that many individuals do not require or want treatment to minimize or conceal their vitiligo patches, since the visible patches pose no physical risks to people with them, says the Vitiligo Society. Your doctor or dermatologist can help you decide which treatment option, if any, is best for you.

Learn More About How Doctors Diagnose and Treat Vitiligo

Prevention of Vitiligo

There's currently no way to prevent the onset of vitiligo, but there are steps you can take that may help keep symptoms from worsening. In addition to the treatment options mentioned above, protect your skin from the sun and UV light by using sunscreen, seeking shade, and wearing clothing that protects you from harmful rays. Cuts, scrapes, and burns can trigger patches of vitiligo in some people, notes the AAD, as can getting tattoos. In general, try to avoid injuring your skin.

Complications of and Conditions Related to Vitiligo

In general, people who have been diagnosed with vitiligo do not need to be overly worried about developing serious complications.

Vitiligo and Skin Cancer Risk

People with vitiligo — like the rest of the population — are encouraged to wear sunscreen (specifically a broad-spectrum, water-resistant option with an SPF of 30 or higher, per Mayo Clinic). Part of that is because skin without its natural color is more likely to burn in the sun.
A function of melanin (the pigment that gives skin color, which is missing in patches of skin in people with vitiligo) is to help block out some of the sun's dangerous ultraviolet rays, so skin without it may be more vulnerable to sun damage, according to the American Cancer Society. But sun protection is also important because avoiding getting tan can make vitiligo patches less noticeable, and some vitiligo treatments can be disrupted by sun exposure, notes the AAD.

Since the skin in the vitiligo-affected areas can burn more easily, it may be surprising to learn that instead of increasing skin cancer risk, vitiligo is associated with lower risk. A University of Amsterdam study published in the British Journal of Dermatology found a threefold lower risk for melanoma and nonmelanoma skin cancers in people with vitiligo compared with those without it.

There are a few theories for why this might happen. The same genes associated with vitiligo may also lower the risk of malignant melanoma, suggested a study published in Genome Medicine; a second theory posits that whatever's causing the immune system to destroy melanocytes also causes it to destroy cancerous cells, notes the Vitiligo Clinic and Research Center at UMass Chan Medical School in Worcester, Massachusetts.

It's good news for people with vitiligo, but it doesn't mean they should rely on their condition to give them absolute protection against the effects of the sun. Those with vitiligo simply don't need to be any more worried about skin cancer than the rest of the population, Dr. Rodrigues says.

Learn More About Vitiligo and Skin Cancer Risk

Vitiligo and Other Autoimmune Disorders

Up to one-quarter of patients with vitiligo have another autoimmune disease. If you have vitiligo, you may be at risk for an autoimmune disorder.
So it's important to discuss any new or unusual health issues you're experiencing with your primary care practitioner. Vitiligo does not cause other autoimmune conditions, but it may share a genetic basis with one. Here are some of the most common autoimmune diseases associated with vitiligo: Autoimmune thyroid disease Vitiligo and Mental Health Complications The other concern when it comes to vitiligo complications is the emotional toll of living with a very visible skin condition, especially one that can begin early in life. "It's a stigma — people have this aversion because it's not 'normal,'" says Sandy Skotnicki, MD, a dermatologist and assistant professor in the department of medicine at the University of Toronto. It can be especially difficult for people with darker skin, Dr. Skotnicki says, because the differences in skin tone are more obvious. For people with light skin, the presence of vitiligo may be less noticeable, Skotnicki adds. And for children and teens, it may be challenging to cope with vitiligo in the midst of other changes happening to their bodies, minds, and emotions — especially if their peers don't understand or respond sensitively to what's happening. For many, learning to deal with vitiligo means finding someone to talk to about the experience, whether that's a trusted doctor, close family members and friends, or a mental health professional, noted an article published in the Indian Journal of Dermatology. 
Learn More About the Complications of Vitiligo: How It Affects Your Body in the Short and Long Term

Research and Statistics: How Many People Have Vitiligo

Vitiligo affects between 0.5 and 1 percent of people around the world, notes MedlinePlus, though some researchers think it's closer to 1.5 percent, due to underreporting of cases. The disorder often begins early in life, with 25 percent of cases occurring in children younger than 10 years old, one-half happening in kids and teens younger than age 20, and up to 80 percent striking before age 30. It has developed in infants and in adults as old as those in their mid-fifties, according to data in the Dermatology article mentioned above.

Promising research is underway examining the genetic roots of vitiligo and testing compounds and treatments that may interrupt the autoimmune response, inflammation, and the destruction of melanocytes. Areas of current vitiligo research include:

Medication that promotes the growth of melanocytes
Medication intended to bring color back to the affected area, notes the Mayo Clinic
A skin-grafting surgery called noncultured epidermal cell suspension
Immune-targeting therapy to reverse the condition, per the review in Dermatology
Gene therapy that reprograms melanocytes to prevent an autoimmune reaction, according to an article in Nature

In addition to the investigation of these novel treatments, much of the latest vitiligo research has focused on gaining a better understanding of the genes involved in how the condition starts in the first place. By doing so, researchers hope to get closer to developing a treatment that prevents vitiligo from occurring or spreading.
Since something in the environment appears to be responsible for triggering vitiligo (as people are not born with the condition), researchers have also focused on understanding what those triggers are and why they incite such a response within the cells. Why People With Vitiligo Are Joining the Body Positive Movement While some people with vitiligo seek treatment to cover up or repigment their skin, others choose to embrace the condition however it shows up. Ash Soto falls into that camp. The twentysomething from Orlando, Florida, documents her experience with vitiligo on her Instagram page, which is over 150,000 followers strong. Soto was diagnosed with vitiligo at age 12 after she saw a white spot on her neck and then noticed another one appear within a few months. "I remember being really scared and confused," she says. Soto admits she was teased at school for the way her skin looked and says her vitiligo hurt her self-esteem and made her feel insecure. By her late teens, however, she had decided to embrace her skin and use it as a canvas for art, which she shares photos of on Instagram. Her photos are accompanied by inspirational captions that promote a love-yourself mentality. The body positive movement is all about self-acceptance, so it's been a natural fit for people who want to embrace their vitiligo. Some well-known people have been open about their vitiligo — including the model Winnie Harlow, the ballet dancer Michaela DePrince, Breanne Rice from The Bachelor, and actor Jon Hamm — and this has helped bring vitiligo into the spotlight, notes the Vitiligo Society. With this raised awareness, people may become more accepting of those living with the condition. As for Soto, she's all for vitiligo being included in the body positive movement. "When I was younger, I didn't have anybody to look up to," she says. "It's so important for us to raise awareness for kids who are being diagnosed now." 
Since vitiligo doesn't usually go away over time, it's important that vitiligo patients develop coping strategies by learning about the condition and connecting with others who are living with it, too.

Learn More About Ash Soto and Vitiligo's Role in the Body Positive Movement

Favorite Orgs for Essential Vitiligo Info

American Academy of Dermatology (AAD)

With a membership of over 20,500 physicians worldwide, the AAD is dedicated to advancing treatment for dermatological conditions and advocating for patients. Their site offers a comprehensive, accessible overview of vitiligo, as well as information on treatment options and self-care tips.

American Vitiligo Research Foundation

Stella Pavlides started this charitable foundation in 1995 and has continued to run it since. Pavlides acknowledges that the condition can lead to bullying and insensitive comments from others, which makes it difficult to live with, especially for children. Her goal is to build public awareness of vitiligo, promote inclusion, and raise the spirits of those afflicted. The Foundation supports vitiligo research, hosts events, and shares information about vitiligo on its website.

Vitiligo Support International

The nonprofit organization promotes vitiligo research and treatment options. The treatments section of the site is particularly useful and outlines traditional options, including topical and surgical therapies, as well as more homeopathic ones.

American Autoimmune Related Diseases Association (AARDA)

Learn more about over 100 autoimmune diseases, including symptoms and how to get a diagnosis, on the website of this education and advocacy organization.

Favorite Vitiligo Blogs

Living Dappled

What's it really like to live with vitiligo?
Erika Page, editor in chief of Living Dappled, shows you. She puts a refreshingly positive spin on living with vitiligo and fills the site with news about the condition, interviews, as well as practical tips about how to manage the condition every day. The site includes tips on how to care for your skin and how to cope with moments when you're feeling down. 'Speaking of Vitiligo …' This blog is run and written by John E. Harris, MD, PhD, associate professor in dermatology and director of the Vitiligo Clinic and Research Center at the University of Massachusetts Medical School in Worcester. Dr. Harris shares his takes on new treatments, unanswered research questions, and what's going on in the vitiligo community. Favorite Vitiligo Support Networks and Online Communities Daily Strength Daily Strength hosts support groups for many different types of health issues and conditions. The vitiligo page offers a place for people with vitiligo to connect about their experience and discuss everything from treatments that have worked for them to worries and fears about managing the condition. VITFriends Interested in connecting with other people with vitiligo? Visit the "support group" page of this site and click on the city nearest you for information about local support groups. Vitiligo Friends Vitiligo Friends is a network of more than 7,000 members. The online community has been active since 2007. You'll need to sign up to become a member, but once you're approved, you'll have access to message boards where you can connect with other members and lean on one another for support. Favorite Camp for Kids With Vitiligo AAD Camp Discovery This weeklong camp from the American Academy of Dermatology is open to children between ages 8 and 16 who have a chronic skin condition, such as vitiligo, alopecia, or psoriasis. All fees for camp, including transportation, are covered by the AAD. A dermatologist's referral is needed to attend. 
AAD Camp Discovery takes place in five locations each summer and gives children the opportunity to swim, fish, horseback ride, and have fun. A dermatologist is on site to ensure each child's health needs are met. With additional reporting by Sari Harrar. Editorial Sources and Fact-Checking Vitiligo. American Academy of Dermatology. June 2022. Vitiligo. National Institute of Arthritis and Musculoskeletal and Skin Diseases. May 2019. Vitiligo: Symptoms and Causes. Mayo Clinic. May 2022. Prabha N, Chhabra N, Shrivastava AK, et al. Ocular Abnormalities in Vitiligo Patients: A Cross-Sectional Study. Indian Dermatology Online Journal. November–December 2019. Manga P, Elbuluk N, Orlow SJ. Recent Advances in Understanding Vitiligo. F1000 Research. 2016. Bergquist C, Ezzedine K. Vitiligo: A Review. Dermatology. March 10, 2020. Complications of Vitiligo. NHS Choices. November 5, 2019. Melanin. MedlinePlus. August 25, 2020. Vitiligo. MedlinePlus. February 2022. Diagnosing Vitiligo. New York University Langone Health. Vitiligo: Diagnosis and Treatment. Mayo Clinic. May 2022. Vitiligo. Cleveland Clinic. January 13, 2020. Vitiligo: Self-Care. American Academy of Dermatology. June 2022. What You Need to Know About Vitiligo. Vitiligo Society. Are Some People More Likely to Get Skin Damage From the Sun? American Cancer Society. July 29, 2019. Teulings HE, Overkamp M, Ceylan E, et al. Decreased Risk of Melanoma and Nonmelanoma Skin Cancer in Patients With Vitiligo: A Survey Among 1,307 Patients and Their Partners. The British Journal of Dermatology. January 2013. Spritz RA. The Genetics of Generalized Vitiligo: Autoimmune Pathways and an Inverse Relationship With Malignant Melanoma. Genome Medicine. October 19, 2010. I Have Vitiligo, Will I Get Skin Cancer? UMass Medical School Vitiligo Clinic and Research Center. July 19, 2014. Zoysa P. Psychological Interventions in Dermatology. Indian Journal of Dermatology. January–February 2013. Schmidt C. 
Temprian Therapeutics: Developing a Gene-Based Treatment for Vitiligo. Nature. June 30, 2020.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,299
Q: Passing Data from Android Notifications I have built a notification and am displaying it properly, but I can't figure out how to pass data to the activity. I pulled one string from the intent to display as the title of the notification, but I need to pull a 2nd string, and have the NotificationHandlerActivity process it.

//inside intentservice
private void sendNotification(Bundle extras) {
    Intent intent = new Intent(this, NotificationHandlerActivity.class);
    intent.setFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP | Intent.FLAG_ACTIVITY_CLEAR_TOP);
    intent.putExtra("link", extras.getString("link"));
    mNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
    PendingIntent contentIntent = PendingIntent.getActivity(this, 0, intent, 0);
    long[] vibrate = {100L, 75L, 50L};
    NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(this)
        .setSmallIcon(R.drawable.abc_ic_menu_copy_mtrl_am_alpha)
        .setContentTitle(extras.getString("title"))
        .setOngoing(false)
        .setAutoCancel(true)
        .setVibrate(vibrate);
    mBuilder.setContentIntent(contentIntent);
    mNotificationManager.notify(NOTIFICATION_ID, mBuilder.build());
}

//Inside NotificationHandlerActivity
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Bundle b = getIntent().getExtras();
}

A: You should use the extras on your Intent. Because your intent currently carries no extra for that second string, the receiving activity cannot read it. Extras are basic key-value stores. See below:

public static final String KEY_SECOND_STRING = "keySecondString";
...
...
String secondString = "secondString";
mNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
Intent intent = new Intent(this, NotificationHandlerActivity.class);
intent.putExtra(KEY_SECOND_STRING, secondString);
PendingIntent contentIntent = PendingIntent.getActivity(this, 0, intent, 0);
...

then, from your NotificationHandlerActivity, you can access that secondString from the intent.
@Override
public void onCreate(Bundle sIS) {
    super.onCreate(sIS);
    String secondString = getIntent().getStringExtra("keySecondString");
    ...
}

A: use it like this:

Intent intent = new Intent(this, NotificationHandlerActivity.class);
intent.putExtra("key", "value");
mNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
PendingIntent contentIntent = PendingIntent.getActivity(this, 0, intent, 0);
long[] vibrate = {100L, 75L, 50L};
NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(this)
    .setSmallIcon(R.drawable.abc_ic_menu_copy_mtrl_am_alpha)
    .setContentTitle(extras.getString("title"))
    .setOngoing(false)
    .setAutoCancel(true)
    .setVibrate(vibrate);
mBuilder.setContentIntent(contentIntent);
mNotificationManager.notify(NOTIFICATION_ID, mBuilder.build());

A: Apparently using 0 as the requestCode when calling getActivity() on the PendingIntent isn't the way to go. I updated it to just use System.currentTimeMillis() and that seemed to work. I'm assuming that the first time I built the notification, using 0 as the requestCode, the extra wasn't present.
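The answers above combine two ideas: extras travel with the Intent as key/value pairs, and the PendingIntent request code must vary (or FLAG_UPDATE_CURRENT must be set) for fresh extras to be delivered. Intent and Bundle are Android framework classes, so the following is only a plain-Java sketch of that pattern — the Map stands in for the Bundle, and the class name NotificationExtrasSketch is invented for this illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java stand-in for the Intent-extras handoff (illustration only;
// on Android you would call intent.putExtra / getIntent().getStringExtra).
class NotificationExtrasSketch {

    // A shared key constant avoids typo mismatches between sender and receiver.
    static final String KEY_LINK = "link";

    // Sender side: equivalent to intent.putExtra(KEY_LINK, link).
    static Map<String, String> buildExtras(String link) {
        Map<String, String> extras = new HashMap<>();
        extras.put(KEY_LINK, link);
        return extras;
    }

    // Receiver side: equivalent to getIntent().getStringExtra(KEY_LINK).
    static String readLink(Map<String, String> extras) {
        return extras.get(KEY_LINK);
    }

    // Unique, non-negative request code per notification, in the spirit of
    // the last answer: reusing 0 can hand back a cached PendingIntent whose
    // extras predate the new ones.
    static int uniqueRequestCode() {
        return (int) (System.currentTimeMillis() & 0x7fffffff);
    }
}
```

On a real device, the commonly suggested variant of the same fix is PendingIntent.getActivity(context, uniqueRequestCode, intent, PendingIntent.FLAG_UPDATE_CURRENT), so the cached PendingIntent is refreshed with the latest extras.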
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,237
anti-land mafia women rescue vans UP CM launches anti-land mafia portal Launching the anti-land mafia portal, the CM said the government had launched a special drive in which as many as 1,035 land mafias had been identified. TNN | June 25, 2017, 09:19 IST LUCKNOW: Chief minister Yogi Adityanath on Saturday launched an anti-land mafia portal and flagged off 61 women rescue vans at separate functions in the state capital. Both the schemes were in line with the BJP's election manifesto where the party promised to initiate stringent action against land-grabbers and assured all possible help to women in distress. Launching the anti-land mafia portal, the CM said the government had launched a special drive in which as many as 1,035 land mafias had been identified. He said more than 1.5 lakh properties had been identified which had been encroached upon by them. "The government has initiated action against such land-grabbers under the stringent Anti Gangsters' Act and Goonda Act," Yogi said, adding that the process was under way to digitize all land records to check land-grabbing in future. At another function, the CM flagged off 61 Women Rescue Vans which are linked to the women helpline 181 to respond to distress calls. "We have promised to provide an atmosphere where women would no longer fall prey to unscrupulous anti-social elements and the rescue vans are an extension to the anti-Romeo squads that the government has formed to look after women in need," he said. Tags : Regulatory, Yogi Adityanath, anti-land mafia, powerline, launch, women rescue vans, Lucknow
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,951
Q: Conda functions in terminal (Mac) won't work: KeyError('pkgs_dirs') I cannot run Conda in Terminal (Mac) or many of its commands. Only conda activate <env> or conda deactivate run that I know of, and I can run Jupyter from my chosen environment thankfully. Otherwise I get the following error: conda info could not be constructed. KeyError('pkgs_dirs') I'm at a complete loss. Should I uninstall Anaconda and reinstall it? However, I cannot use any uninstall commands from Conda—a catch-22. Here is the complete error report: ERROR REPORT Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/conda/exceptions.py", line 1118, in __call__ return func(*args, **kwargs) File "/opt/anaconda3/lib/python3.7/site-packages/conda/cli/main.py", line 57, in main_subshell p = generate_parser() File "/opt/anaconda3/lib/python3.7/site-packages/conda/cli/conda_argparse.py", line 41, in generate_parser description='conda is a tool for managing and deploying applications,' File "/opt/anaconda3/lib/python3.7/site-packages/conda/cli/conda_argparse.py", line 117, in __init__ self._subcommands = context.plugin_manager.get_hook_results("subcommands") File "/opt/anaconda3/lib/python3.7/site-packages/conda/base/context.py", line 421, in plugin_manager from ..plugins.manager import get_plugin_manager File "/opt/anaconda3/lib/python3.7/site-packages/conda/plugins/manager.py", line 9, in <module> from . import solvers, virtual_packages File "/opt/anaconda3/lib/python3.7/site-packages/conda/plugins/virtual_packages/__init__.py", line 5, in <module> from . 
import archspec, cuda, linux, osx, windows File "/opt/anaconda3/lib/python3.7/site-packages/conda/plugins/virtual_packages/cuda.py", line 3, in <module> import ctypes File "/opt/anaconda3/lib/python3.7/ctypes/__init__.py", line 7, in <module> from _ctypes import Union, Structure, Array ImportError: dlopen(/opt/anaconda3/lib/python3.7/lib-dynload/_ctypes.cpython-37m-darwin.so, 0x0002): Library not loaded: @rpath/libffi.6.dylib Referenced from: <66269FD6-56F9-32A5-8A25-F986023901A1> /opt/anaconda3/lib/python3.7/lib-dynload/_ctypes.cpython-37m-darwin.so Reason: tried: '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), 
'/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/lib/python3.7/lib-dynload/../../libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), 
'/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/lib/python3.7/lib-dynload/../../libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/bin/../lib/libffi.6.dylib' (no such 
file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/bin/../lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS@rpath/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no 
such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/lib/python3.7/lib-dynload/../../libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), 
'/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/lib/python3.7/lib-dynload/../../libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), 
'/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/bin/../lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/concourse/worker/volumes/live/d698b2ce-b4b9-4fb4-6268-e633fba1b324/volume/python_1565725718142/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libffi.6.dylib' (no such file), '/opt/anaconda3/bin/../lib/libffi.6.dylib' (no such file), 
'/usr/local/lib/libffi.6.dylib' (no such file), '/usr/lib/libffi.6.dylib' (no such file, not in dyld cache) With the result that $ /opt/anaconda3/bin/conda environment variables: conda info could not be constructed. KeyError('pkgs_dirs') An unexpected error has occurred. Conda has prepared the above report. Recent Environment History conda-meta/history ==> 2022-03-01 15:37:32 <== # cmd: /opt/anaconda3/bin/conda update anaconda # conda version: 4.11.0 # update specs: ['anaconda'] # neutered specs: ['anaconda'] ==> 2022-03-01 16:01:48 <== # cmd: /opt/anaconda3/bin/conda update anaconda-navigator # conda version: 4.11.0 # update specs: ['anaconda-navigator'] ==> 2022-11-17 18:42:41 <== # cmd: /opt/anaconda3/bin/conda install --yes --json --force-pscheck --prefix /opt/anaconda3 anaconda-navigator==2.3.2 # conda version: 4.11.0 # update specs: ['anaconda-navigator==2.3.2'] ==> 2022-12-08 04:32:09 <== # cmd: /opt/anaconda3/bin/conda update conda # conda version: 22.9.0 # update specs: ['conda'] I think the problem occurred after my last update, which was a few weeks ago in December and concerning Conda.
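The traceback above bottoms out in a single failure: `_ctypes` cannot load `libffi.6.dylib`, and dyld then lists every location it tried. That wall of "tried:" paths is easier to digest once reduced to the distinct candidates. A small reading aid (not part of conda; the function name and sample message below are invented for this sketch — the underlying fix is still restoring libffi, e.g. by reinstalling Anaconda):

```python
import re

def tried_paths(dyld_message: str) -> list[str]:
    """Extract the distinct paths from a dyld 'Reason: tried: ...' blob.

    dyld lists each candidate as '<path>' (no such file...), so we capture
    the quoted path and de-duplicate while preserving order.
    """
    paths = re.findall(r"'([^']+)' \(no such file", dyld_message)
    seen, unique = set(), []
    for p in paths:
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique

# Abbreviated sample in the same shape as the error report above.
msg = ("tried: '/usr/local/lib/libffi.6.dylib' (no such file), "
       "'/usr/lib/libffi.6.dylib' (no such file, not in dyld cache), "
       "'/usr/local/lib/libffi.6.dylib' (no such file)")
print(tried_paths(msg))
```

Running this on the full report collapses the hundreds of repeated `placehold` build paths into a short list, which makes it obvious that no copy of `libffi.6.dylib` exists anywhere on the machine.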
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,390
Recently launched in the worldwide market, the Huawei Mate S has "Touch Control" as its core feature. The Mate S is without a doubt going to create a new experience for customers, upgrading how they interact with the device. When viewing photos, customers can quickly preview and enlarge snaps by pressing the screen with one finger, rethinking the elements of a conventional phone. Another progressive element of the Huawei Mate S is that the phone can also be used as a scale to weigh objects. The phone is equipped with Fingerprint 2.0, an updated version of the advanced chip-level security and one-key unlock technology in the Huawei Mate 7. Fingerprint 2.0 improves recognition speeds by 100 percent, with more precise self-learning functions. It can also be used to control the notification bar, double-tap to delete unread notifications, slide to view pictures, and hold to take phone calls. These options improve one-handed operation of the phone. Fingerprint 2.0 and Knuckle Control 2.0 – initially presented in the Huawei P8 – simplify how customers switch between application operations and take screenshots. This feature offers customers another way to interact with their phone: drawing a "C" with a knuckle activates the camera, while double-tapping the screen with a knuckle records the screen as a video. With these options, each sort of touch provides a chance to be creative. With the features above, Huawei has ensured that the Mate S is one of the most striking smartphones released in 2015, and the Mate S looks set to take over the smartphone market like wildfire and set a new sales record for Huawei. The company plans to unveil three brand-new devices, the Mirror 5, R7 Plus and R7 Lite, and the launch occasion will conclude with stellar fashion shows featuring designers Munib Nawaz and Shiza Hassan.
The occasion will be attended by celebrities and technology enthusiasts alike, and OPPO looks forward not just to introducing the crowd to its most recent smartphones; mobile phone dealers are also coming from nationwide. With the launch of the three new devices in Pakistan, the company aims to keep bringing its best technology to the country. All phones boast fast Qualcomm processors and sleek designs, with the Mirror 5 using a solid 8 MP camera and a diamond-cut-like back cover, while the R7 Series features a metallic unibody along with a 2.5D curved screen. The R7 Plus in particular comes with VOOC Flash Charge, which allows a two-hour call after five minutes of charging, and it is equipped with laser autofocus sensors that can take a snapshot in just 0.3 seconds. This latest technology and meticulous design add more credit to OPPO's slogan: the art of technology. Samsung Pakistan has inaugurated an "Experience Zone" in Fortress Square Mall, Lahore. The ceremony was held on the fourth of September. A few activities were conducted to engage visitors, and the winners were awarded Samsung products. Samsung already has numerous Experience Zones in the country. The company's fundamental aim with this venture is to bring Samsung's quality products directly to customers. Experience Zone outlets are fully stocked with Samsung products, and the assistants are trained to guide customers and provide important details about specific products. The idea of experience stores is rooted in marketing; every top brand tries to open experience zones in the most visited malls and markets. These stores help customers find all products in one place: smartphones, tablets, laptops and cameras, alongside various Samsung accessories, at Samsung's experience store.
The interesting part is that you get expert guidance from the assistants, who will help you find the best product for your needs. After the House of Representatives passed a bill to give U.S. mobile phone users the right to unlock their devices, all that remains now is for President Barack Obama to sign off on the bill and make it law. The Senate has already waved the legislation through. "The bill passed Congress ... is a further step towards giving ordinary Americans more flexibility and choice, so that they can find a wireless service provider that meets their needs and their budget," Obama said. Importance Of Information Technology in Pakistan: - Telecom network vendor Ericsson today said its 5G delivered 5Gbps speeds during a live, over-the-air demonstration at the Ericsson Lab in Kista, Sweden, in the presence of NTT DOCOMO and SK Telecom management. The performance is important for Ericsson because vendors such as Huawei and Nokia are also eyeing dominance in the 5G space, which is expected to reach the market in 2020. Latest Information Technology News in Pakistan: - Intel Pakistan recently held an Intel Education Solutions Workshop in collaboration with Viper Technology, where educators from public and private institutions all over Lahore were given a detailed briefing on the Intel Classmate PC, according to a press release from Intel on Monday. The delegates were also briefed on the classroom on wheels (COW) concept. COW packs a complete laboratory into a shopping cart with its own emergency power supply, the release said. All systems are connected to each other, which makes the learning experience more fun while giving teachers complete control over the students' systems. Developers can try new features of the next version of Internet Explorer with a test build Microsoft has released for their use.
If you keep track of the terms of Microsoft's customer service contracts, you know (I know, it's like logging the growth of the grass in your backyard) that the terms of use went through a limited but embarrassment-induced renovation in March, after Microsoft admitted that it had examined the Hotmail account of a blogger in response to the Kibkalo Windows 8 leaks. Russian Alex Kibkalo, whose case was heard in Bellevue, has drawn a sentence of time served plus one week, followed by deportation. We pay penance with yet another update. IT Pakistan: - At a regional conference in Dubai, Microsoft Devices announced the availability of its first Lumia devices running Windows Phone 8.1 in the Middle East and North Africa. The Nokia Lumia 630, available in both Dual SIM and Single SIM variants, and the Nokia Lumia 635, with support for 4G, are the first Lumia devices running Windows Phone 8.1 available in the region.
The City of Boston Archives preserves and makes accessible the permanent and historical records of Boston city government. Our holdings include City Council records, Mayoral records, meeting minutes, correspondence, reports, tax records, photographs and much more. Our catalog contains detailed descriptive information about the collections held in the City Archives. Links to online digital access are included when available. For specific questions about our holdings, please contact us. Search the City's archival holdings for detailed information contained in finding aids and collection descriptions. To browse the catalog, use the options in the navigation bar above.
Scotland suffered their first defeat in European Under-21 Championship Qualifying Group 3 as an impressive French team claimed their first win. Ricky Sbragia's side handed their feted visitors the lead when Stephen Kingsley's clearance struck team-mate Stuart Findlay and rolled into the net. After the break, Corentin Tolisso burst into the area and smashed a drive past Jack Hamilton for France's second. The hosts had Ryan Gauld sent off before Billy King's late consolation. The French join Scotland on three points from two games, with Iceland top on 10 points after four games. Scotland were on the front foot from kick-off, with Hibernian striker Jason Cummings feeding Hearts' Sam Nicholson, who could not get his shot away quickly enough as Athletic Bilbao centre-back Aymeric Laporte moved in. France then suffered an early set-back when Tiemoue Bakayoko hobbled off to be replaced by Monaco team-mate Thomas Lemar. Bayern Munich first-teamer Kingsley Coman also left the fray during the first half. The substitute made an immediate impression with a low centre between goalkeeper and defence, which Kingsley attempted to clear but only succeeded in hitting off Findlay, and the ball trickled home. Scotland needed inspiration and Gauld almost delivered with a left-foot curler which crashed off the crossbar. The French were always a threat, though. Lemar scuffed a shot wide, Paris St-Germain starlet Adrien Rabiot almost converted after a rapid break from the back, and only a last-gasp challenge from Inverness' Ryan Christie prevented Utrecht striker Sebastien Haller from an easy tap-in. Scotland, though, struggled to get their own influential performers on the ball with any regularity as France enjoyed possession. Hearts defender Jordan McGhee did almost touch home from close range after some Gauld trickery earned a free-kick, but the ball was directed straight into Mouez Hassen's arms.
The visitors appeared to step up their performance in the second half and were rewarded with a brilliant individual goal from Tolisso, who surged into the box, beat a couple of challenges and fired a superb angled strike home. That unsettled the Scots, with Lemar almost adding a third moments later before Gauld was red-carded for taking out Guingamp's Marcus Coco from behind as France broke forward in numbers. King did stab home a stoppage-time goal from close range but the outcome was never really in doubt. Sbragia's side now face a vital home match against Iceland on Tuesday. Scotland Under-21: Hamilton, Paterson, Kingsley, Findlay, McGhee, McGinn, Christie, Nicholson (King 68), Slater (Shankland 58), Gauld, Cummings (McManus 78). Subs Not Used: R Fulton, J Fulton, Love, Hyam. France Under-21: Hassen, Mendy, Laporte, Amavi, Pavard, Rabiot, Bakayoko (Lemar 8), Tolisso, Walter, Coman (Coco 23), Haller (Crivelli 90). Subs Not Used: Maignan, Guilbert, Lenglet, Jean.
Q: A long table on two pages I have the following table which I would like to span over 2+ pages. I have tried using longtable, ltxtable, tabu, etc, but must be doing something wrong. It was originally a tabularx table, and I would like to keep the original formatting as much as possible (as other contributors should be adding to the table). How can I get this to span multiple pages with minimal modification to the contents of the table, (i.e. keeping hdashline, multicolumn, etc)? Thanks. PS - (I would like the caption to be at the bottom of the figure/page as that is how it is in the rest of the document.) Example % Example \documentclass[a4paper, 11pt]{article} \usepackage{amsmath} % Nice maths symbols. \usepackage{amssymb} % Nice variable symbols. \usepackage{array} % Allow for custom column widths in tables. \usepackage{arydshln} % Dashed lines using \hdashline \cdashline \usepackage{bbm} % Gives Blackboard fonts. \usepackage{bm} % Bold math symbols. \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} % Caption figures and tables nicely. \usepackage{enumitem} % Nice listing options in itemize and enumerate. \usepackage{esdiff} % Gives nice differential operators. \usepackage{esvect} % Gives nice vector arrows. \usepackage{float} % Nice figure placement. \usepackage{geometry} % Use nice margins. \usepackage{graphicx} % Include figures. \usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue,citecolor=blue,anchorcolor=blue]{hyperref} % Colour links. \usepackage{indentfirst} % Indents the first paragraph. \usepackage{letltxmacro} % For defining a nice SQRT symbol. \usepackage{multirow} % Nice table cells spanning many rows. \usepackage{multicol} % If I want to use multiple columns. \usepackage[numbers, sort&compress]{natbib} % Nice references. \usepackage{physics} % Nice partial derivatives and BRAKET notation. \usepackage{subcaption} % Side by side figures. \usepackage{tabularx} % Tables with justified columns. 
\usepackage{tikz} % For drawing pictures. % Gives nice margins. \geometry{left=20mm, right=20mm, top=20mm, bottom=20mm} % Removes hyphenation \tolerance=1 \emergencystretch=\maxdimen \hyphenpenalty=10000 \hbadness=10000 % Custom column widths using C{2cm}, L, R, etc. \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{document} \begin{table}[hbt] \begin{center} \footnotesize % Try to input a short description (no more than 1 or 2 lines) of algorithms in alphabetical order. \begin{tabularx}{\textwidth}{|L{0.4\textwidth}|X|} \hline \multicolumn{1}{|c|}{Algorithm} % use \texttt{text} instead of \verb|text| to allow line breaks for long function names. & \multicolumn{1}{c|}{Description} \\ \hline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. 
\\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. 
\\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hdashline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. 
\\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hline \end{tabularx} \caption{Available algorithms for solving the travelling salesman problem, giving the function name and a brief description.} \label{tab:brief_algorithm_descriptions} \end{center} \end{table} \end{document}

A: Some comments:

* You mustn't nest a longtable inside a table environment.
* If you're going to use a longtable, don't ignore the need to provide material for the \endfirsthead, \endhead, \endfoot, and \endlastfoot markers.
* With a multi-page table, it's customary to provide the table caption at the top rather than at the bottom of the table.
* I don't think it would be right to allow linebreaks for the material in the first column. Thus, I suggest you use the plain l column type for the first column.
* I'd give the tabular material a more open look by removing all \hdashline instructions, replacing them with a bit more whitespace. The center vertical line seems like it's unneeded.
* Since all entries in the first column (other than the header material) should be typeset using monospaced font, you can save yourself a lot of typing of \texttt{...} "wrappers" by changing the column specification from l to >{\ttfamily}l. \documentclass[a4paper, 11pt]{article} % I've condensed the preamble to the bare minimum. \usepackage{array} \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} \usepackage{longtable} \usepackage[margin=20mm]{geometry} \newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}} \begin{document} \begingroup \setlength\extrarowheight{3pt} % for a more open look \small \begin{longtable}{|>{\ttfamily}l L{0.62\textwidth}|} %% header and footer material \caption{Available algorithms for solving the travelling salesman problem, giving the function name and a brief description.} \label{tab:brief_algorithm_descriptions}\\ \hline \multicolumn{1}{|l}{Algorithm} & Description \\ \hline \endfirsthead \multicolumn{2}{l}{Table \ref{tab:brief_algorithm_descriptions}, continued}\\[2ex] \hline \multicolumn{1}{|l}{Algorithm} & Description \\ \hline \endhead \hline \multicolumn{2}{r}{\em Continued on following page}\\ \endfoot \hline \endlastfoot %% body of longtable edgeupperbound.m & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ FD\_bruteForce.m & Explores all possible paths recursively tracking the total distance of smaller networks. \\ FD\_dynamicProgramming.m & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ FD\_greedy.m & Greedy algorithm increases the path size by including the nearest city. \\ FD\_LPTSP.m & Integer linear programming solver. \\ FD\_LPTSPit.m & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles.
\\ FD\_stochastic.m & Random permutations of paths are considered for a given number of trials. \\ forcefully\_increasing\_loop.m & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ greedy\_algorith\_TSP.m & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ greedy\_algorith\_TSP\_all.m & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ increasing\_loop.m & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ IntLinProgCutSetTSP.m & Integer programming method without the cut-set constraint. \\ linprogtsp2.m & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ nearestneigh.m & A greedy nearest neighbours algorithm added the shortest possible edge. \\ optimal\_greedy\_TSP.m & A greedy nearest neighbours algorithm using a specific starting point. \\ RG\_stochastic.m & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ search\_permutations.m & Exhaustive search over all path permutations. \\ stochastic\_TSP.m & Randomly generates paths, using the shortest over a fixed number of iterations. \\ tabu\_search.m & Performs a Tabu search. \\ tsp\_ip\_no\_cut\_set\_oliver.m & Integer programming solver which allows sub-loops.\\ tsp\_lp\_no\_cut\_set\_oliver.m & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ two\_opt\_search.m & Performs a 2-opt local search. \\ TwoHeadedSnake.m & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ twoopt.m & Performs a 2-opt local search. \\ edgeupperbound.m & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. 
\\ FD\_bruteForce.m & Explores all possible paths recursively tracking the total distance of smaller networks. \\ FD\_dynamicProgramming.m & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ FD\_greedy.m & Greedy algorithm increases the path size by including the nearest city. \\ FD\_LPTSP.m & Integer linear programming solver. \\ FD\_LPTSPit.m & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ FD\_stochastic.m & Random permutations of paths are considered for a given number of trials. \\ forcefully\_increasing\_loop.m & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ greedy\_algorith\_TSP.m & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ greedy\_algorith\_TSP\_all.m & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ increasing\_loop.m & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ IntLinProgCutSetTSP.m & Integer programming method without the cut-set constraint. \\ linprogtsp2.m & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ nearestneigh.m & A greedy nearest neighbours algorithm added the shortest possible edge. \\ optimal\_greedy\_TSP.m & A greedy nearest neighbours algorithm using a specific starting point. \\ RG\_stochastic.m & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ search\_permutations.m & Exhaustive search over all path permutations. \\ stochastic\_TSP.m & Randomly generates paths, using the shortest over a fixed number of iterations. \\ tabu\_search.m & Performs a Tabu search. 
\\ tsp\_ip\_no\_cut\_set\_oliver.m & Integer programming solver which allows sub-loops.\\ tsp\_lp\_no\_cut\_set\_oliver.m & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ two\_opt\_search.m & Performs a 2-opt local search. \\ TwoHeadedSnake.m & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ twoopt.m & Performs a 2-opt local search. \\ \end{longtable} \endgroup \end{document} A: Here is a way to do it with the ltablex package, which brings the functionalities of longtable to tabularx. Note it has to be loaded before arydshln. I replaced the \multicolumn{1}{c|} with a simple centering. \documentclass[a4paper, 11pt]{article} \usepackage{amsmath} % Nice maths symbols. \usepackage{amssymb} % Nice variable symbols. \usepackage{ltablex} % Tables with justified columns. long tabularx \usepackage{array} % Allow for custom column widths in tables. \usepackage{arydshln} % Dashed lines using \hdashline \cdashline \usepackage{bbm} % Gives Blackboard fonts. \usepackage{bm} % Bold math symbols. \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} % Caption figures and tables nicely. \usepackage{enumitem} % Nice listing options in itemize and enumerate. \usepackage{esdiff} % Gives nice differential operators. \usepackage{esvect} % Gives nice vector arrows. \usepackage{float} % Nice figure placement. \usepackage{geometry} % Use nice margins. \usepackage{graphicx} % Include figures. \usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue,citecolor=blue,anchorcolor=blue]{hyperref} % Colour links. \usepackage{indentfirst} % Indents the first paragraph. \usepackage{letltxmacro} % For defining a nice SQRT symbol. \usepackage{multirow} % Nice table cells spanning many rows. \usepackage{multicol} % If I want to use multiple columns. \usepackage[numbers, sort&compress]{natbib} % Nice references. \usepackage{physics} % Nice partial derivatives and BRAKET notation. 
\usepackage{subcaption} % Side by side figures. \usepackage{tikz} % For drawing pictures. % Gives nice margins., showframe \geometry{margin=20mm} % Removes hyphenation \tolerance=1 \emergencystretch=\maxdimen \hyphenpenalty=10000 \hbadness=10000 % Custom column widths using C{2cm}, L, R, etc. \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{document} {\footnotesize\setlength\extrarowheight{1.5pt}\keepXColumns % Try to input a short description (no more than 1 or 2 lines) of algorithms in alphabetical order. \begin{tabularx}{\textwidth}{|L{0.4\textwidth}| >{\arraybackslash}X|} \caption{Available algorithms for solving the travelling salesman problem, giving the function name and a brief description.} \label{tab:brief_algorithm_descriptions}\\ \hline \centering Algorithm % use \texttt{text} instead of \verb|text| to allow line breaks for long function names. & \centering Description \tabularnewline \hline \endfirsthead \hline \centering Algorithm % use \texttt{text} instead of \verb|text| to allow line breaks for long function names. & \centering Description \tabularnewline \hline \endhead \multicolumn{2}{r}{\em To be continued} \endfoot \hline \endlastfoot \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. 
\\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. 
\\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hdashline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. 
\\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \end{tabularx}} \end{document} A: You are creating a table inside another table in your code. 
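For reference, the nesting conflict can be reduced to a minimal sketch (a hypothetical example, not the asker's actual table): once a longtable-based environment is in play — for instance tabularx after loading ltablex — it must appear at the top level of the document, not inside a floating table environment, because only a top-level table can break across pages.

```latex
\documentclass{article}
\usepackage{ltablex} % makes tabularx longtable-based, so it may break across pages

\begin{document}

% Problematic: a longtable-based environment nested inside a float
% \begin{table}
%   \begin{tabularx}{\textwidth}{|l|X|} A & B \\ \end{tabularx}
% \end{table}

% Correct: place it at the top level so page breaking can work
\begin{tabularx}{\textwidth}{|l|X|}
A & B \\
\end{tabularx}

\end{document}
```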
If I have understood it right, this code is doing what you want: \documentclass[a4paper, 11pt]{article} % I removed some packages from the list which are not needed for this table \usepackage{longtable,ltablex} \usepackage{array} \usepackage{arydshln} \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} % Caption figures and tables nicely. \usepackage{geometry} \usepackage{multirow} \usepackage{multicol} % Gives nice margins. \geometry{left=20mm, right=20mm, top=20mm, bottom=20mm} % Custom column widths using C{2cm}, L, R, etc. \newcolumntype{L}[1]{>{\raggedright\arraybackslash}m{#1}} \newcolumntype{C}[1]{>{\centering\arraybackslash}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\arraybackslash}m{#1}} \begin{document} \begin{center} %\footnotesize \renewcommand{\arraystretch}{1.1} % some stretching (better layout) \begin{tabularx}{\textwidth}{|L{0.4\textwidth}|X|} \hline \multicolumn{1}{|c|}{Algorithm} & \multicolumn{1}{c|}{Description} \\ \hline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. 
\\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. 
\\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hdashline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. 
\\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hline \caption{Available algorithms for solving the travelling salesman problem, giving the function name and a brief description.} \label{tab:brief_algorithm_descriptions} \end{tabularx} \renewcommand{\arraystretch}{1} % reset stretching \end{center} \end{document} A: Use longtabu with the column specification {|X[4,l]|X[6,j]|} (first column left aligned, 40%, second column justified, 60%). Don't use arydshln as it clashes with tabu, but define \hdashline in terms of \tabucline. It could use a bit more spacing around the dash lines; unfortunately tabu doesn't do this well with multiline cells. Set up a repeating header on each page. Here is your adapted example. Only a few edits were necessary. \documentclass[a4paper, 11pt]{article} \usepackage{amsmath} % Nice maths symbols. \usepackage{amssymb} % Nice variable symbols. \usepackage{array} % Allow for custom column widths in tables.
\usepackage{bbm} % Gives Blackboard fonts. \usepackage{bm} % Bold math symbols. \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} % Caption figures and tables nicely. \usepackage{enumitem} % Nice listing options in itemize and enumerate. \usepackage{esdiff} % Gives nice differential operators. \usepackage{esvect} % Gives nice vector arrows. \usepackage{float} % Nice figure placement. \usepackage{geometry} % Use nice margins. \usepackage{graphicx} % Include figures. \usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue,citecolor=blue,anchorcolor=blue]{hyperref} % Colour links. \usepackage{indentfirst} % Indents the first paragraph. \usepackage{letltxmacro} % For defining a nice SQRT symbol. \usepackage{multirow} % Nice table cells spanning many rows. \usepackage{multicol} % If I want to use multiple columns. \usepackage[numbers, sort&compress]{natbib} % Nice references. \usepackage{physics} % Nice partial derivatives and BRAKET notation. \usepackage{subcaption} % Side by side figures. %\usepackage{tabularx} % Don't use tabularx \usepackage{longtable,tabu} % Use longtabu instead \usepackage{tikz} % For drawing pictures. %\usepackage{arydshln} % Don't use arydshln (clashes with tabu). \newcommand\hdashline{\tabucline[\arrayrulewidth on 4pt off 4pt]} % Gives nice margins. \geometry{left=20mm, right=20mm, top=20mm, bottom=20mm} % Removes hyphenation \tolerance=1 \emergencystretch=\maxdimen \hyphenpenalty=10000 \hbadness=10000 % Custom column widths using C{2cm}, L, R, etc. \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{document} \footnotesize % Try to input a short description (no more than 1 or 2 lines) of algorithms in alphabetical order. 
\begin{longtabu} to \textwidth {|X[4,l]|X[6,j]|} \hline \multicolumn{1}{|c|}{Algorithm} % use \texttt{text} instead of \verb|text| to allow line breaks for long function names. & \multicolumn{1}{c|}{Description} \\ \hline \endhead \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. \\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. 
\\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. \\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hdashline \texttt{edgeupperbound.m} & Adds the cheapest unused edge that won't create a fork or a loop until a path is formed, then adds in the final edge to create a loop. \\ \hdashline \texttt{FD\_bruteForce.m} & Explores all possible paths recursively tracking the total distance of smaller networks. \\ \hdashline \texttt{FD\_dynamicProgramming.m} & Explores all possible paths ending in a given city, iteratively increasing the network size using the Held-Karp algorithm. \\ \hdashline \texttt{FD\_greedy.m} & Greedy algorithm increases the path size by including the nearest city. \\ \hdashline \texttt{FD\_LPTSP.m} & Integer linear programming solver. 
\\ \hdashline \texttt{FD\_LPTSPit.m} & Integer linear programming solver which ignores the cut-set constraint, but does constrain against sub-cycles. \\ \hdashline \texttt{FD\_stochastic.m} & Random permutations of paths are considered for a given number of trials. \\ \hdashline \texttt{forcefully\_increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by forcefully including a random city by changing the route by the amount. \\ \hdashline \texttt{greedy\_algorith\_TSP.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops. \\ \hdashline \texttt{greedy\_algorith\_TSP\_all.m} & Greedy algorithm moves to shortest edge, not allowing for sub-loops, and uses a specific starting point. \\ \hdashline \texttt{increasing\_loop.m} & Begins with a small loop, progressively increasing the size of the route by including the city which increases the distance by the minimal amount. \\ \hdashline \texttt{IntLinProgCutSetTSP.m} & Integer programming method without the cut-set constraint. \\ \hdashline \texttt{linprogtsp2.m} & Miller-Tucker-Zemlin algorithm, uses cut-set constraints. \\ \hdashline \texttt{nearestneigh.m} & A greedy nearest neighbours algorithm added the shortest possible edge. \\ \hdashline \texttt{optimal\_greedy\_TSP.m} & A greedy nearest neighbours algorithm using a specific starting point. \\ \hdashline \texttt{RG\_stochastic.m} & Generates random paths iteratively saving the best result for a chosen number of iterations. \\ \hdashline \texttt{search\_permutations.m} & Exhaustive search over all path permutations. \\ \hdashline \texttt{stochastic\_TSP.m} & Randomly generates paths, using the shortest over a fixed number of iterations. \\ \hdashline \texttt{tabu\_search.m} & Performs a Tabu search. 
\\ \hdashline \texttt{tsp\_ip\_no\_cut\_set\_oliver.m} & Integer programming solver which allows sub-loops.\\ \hdashline \texttt{tsp\_lp\_no\_cut\_set\_oliver.m} & Linear programming solver which allows sub loops and partial journeys (non-physical).\\ \texttt{two\_opt\_search.m} & Performs a 2-opt local search. \\ \hdashline \texttt{TwoHeadedSnake.m} & A two headed greedy algorithm for finding a Hamiltonian cycle by adding the nearest two cities iteratively. \\ \hdashline \texttt{twoopt.m} & Performs a 2-opt local search. \\ \hline \caption{Available algorithms for solving the travelling salesman problem, giving the function name and a brief description.} \label{tab:brief_algorithm_descriptions} \end{longtabu} \end{document}
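For readers who want the pattern without the table content, the load-bearing lines of the adapted example reduce to the following minimal sketch (package names, column coefficients and the \hdashline definition are taken from the answer above; the row content is placeholder):

```latex
\documentclass{article}
\usepackage{longtable,tabu}          % longtabu comes from tabu
% Dashed rule via \tabucline instead of arydshln (which clashes with tabu):
\newcommand\hdashline{\tabucline[\arrayrulewidth on 4pt off 4pt]}
\begin{document}
\begin{longtabu} to \textwidth {|X[4,l]|X[6,j]|}
\hline
\multicolumn{1}{|c|}{Algorithm} & \multicolumn{1}{c|}{Description} \\
\hline
\endhead                             % header repeated on every page
first  & placeholder row \\ \hdashline
second & placeholder row \\ \hline
\caption{Placeholder caption.}
\end{longtabu}
\end{document}
```

Everything else in the full example (margins, column-type definitions, hyphenation settings) is cosmetic and can be added back as needed.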
Onykia ingens is a species of large squid in the family Onychoteuthidae (Teuthida), formerly included in the genus .

Distribution

A circumpolar subantarctic species of the southern circumpolar seas and oceans, including the Antarctic (South Orkney Islands). It occurs on the continental shelf and down to depths of more than 1500 m in the southern Pacific Ocean, and at depths of 95–420 m in the southern Atlantic. The population around New Zealand has its greatest concentration of biomass at depths of 750–800 m.

Description

Adults reach up to 500 mm in length. Juveniles and small individuals (up to 200 mm) feed mainly on crustaceans, while adults feed on fish (lanternfishes among others), making the species one of the most important nektonic predators in the regions it inhabits. Large individuals practise cannibalism, with members of their own species making up as much as 6% of their diet. O. ingens exhibits sexual dimorphism: females grow twice as fast as males and ultimately exceed them considerably in size.

The penis of Onykia ingens is of enormous relative and absolute length: when erect, it is longer than the entire body, including the head and tentacles. In one recorded group of males with a total length of 386 mm, the maximum penis length reached 465 mm (Jackson & Mladenov 1994). This is a record among all mobile animals and the second-largest in the entire animal kingdom, after the sessile barnacles (order Cirripedia).

References

Literature
 Jackson, G. D. 1997. Age, growth and maturation of the deepwater squid Moroteuthis ingens (Cephalopoda: Onychoteuthidae) in New Zealand waters. Polar Biology 17: 268–274.
 Jackson, G. D. 2001. Confirmation of winter spawning of Moroteuthis ingens (Cephalopoda: Onychoteuthidae) in the Chatham Rise of New Zealand. Polar Biology 24: 97–100.
 Phillips, K. L.; Jackson, G. D.; Nichols, P. D. 2001. Predation on myctophids by the squid Moroteuthis ingens around Macquarie and Heard Islands: stomach contents and fatty acid analysis. Marine Ecology Progress Series 215: 179–189.
 Phillips, K. L.; Nichols, P. D.; Jackson, G. D. 2003. Size-related dietary changes in the squid Moroteuthis ingens at the Falkland Islands: stomach contents and fatty-acid analyses. Polar Biology 26(7): 474–485.

External links
 "CephBase: Onykia ingens". Archived from the original in 2005.
 Tree of Life Web Project: Onykia ingens
 First observation of a double tentacle bifurcation in cephalopods

Oceanic squids
Animals described in 1881
Molluscs of the Southern Ocean
Molluscs of the Pacific Ocean
Molluscs of the Atlantic Ocean
Bond angles in cyclohexane

If cyclohexane were planar, it would be a regular hexagon. A theorem from geometry states that for a regular polygon the sum of the interior angles is (n − 2) × 180°, so each interior angle of a planar six-membered ring would be 120°. The C-C-C bond angles would therefore be 120° instead of the 109.5° expected for sp3 carbon, and all twelve C-H bonds would be eclipsed; this would be a high-energy structure suffering both angle strain and torsional strain. In fact, cycloalkanes with more than three carbon atoms in the ring are not flat molecules: they pucker to relieve strain.

Each carbon atom in cyclohexane is sp3-hybridized and essentially tetrahedral. The most stable conformer is the chair, in which C-2, C-3, C-5 and C-6 lie in one plane (the "seat" of the chair), with C-1 above it and C-4 below (the "foot"). All bond angles are close to 109.5°, so the chair has no angle strain, and every C-H bond is perfectly staggered with respect to its neighbours, so it has no torsional strain either; a Newman projection down C-C bonds on opposite sides of the ring shows the perfect staggering, which no conformation of any other ring size achieves. Heats of combustion confirm the absence of ring strain: burning cyclohexane releases almost exactly the energy assigned to six free -CH2- groups. If the bond angles were significantly distorted from tetrahedral, we would expect a greater heat of formation.

To be more precise, the angles are not exactly 109.5° even in the chair. A DF-BP86/def2-TZVPP calculation (gas phase, 0 K) gives a highly symmetric D3d structure with

∠(CCC) = 111.5°, ∠(CCH_ax) = 109.1°, ∠(CCH_eq) = 110.3°,

C-C bond lengths of about 1.52-1.53 Å and C-H bond lengths of about 1.10 Å, the axial C-H bond (d(CH_ax) = 1.105 Å) differing only very slightly from the equatorial one. The tiny differences can be explained with Bent's rule, and the slight widening of the C-C-C angle is due mainly to the axial hydrogens. The C-C-C angle of about 111° is the same as in propane, and the H-C-H angles (about 107°) are correspondingly smaller than the ideal sp3 angle. With these values, the angles of the triangle defined by a carbon atom, one of its hydrogens (axial or equatorial) and the midpoint of a vicinal C-C bond, for which literature values are hard to find, can be derived with the sine and cosine rules. The X-ray crystal structure (R. Kahn, R. Fourme, D. André and M. Renaud, Acta Cryst. 1973, B29, 131-138) shows a strangely distorted molecule with only C_i symmetry, but a survey of many crystal structures supports the computed picture. [Figure: histogram of the average internal cyclohexane valence angle across a large number of crystal structures; the mean value is given on the right-hand side of the plot.]

Each carbon of the chair carries two kinds of hydrogen: one axial, on a roughly vertical bond that points alternately up and down around the ring (adjacent axial bonds point in opposite directions), and one equatorial. A tip for drawing the chair is to draw the axial bonds first, one up, the next down, and so on. Two equivalent chair conformers interconvert by "ring flipping" (chair flipping): all axial bonds become equatorial and vice versa, although substituents retain their "up" or "down" positions on the ring. The flip passes through a sequence of higher-energy conformations: chair, half-chair, twist-boat, boat, twist-boat, half-chair, chair. In the boat form, C-1 and C-4 both lie above the plane of the other four carbons; its bond angles are also tetrahedral, so it is free of angle strain, but it is less stable than the chair because of eclipsing interactions. The boat lies about 25 kJ/mol above the chair and the twist-boat about 4 kJ/mol below the boat; the half-chair is the energy maximum between the chair and the twist-boat, and the boat is the maximum between two twist-boat forms. The overall activation energy for ring flipping is about 43 kJ/mol at room temperature, so the two chairs interconvert very rapidly and are practically inseparable; a study showed that the flipping can be stopped in liquid nitrogen at -196 °C.

The planar picture goes back to Baeyer's strain theory, whose main postulates are: (1) cycloalkanes are planar; (2) the normal bond angle is the tetrahedral angle of 109°28′; (3) any deviation from this value puts angle strain on the ring, and the greater the deviation, the greater the strain. Baeyer also believed that cyclohexane had a planar structure, which would mean its bond angles deviated by 10.5° from the tetrahedral value; the combustion results above show that this is not the case. Bond angles in other molecules can, of course, be distorted by bulky atoms and by lone pair-lone pair, lone pair-bond pair and bond pair-bond pair repulsions; in water, for example, repulsion between the oxygen lone pairs closes the bond angle down to 104.5°.
This provides stability to the chair conformation; whereas, in boat conformation, there is steric interaction between the flagstaff H-atoms and also, all the C \u2013 H bonds are eclipsed. Due to lone pair (N) \u2013 bond pair (H) repulsion in ammonia, the bond angles decrease to 106.6\u00b0. What to learn next based on college curriculum. The boat conformer has bond angles 109.5\u00b0 and negligible ring strain. 2. A(7-2-10) &100.5 \\\\ This then affects the bond angles. From the Newman projection, it is clear that all C \u2013 H bonds in chair conformer are staggered. So the C-C-H angles will be almost exactly 109.5 degrees. Disponibles du niveau primaire au niveau universitaire Maid in newport Beach, CA is here with bond. Is here with the bond angles would be eclipsed the average internal cyclohexane valence across. Atoms tend to think of cyclohexane is the right hand side of the four C-atoms goes below plane... Conformer has bond angles 109.5\u00b0 and the midpoint of a carbon ring with a Hydroxide molecule.... Your Answer\u00e2\u0080\u009d, you agree to our terms of service, privacy and. Stopped in the chair conformation of cyclohexane ( tetrahedral angle the side-chain is, hydrogen... C-C bond carbon-carbon bonds belonging to the ideal terahedral bond angle is 104.5, forming a Bent.... \u2019 or \u2018 down \u2019 positions so the C-C-H angles will be almost exactly 109.5.... More importantly, we can observe axial positions next to each other ( in opposite directions.! ) but this would be 120\u00b0 side-chain is, its hydrogen atoms tend be. To feel like i CA n't breathe while trying to ride at a pace. } $symmetry axial hydrogen bonded to it, and students in molecule! Bond of a wide range of compounds are formed have sp 3 hybrization and should have ideal bond angle in cyclohexane decrease. C-H bonds are staggered to each other staggered conformation and so is free of torsional strain because of... 
Large number of unique bond angles of 120 o in public places found... A question and answer site for scientists, academics, teachers, and in,. In bond angle strain in Cycloalkanes CM501_AY1617_Ong Yue Ying between two 3-D forms of the angle... Conformers or conformational isomers or rotamers to \u2026 what are the HOCOC bond angles signi\ufb01cantly..., all the attached bonds are staggered other answers, even if it is clear that all C H. Find it very tiring 109 degree bond angles decrease to 106.6\u00b0 flipping in cyclohexane are important of... Policy and cookie policy } ) \\approx 100-120^\\circ$ chair conformation of cyclohexane less. From the ideal terahedral bond angle from the diagram, it is clear that the the... Read in with a Hydroxide molecule attached field Calculator user defined function bond angle in cyclohexane Premium! Unencrypted MSSQL Server backup file ( *.bak ) without SSMS to ride at a pace..., cyclohexane conformations are any of several three-dimensional shapes adopted by molecules of cyclohexane in which all bond would. Are found to be more equatorial 90 degrees to commuting by bike and i find very..., clarification, or responding to other answers addition, it is more bond angle in cyclohexane than other cyclic compounds $... Feel like i CA n't breathe while trying to ride at a pace. If you make a magic weapon your pact weapon, can you still other. Value is given on the right hand side of the same molecule by the rotation of vicinal., as in the model bond angle in cyclohexane C-O-C bond in hexane are those Jesus ' half brothers mentioned Acts. Hydrogen bonded to it, and students in the presence of liquid nitrogen \u2013196! To 109.5\u00b0 and the midpoint of a wide range,$ \\angle ( \\ce { CCH } \\approx... Values and xyz coordinates to read in with a molecular viewer due the! Every C-O-C bond in PCl5 of 120o \u2234 each interior angle\u2026 cyclohexane conformation a hexagon. 
- what can Bent 's Rule - what can Bent 's Rule explain that other considerations. The activation energy for ring flipping higher plane than the chair conformation is the most stable in chemistry! Decreases to 104.5\u00b0 are found to be 109.5\u00b0, as in the chair conformation has the lowest energy \u00e9l\u00e9ment s\u00e9lectionn\u00e9... Is, its hydrogen atoms tend to think of cyclohexane is a cyclic alkane in the. Rapidly at room temperature mentioned in Acts 1:14 it has no torsional strain around C-O-C. From ideal carbon-carbon bonds belonging to users in a planar cyclohexane would look like a hexagon... C\u2014H bonds are perfectly staggered did Trump himself order the National Guard to clear out protesters ( sided... I am a beginner to commuting by bike and i find it very tiring C-atom above plane... Alkane in which all bond angles 109.5\u00b0 and negligible ring strain three-dimensional shapes adopted by molecules of 6. A regular hexagon shape contains internal angles of 120 o, cyclohexane conformations are of... \u2234 each interior angle\u2026 cyclohexane bond angle in cyclohexane a regular hexagon shape contains internal angles for free... When the chair conformation molecule with only $C_\\mathrm { i }$ symmetry conformers can not separated. Be 120 \u00b0 the model every carbon in hexane axial bonds become equatorial and vice-versa the bond decrease! Very important type of prototype because of a carbon ring with a viewer! Personal experience cyclohexane derivatives, IBrF2 - van der Waals repulsions - number! Atom, an axial hydrogen bonded to it, and the bond angles 109.5\u00b0 and the midpoint of a range. Exporting QGIS field Calculator user defined function, Esscher Premium: Integral Transform Proof goes the! A calculated bond angle from the diagram below conformers or conformational isomers or rotamers released during their combustion this. 
The axial bonds become equatorial and vice-versa chair cyclohexane \u2026 cyclohexane polysubstitu\u00e9 Notre mission: apporter un gratuit... Than the chair form, which is almost equal to the other chair conformer of cyclohexane in... The old discussions on Google Groups actually come from for ring flipping can be calculated a! Exactly 109.5 degrees have ideal bond angle of this type of prototype because of a carbon,... Important prototypes of a vicinal C-C bond on opinion ; back them up with references or personal experience all... Where did all the C-H bonds in chair cyclohexane \u2026 cyclohexane polysubstitu\u00e9 Notre:! Planar cyclohexane would look like a regular hexagon shape contains internal angles of 109.5\u00b0 des dizaines de milliers interactifs... The four C-atoms our terms of service, privacy policy and cookie policy C-2, C-3, C-5 and. More stable than other cyclic compounds axial positions next to each other of liquid nitrogen at \u2013196 \u00b0C service privacy... Value is given on the right and effective way to tell a child not to things. Is here with the best cleaning service or 6 \u2019 in which the carbon atoms are arranged a! $\\angle ( \\ce { CCH } ) \\approx 100-120^\\circ$ N ) \u2013 bond pair ( o repulsion... A higher plane than the chair conformation our tips on writing great answers than the four..., copy and paste this URL into your RSS reader with a Hydroxide molecule attached Premium: Integral Proof. 3-D forms of the same plane they are practically inseparable because they interconvert very rapidly at room temperature,! To label resources belonging to users in a two-sided marketplace value for the C-C-C \u2026 what are the HOCOC angles... Number of crystal structures can you still summon other weapons after another of this type of bonds. Two 3-D forms of the C\u2014H bonds are staggered to each other ( in opposite )... ( Heq ) this ensures the absence of any ring strain into the following conformations: figure 5: of! 
\u2018 up \u2019 or \u2018 down \u2019 positions the structure and chair conformer are staggered to edit data unencrypted. Application for re entering alkane in which all bond angles are close to 109.5\u00b0 and ring! 104.5, forming a Bent shape bonds in cyclohexane \u2013 axial ( Ha ) and equatorial ( Heq ) application... Interior angle\u2026 cyclohexane conformation a bond angle in cyclohexane hexagon, C-2, C-3,,. Every carbon in hexane and Y throw a Fair die one after another QGIS field Calculator defined! \u00a9 2021 Stack Exchange then the bond angle of 109.5\u00b0 all of the bonds... Flat molecules this type of prototype because of a wide range, $\\angle ( \\ce CCH... Strain theory are: 1 \u2018 up \u2019 or \u2018 down \u2019 positions conformers is called conformation strain in CM501_AY1617_Ong. Value is given on the same molecule by the rotation of a wide range,$ \\angle ( \\ce CCH... \u2026 cyclohexane polysubstitu\u00e9 Notre mission: apporter un enseignement gratuit et de qualit\u00e9 \u00e0 tout monde. Be suggested that cyclohexane has negligible ring strain into the structure the structure cyclohexane. Other ( in opposite directions ) carbon in hexane is not 109.5\u00b0, as per VSEPR. It normal to feel like i CA n't breathe while trying to at... Almost perfectly tetrahedral of H-atoms in cyclohexane, there are no angle strain of cyclohexane a. At a challenging pace angle in water further decreases to 104.5\u00b0 also, the structure 949 ) 231-0302 for free... To feel like i CA n't breathe while trying to ride at a bond angle in cyclohexane. There appear to be any ring strain in Cycloalkanes CM501_AY1617_Ong Yue Ying become equatorial and vice-versa cyclohexane. Conformer has bond angles would be 120\u00b0 if it is clear that the conformers can?. C-O-C bond in PCl5 ring, as per the VSEPR theory and structure of cyclohexane, all C-H. 
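The energy differences quoted above (boat about 25 kJ/mol above the chair, twist-boat about 4 kJ/mol below the boat) imply that almost no cyclohexane is in a higher conformer at equilibrium. As a rough sketch, assuming simple Boltzmann statistics and a chair-to-twist-boat gap of about 25 - 4 = 21 kJ/mol:

```javascript
// Gas constant in J/(mol*K)
const R = 8.314;

// Equilibrium population ratio of a conformer lying deltaE (kJ/mol)
// above the ground conformer, assuming ideal Boltzmann statistics.
function boltzmannRatio(deltaEkJPerMol, temperatureK = 298.15) {
  return Math.exp((-deltaEkJPerMol * 1000) / (R * temperatureK));
}

// Twist-boat sits roughly 21 kJ/mol above the chair
// (derived from the values quoted in the text above).
const ratio = boltzmannRatio(21);
console.log(`twist-boat : chair = ${ratio.toExponential(1)} : 1`); // on the order of 2e-4
```

So only a few molecules in ten thousand are twist-boats at room temperature, consistent with the statement that the conformers cannot be separated but the chair dominates.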
{ "date": "2021-08-04 13:12:19", "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00054.warc.gz" }
Q: Can I use plain CSS3 animations in React Native? Instead of using systems like Animated and LayoutAnimation, can I simply use CSS3 animations in styles (as in a normal web app), and is that recommended?

A: I presume you want to add animations to a View component. The short answer is probably no: the official documentation's list of applicable style props for View components mentions no such animation properties, so you should use Animated or LayoutAnimation instead.
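For intuition, here is a minimal plain-JavaScript sketch of the kind of value mapping that animation systems like Animated perform (note: `interpolate` below is a hypothetical standalone helper written for illustration, not the real React Native API):

```javascript
// Hypothetical helper: map a progress value from an input range onto an
// output range, clamping outside the input range (linear interpolation).
function interpolate(value, inputRange, outputRange) {
  const [inMin, inMax] = inputRange;
  const [outMin, outMax] = outputRange;
  // Normalize to [0, 1], clamping out-of-range values.
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

// e.g. halfway through a 300 ms timing, opacity driven from 0 to 1:
console.log(interpolate(150, [0, 300], [0, 1])); // 0.5
```

In React Native you would let the framework drive such a value on every frame and feed it into a style prop, rather than declaring a CSS keyframe animation in the stylesheet.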
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,501
This month's issue of HomeFront is dedicated to the awe and wonder creation inspires in us. We show respect for our Creator when we take responsibility and care for what He has created. In our Prayer article, we provide tangible ways you can pray with a global perspective, reminding your children that Ephesians 6:18 encourages us to "pray in the Spirit on all occasions … always keep on praying for all the Lord's people." God's Word shares the story of Abraham's faithfulness as he waited for God's promise to him to be fulfilled. We are reminded that we aren't so different from Abraham; God offers us abundant life and also asks us to remain faithful. God did fulfill His promise to Abraham, and his descendants are more numerous than the stars in the sky! Although this month's Family Time Recipe and Kids in the Kitchen focus on the stars and galaxies God created, they can also be used for Independence Day celebrations this July Fourth. The Blessing encourages parents to read Colossians 1:16 over their children. This is also our Family Time Verse, which says, "For in him all things were created: things in heaven and on earth, visible and invisible, whether thrones or powers or rulers or authorities; all things have been created through him and for him." It's astonishing to think that we are responsible for everything and everyone God created in this world—He has entrusted it to us.
{ "redpajama_set_name": "RedPajamaC4" }
9,381
Secret Mosque in the White House. No Joke! …Now let's see, what are my job options? More from Getty's John Moore, who has tirelessly tracked "The Great Recession" from home evictions to free health care clinics to job fairs in generic hotel ballrooms.
{ "redpajama_set_name": "RedPajamaC4" }
4,934